Unpacking Core Web Vitals - JSJ 620

JavaScript Jabber

A weekly discussion by top-end JavaScript developers on the technology and skills needed to level up on your JavaScript journey.

Unpacking Core Web Vitals - JSJ 620

Published: Feb 13, 2024
Duration: 1 Hour, 19 Minutes

Show Notes

Harry Roberts is a web performance consultant. He joins the panel to dig into the critical realm of web performance and JavaScript. Together they get into the details of site speed measurement and the evolving landscape of web performance metrics, and the conversation sheds light on the impact of Core Web Vitals on businesses and the challenges they pose. Join them as they navigate the intricacies of web development, explore the nuances of user experience, and unravel the complexities of performance optimization.


Sponsors


Socials


Picks

Transcript

 
AJ_O’NEAL: Well, hello, hello everybody. And welcome back to another exciting episode of JavaScript Jabber. And this day, uh, well, I'm supposed to introduce our panel first and then our guest; Steve has corrected me. So we've got me, AJ, I'm your hostess with the mostest or whatever. And then we got Dan. 

DAN_SHAPPIR: Hey. 

AJ_O’NEAL: And we also have Harry Roberts, also known as the infamous CSS Wizardry. 

DAN_SHAPPIR: Nothing to do with CSS. 

AJ_O’NEAL: How are you doing? 

HARRY_ROBERTS: Well, hello everyone. Yeah, I don't write any CSS anymore. Also, I picked that domain when I was 17 years old. So that's a cautionary tale for everyone watching. Don't let your children pick their own domain names. 

AJ_O’NEAL: Yeah, parents need to really be on the ball on that one, especially with the credit cards being like 12 year olds have credit cards now. I don't know how that happened, but that's the thing. 

HARRY_ROBERTS: I haven't seen that. That's terrifying. 

AJ_O’NEAL: Apple has it built into the iPhone. It's like, the credit card... you can create a sub credit card. Anyway. But today's topic is... what are we, what are we talking about? 

HARRY_ROBERTS: Child debt. I think today, if I'm in the right place, we'll be talking about web performance. 

DAN_SHAPPIR: Yeah. What? 

AJ_O’NEAL: Not the bottle. 

HARRY_ROBERTS: Well, we can have a drink afterwards if we want. 

DAN_SHAPPIR: Yeah, no, I'm jealous. You know, it looks like a really nice collection. 

AJ_O’NEAL: Anyway, so let's go ahead and jump right in. And Harry, why don't you tell us why you're famous and your three most recent controversial tweets. 

Hey folks, this is Charles Max Wood. I've been talking to a whole bunch of people that want to update their resume and find a better job. And I figure, well, why not just share my resume? So if you go to topendevs.com slash resume, enter your name and email address, then you'll get a copy of the resume that I use, that I've used through freelancing, through most of my career, as I've kind of refined it and tweaked it to get me the jobs that I want. Like I said, topendevs.com slash resume will get you that. And you can just kind of use the formatting. It comes in Word and Pages formats, and you can just fill it in from there. 

HARRY_ROBERTS: Well, I don't think I'm famous. I certainly wouldn't say so. I mean, if you work in the web performance space, you might have come across my website before. But most controversial tweets? I don't know. I don't really tweet controversial things anymore. Probably the most controversial ever was when Imgur, I-M-G-U-R dot com, switched to a single-page, fully client-rendered app. I don't know if you've noticed, if you look at an X profile now, a Twitter profile, when you're logged out, it doesn't show tweets chronologically, it shows the most liked first. I ended up seeing my own profile logged out, and it reminded me of a tweet from quite a while ago. I really went in on Imgur, because they switched to a fully client-rendered SPA. Honestly, you're probably right, I've never heard it said out loud; I've never tried to say it out loud until just now. But they switched one simple image tag out for about 2.7 meg of JavaScript. And I got flamed for that one. People were like, you don't understand the business case of Imgur, they need to serve ads. It's like, they can serve ads, but they need to serve the image first, otherwise people will stop turning up. So I got flamed for that one. I got flamed for saying people shouldn't use emojis in professional contexts, or GIFs in presentations. One guy wanted to fight me for that one. But I'm not a very controversial person. It doesn't really suit my style and certainly doesn't suit business. 

DAN_SHAPPIR: About famous, though: I said it before we started recording and I promised to say it again, that in the particular space of web performance, you, Harry, are one of my tech heroes, and I've been following your content for a long time, you know, your talks, your posts, and I've learned a lot. So. Yeah. 

HARRY_ROBERTS: Thank you. That really means a lot, especially because as we were saying just before we started streaming, you and I met like years ago. I reckon we must have met six to eight years ago and stayed in touch ever since, but never met again in person, which is a real shame. But that really means a lot because likewise, I keep an eye on what you're getting up to. And yeah, that means a lot. Thank you. Yeah. 

DAN_SHAPPIR: So I wanted to have you on the show because there are a lot of interesting things going on in the web performance space. And in particular, we are a few months away, literally actually one month away, I think, now, from Google switching up Core Web Vitals. I mean, they've kind of done it before, when they changed the meaning of CLS, Cumulative Layout Shift, and they have modified LCP over time a little bit in certain ways. But this is by far the biggest change that they're making. So maybe we can start with that. 

HARRY_ROBERTS: Yeah, I mean, swapping first input delay out and replacing it with INP, I think, is drastic. It's a huge change, and I think a lot of businesses are going to struggle. Speaking purely anecdotally, I have clients who've got 22 million good URLs in Search Console, and on, what is it, the 12th of March, so it's like one month and two weeks or whatever it is, these companies are going to have 23 million bad URLs in Search Console. The difference is stark. And one thing I do kind of worry about, and it's not Google's fault at all: people have known this update was coming for a long, long time, and I've not had a single client who's trying to get ahead of the curve here. I worked with a client only last week who were only just starting to really worry about this. I don't think a lot of companies realize just how big the shift is going to be. I think it needed to happen; I think INP is a much more suitable replacement for first input delay. But I do think it might have helped to have something in the stopgap period, because the difference between the two, as you well know, is just enormous. And trying to go from FID to INP in one hop is going to be a real struggle for a lot of companies.  

DAN_SHAPPIR: So maybe it's worthwhile to set the stage a little bit on that, to maybe talk a little bit about what Core Web Vitals are, what FID was or is, but soon will be deceased, and what INP is that replaces it, to give a little bit of context. 

HARRY_ROBERTS: Happily, yeah. So without slides or visual aids, I guess the best way to talk about Core Web Vitals is: site speed has been a thing for well over a decade now, a long, long time. But historically we had to lean on archaic and kind of esoteric events, like the browser's load event. We talk about load times, but a load time is invisible to a visitor; they don't know what the load event is. You can have a really fast-feeling page with a terrible load time, and the load time would be completely irrelevant to what the customer experienced. So Google, to their credit, have very thoroughly researched and tried to come up with some one-size-fits-all metrics, metrics that can apply to every site in the world and try to quantify the user experience of how fast that site felt. The nearest equivalent to a load time would be Largest Contentful Paint, which is simply: when was the biggest bit of content presented on screen? We've got Cumulative Layout Shift, which you already mentioned, which is: did the page move around as it was loading, or as you were interacting with it? That's not a performance metric at all. It doesn't measure speed one bit, but it does measure the user experience; it measures, was this page frustrating to load? And finally, we had first input delay. In fact, I've got a really good analogy for first input delay and INP, and purely coincidentally, I'm sat in the right place for it. First input delay is: when someone clicks a button for the first time on that page, how long did it take before the web application could start responding? It only measures the first click, and it only measures how long before the application could start responding. So this misses out... I must have set off some kind of emoji there, I don't know what that was. So anyway, that misses two crucial things. The first thing it misses is it only measures the first click on a page. Now, if the first click happens to be really slow, but the next 200 clicks are really fast, you only get measured on the really bad one. Conversely, if the first click is really fast and all the next ones are slow, it looks like your page is really quick. The problem with that is it's really easy to game. You could wait for the user's first click before booting all your JavaScript. You could say, well, wait for the first click, and that's just going to be a click on anything, and it'll respond really quickly; then load all your JavaScript. You know that every subsequent click will be slow, but you know it won't get measured. So that's the first problem with first input delay. 
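
To make that gaming pattern concrete, here is a minimal sketch of the anti-pattern Harry describes: deferring the entire app bundle until the first interaction, so FID only ever times a trivially cheap handler. The module path and boot function are hypothetical.

```js
// Sketch of the FID-gaming pattern described above (don't do this).
// FID only times the first input's delay, so if the page defers all of
// its JavaScript until that first click, the first handler is trivially
// cheap and FID looks great, while every later click pays the real cost.
document.addEventListener(
  'pointerdown',
  () => {
    // Hypothetical app bootstrap: load and run the heavy bundle only now.
    import('./app.js').then((app) => app.boot());
  },
  { once: true }
);
```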

AJ_O’NEAL: That's what Qwik is doing, although, 

DAN_SHAPPIR: Yeah, let's, let's, let's put that... you know, Qwik is an interesting topic, but let's push it off a little bit. I did want to say that in most cases you really didn't need to even game FID that much, because you just got a good FID score almost regardless of what... oh yeah, by definition. It's like that measurement kind of won, in the sense that, you know, if anybody was a laggard... I worked at Wix, and initially, when we started, we were even bad at FID. And then we fixed it, and now Wix has good FID, and that's that. There's nothing more to do in that context. So if you look at most web pages, they just have really good FID. And it's like, if you have an exam in school and everybody gets an A or an A plus, then you know that test is kind of meaningless. It doesn't really provide any information. So we needed to replace FID with something else. 

HARRY_ROBERTS: Yeah, exactly. So that was the kind of core problem: something like 99% of websites passed FID, despite the fact that we all know JavaScript is a huge runtime bottleneck. So how could 90-plus percent of websites be passing that metric? The second problem with FID is that it didn't measure how long it actually took to do the work. So the way I explain this to non-technical stakeholders, very non-technical stakeholders, is: imagine you went to a bar and you wanted to order a drink. Now, if you get to the bar and the bar's really busy, there are hundreds of people and only one bartender, it's going to take you a long time before you can place your order for your drink. And that's first input delay. It measures your first order for your first drink, and it only measures how long it took you to order the drink. So it doesn't matter how quick the bartender is or how long it took them to get the drink to you; that doesn't get counted. Whereas with Interaction to Next Paint, the analogy would be: you might go to a really, really busy bar, and it might take you a long time to order that drink because there are so many other customers there. That's your input delay. But if you just order a glass of water or a bottle of beer, the bartender can produce that drink very, very quickly; if you order a really complicated cocktail, the processing time of that drink will be increased. If you just stay at the bar and they slide the drink over to you, that's your presentation delay, and they can get the drink to you very quickly. However, if you say, oh, I'm going to go and sit over in the corner, and the bartender has to now bring the drink over to you, that's a much longer presentation delay. So what INP does is it doesn't just measure how long it takes to order your drink; it measures how long it took to make your drink and how long it took for the bartender to put the drink in your hand. And it does that for every drink you order, not just the first drink. So I feel like first input delay was really easy to game for those two reasons: it was only the first click, and it only measured the delay to start the work. INP is much harder to pass, because it measures between the 98th and 100th percentiles, so more or less your very worst clicks. And it's not even just clicks, it's interactions, so key presses and stuff like that. And then how long it took to process that, and how long it took to display that to you. It measures a lot, lot more work, and that's why most sites are going to go from green to red overnight. 
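
Both metrics can be observed in the field with standard browser APIs. A minimal sketch using the first-input and Event Timing entry types; real-world measurement (Google's web-vitals library, for instance) does more careful percentile bookkeeping than this:

```js
// FID: delay between the first input and when its handler could start.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('FID:', entry.processingStart - entry.startTime, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });

// INP-style data: input delay + processing + presentation, for every
// interaction (not just the first), so slow later clicks show up too.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.interactionId) {
      console.log(entry.name, 'duration:', entry.duration, 'ms');
    }
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });
```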

DAN_SHAPPIR: A few more thoughts, both about FID and INP. So from the get-go, I had issues, let's say, with FID. And I was very upfront about it with the great, excellent people at Google, Annie Sullivan, Michael Weiswald, he was there, and others, Patrick. And the problem is that in many cases, if your website was especially slow in loading, you actually got a better FID. Because if your JavaScript took a long time to download, so long, in fact, that the user interacted with the page before the JavaScript even loaded, and if, on the other hand, the page had, let's say, SSR or SSG, so the content was actually already visible, then you would click a button that would literally do nothing, because the JavaScript wasn't even wired up yet. And from FID's perspective, it would be instantly responsive. The response would be effectively nothing, but your FID would be excellent. And that was one of the issues that we had, again, at Wix: initially, when we started to improve the delivery speed of the JavaScript, we actually hit this kind of valley where it seemed to be getting worse before it started getting better, simply because the pages were getting interactive sooner, and the problem I just described was less likely to happen. But, 

HARRY_ROBERTS: yeah, 

DAN_SHAPPIR: go for it, sorry. 

HARRY_ROBERTS: I've run into the exact same thing on client sites. Clients very aggressively deferring their JavaScript would end up passing FID, because even clicking a native browser input like a button, you don't have to wait for the JavaScript event handlers to be attached; browsers have their own event handlers attached to buttons already. So exactly as you say, if someone interacts with an interactive element before the JavaScript is booted, it will still capture a valid input, even if that input doesn't do anything. Because here's the other thing about first input delay and INP... oh, sorry, just INP: even if there isn't a UI update afterwards, even if there isn't a next paint, you still capture a score for every click. So what that means is, in the case of FID, the button doesn't have to do anything. It doesn't have to do anything at all. FID only captures the fact that you clicked a button, and that if it needed to do something, we could have started it immediately. But exactly as you say, a big risk was that people really, really aggressively deferring and late-loading their JavaScript could sort of skirt around the system, purely because the chance of someone clicking before the JavaScript had run was actually quite high. 

DAN_SHAPPIR: Now, you talked about the fact that you're seeing customers, and maybe it's worthwhile to mention that you work as a freelance consultant, and you work with organizations looking to improve the performance of their websites. So you're obviously exposed to a lot of poorly performing websites. 

HARRY_ROBERTS: A lot. 

DAN_SHAPPIR: Yeah, you mentioned that you're seeing scenarios where they currently have really good FID, and even LCP and CLS, and consequently are passing Core Web Vitals on most of their pages, for example, in the Google Search Console, but that once they switch over to INP, that flips. It's interesting, because I brought up this concern with the people from Google a while back. I don't remember if it was with Annie or with Rick or with Michael, one of them. And their response was that according to their tests, what they saw was that most organizations that have poor INP, or will have poor INP, also already have poor LCP. So their argument was that the overall ratio of passing websites won't change that much, because, like you said, the interactivity score will drop, but it will have a lesser impact on the overall cumulative score of all three Core Web Vitals. Would you agree with that or not, based on your personal experience? 

HARRY_ROBERTS: Yeah, I haven't seen anything that necessarily corroborates that. And do you know what? I really don't have the access to the data they do. So I look at a small enough number of data sets that no two look alike, and every single site seems to have unrelated issues. So, you know, I've got a client at the moment who is doing really well on LCP, really badly on CLS, and INP has been great forever. I don't typically tend to see patterns, but I'm looking at a really small scale. I'm not talking HTTP Archive; I'm not talking Google's CrUX database at large. Among my own clients, I can't really decipher or see any patterns that suggest it all falls into the same kind of bucket. 

DAN_SHAPPIR: I have to say that kind of surprises me to an extent, because I would expect you to be seeing the same issues over and over and over again. 

HARRY_ROBERTS: Yeah, I mean, really, it's really surprising. Some issues are really common, but I think the performance industry, people who care about performance even a little bit, are wise to the fact that some things will always cause you problems. Client-side rendering is a terrible idea if you want to be fast. Certain things are just starting to get filtered out, and the same mistakes that I would see five years ago are far less common. What I'm seeing now on a per-project basis is that every client has pretty significantly different problems to the last. 

DAN_SHAPPIR: So, you know, that kind of moves us over to the next part of the conversation, I guess, because when we were talking about, you know, what we should discuss on this podcast, I thought that an excellent topic would be to talk about, you know, what you are encountering in various customer sites, obviously, without naming names, unless you really want to. 

HARRY_ROBERTS: But you know. 

DAN_SHAPPIR: Yeah, basically things like on the practical side, you know, things that you tend to see and mistakes that people tend to make, but also amusing stories if you can share them. 

HARRY_ROBERTS: Yeah, well, like I said before, five years ago everyone had gone full client-side React, and that's always going to be slow. And everyone now knows that server-side rendering isn't perfect, but it's certainly faster. So a lot of the age-old problems seem to be getting filtered out. I think, I don't know how, but it was a tweet of mine that was responsible for a Lighthouse check for, are you lazy loading your LCP image? Because that was so commonplace, even before WordPress plugins enabled that by accident. All these really common problems... I think even non-performance engineers are now so aware of web performance that basic mistakes, certainly on the projects I'm seeing, don't really get made so much anymore. I keep running into things where I, 

DAN_SHAPPIR: if I can interject for one thing. You mentioned client-side rendered React, you know, basically websites or projects that initially started with Create React App, which essentially by definition will have poor performance as measured by Core Web Vitals. And that's not surprising, because obviously, you know, if you need to download a ton of JavaScript, then run that JavaScript, make several Ajax requests, run more JavaScript, and only then actually start showing stuff, then obviously that's going to be much slower than a website that just downloads content from the get-go. But the interesting thing is that when speaking with various, let's call them larger companies, quote unquote web applications, a lot of them still work this way. If the web app is not SEO, if it's something that's behind a login screen or whatever, then from their perspective, it's okay to work this way. So I'm kind of seeing the market splitting in this regard. Like some sites care about load performance, and a lot of sites really don't. 

HARRY_ROBERTS: Yeah, and I do agree with that sentiment. So for example, my accountancy software is all web-based, and it's slow. But if I log into my accountancy software, I have to do it. I know it's going to take me at least 30 minutes anyway, so I don't mind it being slow, as long as once it's there, it's usable. And the sensitivity there is a lot less than if I just want to quickly buy some groceries on the way home, something really quick: I just want to buy something quickly and leave again, so there's a lot more friction, a lot more sensitivity. If the task itself should be a two-minute task, you don't want to spend 10% of that time just waiting. Whereas if the task is a 30-minute task, you don't mind the upfront cost. I don't have any research to prove this, but I do sympathize with the use case of a web app. I'd be happy to wait for Photoshop in the browser. I wouldn't be happy to wait the same amount of time to work out what time my next train is. There is no way of quantifying that sensitivity, but I do believe it exists. And I do think there is no clear line between what's a web app and what's a website. One thing I do think is, if you're a company who can name five different pages on your website, oh, we've got the homepage, a product details page, a product listing page, a search page... if you've just listed four different pages, you're not a single-page app. So don't build a single-page app. Build a multi-page app. If you can name pages, you aren't a single-page app. Trello is a single-page app. Google Sheets is a single-page app. Your e-comm site is not a single-page app, because you've got your homepage, your category page, your product listing page, your product details page, your FAQs page. They're all different pages. So don't build a single-page app. It's a mess. 

AJ_O’NEAL: Twitter is, I mean, Twitter and Facebook, right? Would you say that the feed is one app? Because it seems like the in-between is, don't try to build a single-page app where you literally build everything that you do in a single application. I agree, that's beyond silly. Except that, you know, that's what's been done the last 10 years. And there's no sense, if you have a lot of interactive stuff, in doing server reloads for every single interaction. But like, there is some sort of delineation: like, your feed is a single-page app, your account settings is a single-page app, but your feed and your account settings are not the same. Is it? I mean, 

DAN_SHAPPIR: So, so if I can touch on that. We've had Alex Russell on the podcast, actually twice, and I think we had him almost a year ago the last time. And the distinction that he kind of makes is based on the amount of interactions. Like, are you clicking a few times, maybe a few dozen times, or are you going to be clicking hundreds of times? I don't remember exactly where Alex draws the line. I know that he's done a bunch of research on this topic. You can check his website, which is, slightlylate, I think? No, that's actually his handle. I always forget. 

HARRY_ROBERTS: Infrequently noted. 

DAN_SHAPPIR: Yeah, infrequently noted. So you can check his, yeah, he's funny like that. So anyway, so he's done research on this topic and that's the delineation that he makes and I kind of concur with that. 

HARRY_ROBERTS: So I don't know where the cutoff point is, and I've never heard him say that, but I agree with him. The way I've always described it to my clients is: how big is the feedback loop? If you've got Photoshop in the browser, you don't want to wait a full round trip before you notice the pixels turning green. If you've got Google Sheets, you don't want a full round trip of latency before cell A and cell B get summed together. If your feedback loop, where you expect feedback, is instant, then that is what I would consider more app-like, where read and write are almost one-to-one. Now look at an e-commerce site or a news website. You don't have feedback loops. You spend most of your time consuming, especially on a news website. You scroll, you read, you scroll, you read. That didn't ought to be a single-page app. You're not flicking through an article every half a second, and it's not until you're doing something really frequently, where you need that feedback loop to be super tight, that you need to build things in the browser. So I think for the most common use cases, e-comm, publishing, certainly publishing, a single-page app is not the right approach. Now, your checkout flow, that might be a single-page app, because, as you say, it's behind a login; it's not indexed anyway. And by the time someone's got into the checkout process, they are statistically less sensitive. Studies have shown that once people have committed to buying something, they are more likely to stick with it. So they're less sensitive to site speed there. Your checkout flow, that could be an SPA. That's absolutely fine, because by and large, every page is the same, just different form details: your address, your billing details, discount code, et cetera. That's where your feedback loop is: okay, I've got my credit card in my hand, and I'm going to do that. So I think Alex says interactions; I think I'm saying the exact same thing, which is nice, because Alex is very smart. But I always ask clients, where do you want that feedback loop? Trello, you want to drag a card from one column to another and see it land there instantaneously. If you're clicking a related product, that's another page, and that feels like feedback that doesn't need to be anywhere near as instant as dragging a card from one column to another. 

DAN_SHAPPIR: So going back to the topic of the issues that you encounter as a consultant, coming in, I assume mostly, to assist with projects where performance is an issue: what are common problems that you encounter and what are 

HARRY_ROBERTS: Um, really common problems are things like the fetishization of things like microservices, micro frontends, composable commerce, and single-page apps. They're really common. So nearly every site I look at: you're a single-page app, you're struggling with performance, maybe not through your own fault, but because you weren't aware that Core Web Vitals doesn't play very nicely with single-page apps, at present at least. So I've got loads of clients at the moment whose sites are pretty fast from a cold start, but the fact that they're an SPA means a lot of that data gets lost, so they appear a lot slower than they are. One thing I keep coming across, in two projects in a row now, is composable commerce. And it's like, right, we've got to make API calls to X different third parties. That's all latency that just goes into your time to first byte. Parallelize as much of it as you can, but you're still, for the most part, only as fast as the slowest response. Then the app rehydrates and you move into the client-rendered version, the rehydrated version, and all those API calls now happen on the client side. And I've seen this in two projects in a row: everyone loves to have api.company.com on a different domain, which means immediately a cross-origin request, which means immediately you've got preflight requests. Two projects in a row, and I'd never really noticed it before: full round trips of latency on every request to the API endpoint, because they weren't caching their preflight requests. These are the things I see that, well, they don't amuse me, but I feel like, you designed this problem for yourselves. If you just had company.com slash API, you'd get rid of all those problems. But because everyone loves to have api.company.com, you've now enforced preflight requests. 
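
A sketch of the preflight caching Harry says those clients were missing, assuming an Express-style Node server (the route and origin are hypothetical; the headers are standard CORS):

```js
const express = require('express');
const app = express();

// Answer preflights explicitly and tell the browser to reuse the result.
app.options('/api/*', (req, res) => {
  res.set({
    'Access-Control-Allow-Origin': 'https://www.example.com',
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
    'Access-Control-Allow-Headers': 'Content-Type, Authorization',
    // Without this, every cross-origin API call can pay a full extra
    // round trip. Chromium caps the cache at 2 hours (7200 seconds).
    'Access-Control-Max-Age': '7200',
  });
  res.sendStatus(204);
});

app.listen(3000);
```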

AJ_O’NEAL: Hold on, I wanna push back on that. So can you think of a reason why people are doing that? Because I know of some reasons why it's beneficial, but like, are you just dismissing it wholesale, or do you think that people don't actually have the problems? Do you understand what the impetus for api-dot was, other than cuteness? Because I don't think it's just cuteness, although I'm sure that plays a part. 

HARRY_ROBERTS: I don't think it's just cuteness, and I don't have a list of answers as to why, but architectural simplification: a lot of companies just hang things off a domain so they can share them internally a lot easier. They can deploy it as its own sort of architecture, its own kind of, I don't know, its own standalone product. But yeah, I just feel like a lot of the problems it brings, like the performance problems, would go away if we could just stick this on slash API slash. 

DAN_SHAPPIR: And using resource hints, like a preconnect or something like that, doesn't mostly resolve that problem? I mean, you're going to be waiting for the JavaScript to download in most cases anyway before making those API calls. So you could at least use that JavaScript download time to preconnect to that API endpoint. 

HARRY_ROBERTS: So preconnect wouldn't work, because preconnect will just open a connection. You can open a CORS-enabled connection, but you don't explain why you need it. So what a preflight does is it says: oh, you've got a request header; we don't like that, you can't hit us with that request header. 

DAN_SHAPPIR: Do you actually get CORS on subdomains? I forget. 

AJ_O’NEAL: You do. 

HARRY_ROBERTS: It's cross-origin, and a subdomain is a new origin. So I've got a client at the moment, and if they just moved from api.company.com to company.com slash API, they would remove all the CORS issues, because a different subdomain, or a different port, would put you in that cross-origin category. 

DAN_SHAPPIR: Oh, so it's also a different port. 

AJ_O’NEAL: No, it's either, either. 

DAN_SHAPPIR: Ah, it's either. Okay. Yeah. 

AJ_O’NEAL: You run into issues with localhost on that all the time, which is one of the reasons I tell people not to develop on localhost, like, use a domain or whatever. But, um, for that, I think that in the modern era, the reasons that we did api-dot-whatever don't hold as much water as they used to. One of the reasons had to do with cookie policies. And all the devices out there today, other than your Xbox and your Wii, have a modern browser, and you're not as worried about, like, clickjacking and cross-origin forgery of a bank website on those legacy systems that people still use all the time but are never going to get updated. But on all of the browsers that you have on phones and on desktops, where you're likely to do work, the cookie settings are now such that you can set a few flags, like HTTPS-only, server-side only, strict origin policy, you know, a few things like that. And you can get all the benefits that you would have gotten by having your APIs go through a separate domain, where you didn't want to risk that your authentication middleware would be allowing cross-site requests via the API and accidentally authenticating them with a cookie when they shouldn't have been. 
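
The flags AJ is listing map onto standard Set-Cookie attributes. A minimal Express-style sketch (the route, cookie name, and token are hypothetical):

```js
const express = require('express');
const app = express();

app.post('/login', (req, res) => {
  const token = 'opaque-session-id'; // hypothetical session token
  res.cookie('session', token, {
    secure: true,       // HTTPS only
    httpOnly: true,     // "server-side only": invisible to document.cookie
    sameSite: 'strict', // not sent on cross-site requests, blunting CSRF
  });
  res.sendStatus(204);
});

app.listen(3000);
```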

Hey, have you heard about our book club? I keep hearing from people who have heard about the book club. I'd love to see you there this month. We are talking about Docker Deep Dive; that's the book we're reading. It's by Nigel Poulton. And we're going to be talking about all things Docker: just how it works, how to set it up on your machine, how to go about taking advantage of all the things it offers, and using it as a dev tool to make sure that what you're running in production mirrors what you're running on your machine. It's just one of those tools a lot of people use, and I'm really looking forward to diving into it. That's February and March. April and May, we're going to be doing The Pragmatic Programmer, and we're going to be talking about all of the kinds of things that you should be doing in order to further your career, according to that book. Now, of course, I have my own methodology for these kinds of things, but we're going to be able to dovetail a lot of them, because a lot of the ideas mesh really nicely. So if you're looking for ways to advance your career, you're looking to learn some skills that are going to get you ahead in your career, then definitely come join the book club. You can sign up at topendevs.com slash book club. 

HARRY_ROBERTS: So AJ, you're absolutely right, and I hadn't really thought about it like this. You've made me realize the whole point of using api-dot is you're opting into CORS-enabled requests. You're opting into CORS. You want CORS, to stop things leaking. But now we've got other ways around it, so I guess we don't need it. So I guess the whole point people did use api-dot was like, no, no, CORS is a benefit for us. It's a feature. We want to strip credentials; we want to make sure we don't leak anything. So that's why, if we put it on api-dot, we get all that for free. Yeah, it's a performance hit, but... So you're absolutely right. I guess the simplest answer is CORS is a feature, not a bug. 

AJ_O’NEAL: Yeah, so, but I agree. I believe that you can get all of the same security features today and be reasonably assured that even if somebody hasn't updated their browser for the last year or two, which is most people, they still have that in their browser. Like, it's been part of browsers long enough that I'm pretty sure it's safe to rely on. But, I mean, my advice would be, if you're going to tell somebody to switch over from api-dot to slash API slash, that's great, as long as the security concerns are addressed. 

HARRY_ROBERTS: Oh yeah, absolutely. I would just say go find and replace. Yeah. 

DAN_SHAPPIR: You just said something that really amused me, AJ, about not updating your browser. Because whenever I go on my wife's computer for some reason and I open up the browser, I see that red button saying, you must update now. I'm like, ah, how did you not update? 

AJ_O’NEAL: My, uh, my wife probably updates her browser about as frequently as she updates her operating system, which is when I sit down at it, can tell that there's nothing she's in the middle of that can't be restarted, and restart the computer. So about twice a year, when she asks me to do something on her computer, her browser gets updated. And that's because she has a techie husband. 

HARRY_ROBERTS: Yeah, I really don't think... oh, rather, I think it's really easy for people like us, who are very tech-fluent and who are surrounded by it every day. It's second nature to us. But for people who aren't in tech: why do we need to update it? It works. 

DAN_SHAPPIR: So, so basically request, 

HARRY_ROBERTS: I won't update to iOS 17. I don't care. 

DAN_SHAPPIR: So basically you're saying that the fact that Windows 95, I think, couldn't run for more than 49 days straight without a restart was a feature rather than a bug. 

AJ_O’NEAL: I mean, you know, we give Microsoft a lot of crap for the, you know, "your computer is restarting in 60 seconds," but they are servicing people who are non-professionals, you know, the lay people, the average person. The average Mac user is in a different tier of work or a different tier of economics than the average Windows user. 

DAN_SHAPPIR: Although you'd be surprised. Like, I don't want to, you know, say anything bad about designers, but I have seen some that were kind of unaware about tech. But tell me, Harry, don't you, like, you know, want to come into a project feeling all good about telling them how they should optimize this or optimize that, and then run into, like, a 12-megabyte GIF that they're downloading on their homepage or something like that? 

HARRY_ROBERTS: Yeah, luckily that's less frequent. And that's almost always a CMS issue. Like, oh, a CMS user uploaded that. So yeah, I like it when I find really impactful, interesting, or exciting things to tell people, but part of my standard process is, let's look for anything silly. I take waterfall charts and I organize them by largest to smallest resource. And I once found a 1.2 megabyte favicon. And that's because the designer who was asked to export the favicon took the entire sprite sheet. It was back when people used Sketch, and it was the entire icon set for the entire website. And they couldn't be bothered exporting just the favicon, so they set the bounding box to cover the favicon they needed. And the .ico ended up being this enormous bank of data, every icon for the site, with a bounding rectangle around the one they needed. They exported that, but it contained all the data for all the others. Is that right? You've got a 1.2 megabyte favicon? Jesus, it was unwieldy. So, well, there's your problem. Just go and sort that out. That's really low-hanging fruit. 

DAN_SHAPPIR: Yeah. By the way, one thing that I've also encountered in many cases, like, you know, websites especially created by designers, is that they construct the images. So like you have something which is an image, which should have been, let's say... well, I'll call it a JPEG, but you know, a lossy compressed image. But because they use some sort of software for actually creating it, and they work in layers, they actually create multiple layers. And then they need the transparency, because they're layering them one on top of the other, so they're all PNGs. So what could have been one JPEG ends up being comprised of like five PNGs. 

HARRY_ROBERTS: Yeah, I mean, back when I used to do a lot of very front-end work, kind of slicing other people's Photoshop files, my biggest gripe was when people made flat-looking UI. When I say flat UI, I don't mean the design style; I mean something that looks like it could be a flattened single image, just a raster image exported as a JPEG, not even with any transparency. But they'd made the image by layering things up in Photoshop with loads of different blend modes. I'd have to isolate it, flatten it myself, make it transparent, put a background on it, and it was all just nightmarish. It's still like that: designers, and I say designers... front-end developers, people, our colleagues might use blend modes in the browser to achieve the effect. But then that just has runtime overhead. That can get really sluggish on very heavy pages, when you've got images and elements interacting with each other and the kind of intersection of their blend modes. So yeah, that's not the stuff I tend to run into very often, because I don't get to work on very... I love my clients so much, I love you all very much... I don't get to work on very creative projects. Normally, it's just a case of, here's our asset library of every product we sell, and that's it. I guess if you look at Wix, you'll see things that are much more creative. 

DAN_SHAPPIR: Well, one of the more creative pages that I saw at Wix was actually an internal website. So Wix does a lot of dogfooding: all the Wix stuff is actually built on Wix, or as much as possible, and certainly all the marketing pages that Wix uses are built on Wix. And I was asked to look at one of those marketing pages back in the day, and we talked about waterfalls. In this case, it was literally an image of a waterfall. So they created a beautiful page that had this waterfall running down the entire page. And it was built as one image that was 10 screens long. So I basically told them, let's split this image up and lazy load the rest, something along those lines. 

HARRY_ROBERTS: Do you want to hear a really creative thing that I saw, somebody gaming LCP? So if you look at like, if you go watch me. 

DAN_SHAPPIR: That wasn't intended to game, that really destroyed LCP because of the huge image that- 

HARRY_ROBERTS: The sheer size. Oh, I know, you just reminded me of a funny story about... you broke an image into smaller pieces. So I had a client, and if everyone who's watching just looks at, like, my little rectangle, I'm in here: that's your LCP. Let's say that's your LCP image on your homepage. This client was struggling to get it fast enough, because the image was quite detailed. They were struggling to optimize it enough that it looked nice but was still fast, even with preloading, whatever. So all they did is they sliced it into four images and put them right next to each other, and the images loaded in a random order. But it was about four times faster, because each file was about a quarter of the size. So whichever one of those four images arrived first was the LCP, and the subsequent three were ignored. It was a horrible loading experience, as you'd see this image go in, and it would be random. So sometimes the second image would be first, or the third one would be first. 

DAN_SHAPPIR: Yeah, it is worth noting, by the way, that in the discussions of the W3C Web Performance Working Group, one of the items that keeps coming up is how to deal with images that get progressively rendered. When should the LCP be measured for progressively rendered images? And it's one of those cases where it's kind of hard to say. 

HARRY_ROBERTS: That's a pretty big topic. AJ was going to say something before we move on. 

AJ_O’NEAL: Yeah, do you know why that happens that splitting it up into four made it load faster? 

HARRY_ROBERTS: Um, because a browser can prioritize one of them above the others. It can give a higher priority to one of them. 

AJ_O’NEAL: The compression algorithm is based on, um, pixel width. So depending on how you vary the pixel width, when it analyzes the image, it compresses differently. So sometimes, just randomly resizing an image, you'll get better compression, because you happen to hit one of the default boundaries that it'll try. There is a tool that you can use. Have you heard of ImageOptim? 

HARRY_ROBERTS: Yeah. Yeah. Yeah. 

AJ_O’NEAL: So ImageOptim will basically brute force every possible, um, dictionary size in the compression algorithm and find which dictionary size happens to be the best one. And on an M2 Max Mac, it will take a couple of minutes for a relatively small image file. So it really does brute force every combination that it can, but it will reduce the file sizes much, much smaller than probably any other program. 

HARRY_ROBERTS: I did not know that. I thought you were talking from a browser's perspective. 

DAN_SHAPPIR: By the way, in that context, I'd highly recommend watching that talk from the performance.now() conference. Who was it that gave it? I need to find it. About how HTTP/2 downloads multiple resources in parallel, the talk about prioritization. You know which talk I'm talking about? 

HARRY_ROBERTS: Was it from 2023? 

DAN_SHAPPIR: Yes. 

HARRY_ROBERTS: It was Robin Marx. He was talking about resource loading at the cutting edge. It was incredible. It was the exact kind of content I live for. It was really, really good. Robin Marx, resource loading at the cutting edge. 

DAN_SHAPPIR: Yeah, it's an incredible talk, because he talks exactly about that kind of a scenario where you're using HTTP/2 to download these four images effectively in parallel. And depending on the browser and on the CDN and on the server, it might download faster or it might download slower. And it really kind of depends. 

HARRY_ROBERTS: Yeah, because the risk is, if you download all four images in parallel, you just slow all four of them down. But if your browser and server are talking the same kind of prioritization language, it will do image one, two, three, and then four, in order. And that's how you can game LCP: because you can be fairly certain that instead of all four images arriving at the same time, you get one, then two, then three, then four. 

DAN_SHAPPIR: And we must say, please don't game the metrics, because you're doing more harm than good. At the end of the day, Google might care a little bit about the metrics, but the ones who really care a lot are your visitors, and they don't care about the score, they care about their experience. And if their experience is shitty, then they will leave, and then you'll get a high bounce rate. 

HARRY_ROBERTS: I've got a client right now... I had this discussion today, and it made me realize something really interesting about LCP as a metric. I've been saying this for quite a long time, and I know my friend Andy Davies has been saying this for a long, long time: largest isn't always the most important bit of content. If you go to the Lufthansa website or the British Airways website, the biggest bit of content is a picture of an airplane. The most important bit of content is the form inside of it. So in certain scenarios like that, I tell clients you need custom metrics as well; use the Element Timing API. We don't care how soon a user could see that picture of an airplane. We care how quickly they could see the form for booking a flight. I've got a client at the moment. Their LCP site-wide is, I think, 2.6 seconds. As far as Core Web Vitals is concerned, we only need to find 100 milliseconds. But they read a study somewhere that said somebody else improved LCP by one second and made 10% more money, so they want to get their LCP down to 1.6 seconds. It's going to be really difficult for this website, because they hit a lot of third-party domains to build the page, and a lot of the pages are still client-rendered. So it's going to be really difficult for them to hit 1.6 seconds. So what I said to them, kind of as a joke, was: I know what we could do. It's an e-commerce website. Why don't we redesign all the product pages so the heading is the biggest bit of content and the image is really small? Then you'll have a really good LCP, because text is fast and images are slow. And they're like, no, we can't do that, because we need people to see the image. I was like, so do you care about the metric, or do you care about the customer experience? And then I said to them, you don't care about the largest bit of content; your image just happens to be the largest. What we need to measure is how soon someone can see the image. If you just want a 1.6 LCP, just to say you've got a 1.6 LCP, we'll just make sure your LCP is an H1 and not an image. No, we can't do that, we can't do that. 
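
The Element Timing API Harry mentions works by annotating the markup you actually care about. A minimal sketch (the elementtiming identifiers are made up; the attribute applies to images and text-bearing elements):

```html
<img src="/img/plane.jpg" elementtiming="hero-image" alt="">
<h1 elementtiming="booking-heading">Book a flight</h1>

<script>
  // Log when each annotated element was first rendered.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.identifier, 'at', entry.renderTime || entry.loadTime, 'ms');
    }
  }).observe({ type: 'element', buffered: true });
</script>
```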

DAN_SHAPPIR: So funnily enough, I was working on a page where the hero image was only really slightly larger than the title text. So I said, you know, if we make the text just a little bit bigger or the image just a little bit smaller. Yeah. 

HARRY_ROBERTS: Well, it's interesting, because I've read the LCP spec inside out; I wrote an article about how to... actually, this brings us nicely back to progressive images. The LCP algorithm also uses intersection observer to see how much of the element is in view. So I've got certain clients where, if the mobile phone screen is short enough, the hero image is majority off-screen, so the LCP is the H1. But if someone's on a higher-resolution or slightly taller screen, if someone's on an iPhone Max... well, not on an iPhone, obviously, because we don't capture iOS data... if someone's on a big phone, the image is far enough into the viewport that it becomes the LCP. So the exact same page can have different LCP elements on different mobile devices, which is really difficult. So when you look at PageSpeed Insights or CrUX and it just says desktop and mobile, it's really important to remember that inside of desktop and mobile there is also a lot of variety. What I find is, I don't like Lighthouse as a tool for me professionally. It's too simple, it's too basic, but clients love it. WebPageTest's mobile device screen sizes are smaller than Lighthouse's; yeah, it's a Moto G1. So I'll have clients who run a Lighthouse test, and it'll come up with a different LCP to the WebPageTest that I'm running. So there's loads of disparity there. But yeah, I read the spec inside out, and one of the first things it does is calculate how much is even on screen. Because if the image is bigger than the H1, but the image is majority off-screen, like you say, then we just don't count what we can't see. And that brings us on to the progressive images thing, which has been a frustration for years. Even though the customer experience is better, there is no benefit in performance scoring to using a progressive JPEG, which I just think is unfair. Or any progressive format. 
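
You can watch the browser nominate LCP candidates on a given viewport, which makes the device-dependent behavior Harry describes easy to see for yourself. A minimal sketch using the standard largest-contentful-paint entry type:

```js
// Each entry is a successively larger LCP candidate; the last one that
// arrives (before the user interacts) is the page's reported LCP element.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', entry.element, entry.size, entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```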

DAN_SHAPPIR: Yeah, I agree. Um, 

AJ_O’NEAL: Why, why would you want to use a progressive JPEG? 

DAN_SHAPPIR: Because you get content, you get meaningful, potentially meaningful content on the screen faster. 

AJ_O’NEAL: Do you really? Cause I remember progressive JPEGs and they just kind of suck. 

HARRY_ROBERTS: So you'd probably want to build them yourself. Progressive JPEGs contain scans, and you can pass a scan file into your batch process and do your images in bulk, whatever. But what can end up happening is the progressive JPEG embeds like eight versions of itself, and the first ones will look really disgusting, and that's not a good customer experience. So you want to build a scan file that says, just build me three scans, like medium, high, and very high quality, and do it that way. So you avoid the really, really low-quality end of progressive JPEGs and just opt into medium, high, and super high res. That's what I tend to do. 
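
In a Node image pipeline this might look something like the following sketch, using the sharp library (file names are hypothetical). sharp doesn't expose custom scan scripts, so for the hand-tuned three-scan approach Harry describes you'd drop down to something like mozjpeg's cjpeg with a scans file:

```js
const sharp = require('sharp');

// Emit a progressive JPEG so early scans paint something usable while
// the rest of the bytes arrive.
sharp('hero.png')
  .jpeg({ progressive: true, quality: 75, mozjpeg: true })
  .toFile('hero.jpg')
  .then((info) => console.log('wrote', info.size, 'bytes'));
```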

AJ_O’NEAL: I think I could buy into that if there was a way for high-DPI displays to know to pick the... for it to just not bother loading the super-high-res one unless it's high DPI. 

DAN_SHAPPIR: Well, you can, with the picture tag, or even the image tag these days. You can specify images to be associated with the device's DPI, and the browser is smart enough. 

AJ_O’NEAL: So then why wouldn't I just use the picture tag and use multiple JPEGs rather than a progressive JPEG? 

HARRY_ROBERTS: Because each one of those gets complicated. But yeah, each one of those multiple JPEGs would have multiple versions of itself inside it. So for every use case, for every one of them, you could potentially have a slightly faster LCP. Progressive images, I think, serve a real benefit. 
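
Dan's suggestion in markup form: density descriptors let the browser fetch only the variant that matches the display, and each variant could itself be encoded progressively, which is Harry's point. File names are hypothetical:

```html
<img
  src="/img/product-400.jpg"
  srcset="/img/product-400.jpg 1x,
          /img/product-800.jpg 2x,
          /img/product-1200.jpg 3x"
  alt="Product photo">
```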

DAN_SHAPPIR: I'll give another scenario. Another thing that I encountered at Wix a lot was people building their website with, let's say, a white page background, a dark background image, and white text. And then what would happen is that until the dark image actually loaded, the text would be white on white. So the text would be there, you just couldn't see it. So at a certain point in time we actually added image placeholders, kind of like what you get in YouTube, or I think Medium kind of introduced that, just so that you could get a feeling for how the page should look and see that text sooner than otherwise. But I do have a question, because we're running long, and there are a few questions that I did want to get to. I assume that with at least some of your customers, you're running into scenarios where, let's say, you've got poor INP because they have a lot of, let's say, third-party scripts or pixels. And then you say, well, you know, you want to improve INP, get rid of your pixels, or get rid of half your pixels, and they say, we can't, we need them for our marketing campaigns. Or alternatively, their score is poor because they're using some sort of heavy framework, client-side rendered, whatnot. Like, what are you going to tell them? Rebuild, re-architect your entire solution? What do you do in those scenarios? 

HARRY_ROBERTS: It's difficult. It's really difficult to advise, because the answers are very complex. One thing I find really interesting is, if you read anything on web.dev, the advice is always the same: break up your long tasks. It's like, step by step, how does one break up a long task? Because if I've got a function... JavaScript runs to completion, so a function can't be interrupted or paused halfway through; it has to finish. If you've got a click handler that has to do X amount of work, telling someone to break that up is really difficult, because, well, when do I do it? So what I try to tell clients is, it's just going to be really difficult. It's going to be hard to do. But the biggest... I was with a client last week, and their biggest culprit was that they did loads of synchronous Google Tag Manager stuff on every click. They track everything. Someone clicks the menu open, it gets tracked. They close it again, it gets tracked. They click in the search field, it gets tracked. They don't type anything, that gets tracked. And all of that measures, like, okay, 10% of people who clicked in the search box never typed anything. So why didn't they type? So it's valuable business data, but it all happened synchronously. So for them, one of the biggest things, and like you say, it's a third-party tag manager, is just: fulfill the user need immediately, and sling all of that tracking work into, like, a setTimeout or requestIdleCallback, or, you know, soon it'll be scheduler.yield. So look for the obvious things first, like, okay, tracking: move that into the next task, as in the sketch below. When you still end up with a task that's really large and you can't break it down any further, it's really hard to work out what to do. If you've got a function that needs to do quite a lot of heavy lifting, that's when you start to get into trouble. And I will admit, I'll be the first person to admit on a JavaScript podcast, I'm not a very proficient, hardcore JavaScript developer. I never have been. When it comes to breaking up long tasks, all I can really tell clients is, I can show you the tooling, we can look at what it's doing, but you need to decide which of this work needs to happen immediately, which of this needs to happen for the user, and what you can safely do elsewhere. So, Partytown, if you've got really egregious tags. 
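
A minimal sketch of the split Harry describes: respond to the user first, then yield so the tracking work lands in a separate task. openMenu and track are hypothetical stand-ins; scheduler.yield() is used where supported, with a setTimeout fallback:

```js
async function onMenuClick(event) {
  openMenu(event); // the part the user is actually waiting for

  // Yield to the browser so it can paint before the analytics run.
  if (globalThis.scheduler?.yield) {
    await scheduler.yield();
  } else {
    await new Promise((resolve) => setTimeout(resolve, 0));
  }

  // Synchronous tag-manager-style work, now off the interaction's
  // critical path, so it no longer inflates INP.
  track('menu_opened');
}
```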

DAN_SHAPPIR: That's actually what we ended up doing at Next Insurance, where I currently work. So we had not terrible, but kind of mediocre INP, but we were really anxious about having good Core Web Vitals across the board even when INP lands. So we introduced Partytown, and the improvement was dramatic. So now we really have excellent INP. But aside from maybe third-party scripts, I'm really surprised when people have really long tasks. I mean, unless you're building a really sophisticated web application that does something super heavy, in which case that kind of brings us back to the web applications where these issues are kind of different, I'm really surprised at really long tasks. Like, what are you doing? Computing pi to the millionth digit? What's all this computation about? 

HARRY_ROBERTS: Do you know what? Anyone listening, I am for hire, and INP kicks in on the 12th of March. So get on it. But I've got a full section of my workshop that starts out with: we always blame JavaScript. And JavaScript is yellow in DevTools, so it's a big yellow slide, we always blame JavaScript. And the next slide is purple, for Layout and Recalculate Style, because it's usually going to be CSS. With INP, what you'll find happening all the time is that there'll be some synchronous layout thrashing, so the JavaScript overhead might be quite minimal. I had a client with a list of 250 product images, and they wanted to line the image widths up with an element outside of that list. Instead of getting the width of the outside element once and then applying that width to 250 images, they got a node list of the images and then, for each one: how big's the element? Make this image that big. How big's the element? Make that image that big. And they did it all synchronously. It took five seconds.
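A hedged reconstruction of what that code plausibly looked like (the selectors are invented for illustration): a layout read interleaved with a layout write on every single iteration.

```js
// Anti-pattern: every iteration reads layout, then immediately invalidates it.
const reference = document.querySelector('.reference-element'); // illustrative
const images = document.querySelectorAll('.product-image');     // ~250 nodes

images.forEach((img) => {
  const width = reference.offsetWidth; // READ: forces a synchronous layout
  img.style.width = `${width}px`;      // WRITE: dirties layout for the next read
});
```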

DAN_SHAPPIR: Layout thrashing. 

HARRY_ROBERTS: Severe layout thrashing. So oftentimes, what I find, and what I try to tell clients and the developers I'm training, is that everyone's quick to blame JavaScript, but seriously, go and look for your purple. And the purple is going to be either Recalculate Style or Layout. If you've got long Recalculate Style and short Layout, that's because you've got complex CSS selectors, or you've got a lot of CSS, or you've got a lot of HTML. If you've got a long Layout task, that's because the page is structurally quite complex to lay out. You might have a lot of overlapping things; you might have a lot of grid or flexbox that relies on each other. Pages that look like tables of data but aren't, that are built using things like flexbox. Generally you want to simplify the layout of the page there, and make sure you don't add things to the DOM or do anything that nudges the DOM wider. So a lot of the time, long tasks are going to be CSS-y: Recalculate Style and Layout. If Recalculate Style is your cost, go and look at the cost of your CSS selectors, the amount of CSS, the size of the CSS object model, and the DOM. If it's Layout, try and simplify the page.

DAN_SHAPPIR: Yeah, two points I'd like to mention. In the context of layout thrashing, one of the things to take into account is forced reflows. You kind of mentioned it; a quick explanation: when you modify the DOM inside JavaScript, in most cases the browser will try not to re-layout the page until it absolutely has to. So basically, it will try to run all the JavaScript, and only then, when the JavaScript is done, actually apply the layout changes that result from the DOM changes you've made. But if you ask the browser about, let's say, the size or position of an element, you're forcing its hand. You're telling it that it must apply all the changes you've made in order to be able to answer that question. That's called a forced reflow. And layout thrashing is when you set something, ask something, set something, ask something, forcing the browser to reflow on every iteration of the loop, as it were. So in that case, it's beneficial to break it into even two loops: do all the gets and then do all the sets, or the other way around, but don't interleave the gets and the sets. And I had a similar situation to the one you described, but we don't have time to go into the details. The other thing I'd like to mention is the CSS content-visibility property. If you have a super complex page, you can at least say: all these areas that are below the fold, don't try to lay them out now, because nobody sees them. And if you know that they can't impact what the user is actually seeing, that can really speed things up.
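Applied to the earlier sketch, Dan's gets-and-sets fix is simply to hoist the read out of the loop, or batch all reads before all writes, so the browser has to lay out at most once.

```js
// Batched version of the same sketch: a single read up front, then writes only.
// No forced reflows inside the loop, because reads and writes never interleave.
const width = reference.offsetWidth;  // READ once
images.forEach((img) => {
  img.style.width = `${width}px`;     // WRITEs only
});
```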

HARRY_ROBERTS: Yeah, I've got a really nasty CSS selector for exactly that. I use it on my site. I profiled it, and it turns out even the nasty CSS selector is faster than the cost of laying out long blog posts. I've got certain articles on my site that are tens of thousands of words, really, really long. Most users aren't going to scroll all the way to the bottom, so most people aren't going to see everything. So I've got a selector, which is main content, h2 tilde h2, so any h2 that comes after another h2, or you could do h2:nth-of-type(2) and everything after it, with content-visibility: auto. So you've got an h1, an h2 here, maybe an h2 here, and that's going to be pretty far down the article already. By the time you're at the second h2, you're quite far down. So what I do is put content-visibility: auto on everything after that, and the majority of the article never gets rendered until someone starts scrolling towards it. And that saves more than half the layout cost for me on my particularly long blog pages.

DAN_SHAPPIR: The only downside is that, unless you're careful, you can get weird effects with the scrollbar. The scrollbar, like, grows and shrinks.

HARRY_ROBERTS: If you've got visible scrollbars, yeah. The best way to check that it's working is the Layers panel, because the Layers panel shows what the browser actually rendered. So try this: go to the Layers panel and see how big the document layer is, say maybe 15,000 pixels tall. Then go back to your browser, scroll all the way to the bottom of the page, and go back to the Layers panel. If that figure has changed from 15,000 to anything else, it means the trick worked. But you're absolutely right: if that number is very different, say it rendered 15,000 but ended up being 45,000, that scrollbar is going to get a lot shorter.

DAN_SHAPPIR: So what's the trick? Two things. First of all, there's also the, what's it called, the intrinsic height property or something, which you can use to indicate the expected size.

HARRY_ROBERTS: contain-intrinsic-size.

DAN_SHAPPIR: Yeah. And the other thing. 

HARRY_ROBERTS: Absolute dimensions. 

DAN_SHAPPIR: Yeah. And the other thing you can do is that when the user scrolls, you can remove the auto, or change it to visible, something along those lines. There'll be like a jump when they start scrolling, but you're deferring some of the layout work.

HARRY_ROBERTS: It's all built on Intersection Observer anyway, so you could hook into that and say, as we get closer, turn it from auto to visible. But that happens automatically. And you're absolutely right: if you're going to do this, you need contain-intrinsic-size to say every paragraph is roughly 350 pixels tall. So on every p element, you put contain-intrinsic-size: 1px 350px, or whatever. And that should minimize the jumping you're describing.
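Putting the two ingredients together, a hedged sketch of the approach Harry describes (the wrapper class and the per-block height estimate are illustrative, not his exact stylesheet):

```css
/* Skip rendering everything from the second <h2> down until it nears the
   viewport, and reserve an estimated height so the scrollbar jumps less. */
.main-content h2 ~ h2,
.main-content h2 ~ h2 ~ * {
  content-visibility: auto;
  contain-intrinsic-size: 1px 350px; /* rough height guess per block */
}
```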

DAN_SHAPPIR: Yeah. I think we're nearing the end of our show, and we need to move into Picks. So is there anything else you really would like to mention? 

HARRY_ROBERTS: Honestly, for me, not much at all. Just that INP is the next thing to worry about for most sites that are benchmarking Core Web Vitals. So if you haven't been taking it seriously already: the 12th of March, in case you haven't seen the announcement yet. If you don't know where to look for your INP scores, ask your marketing team or your SEO team or whoever's in charge to give you access to Search Console, go look for Core Web Vitals in there, and that'll tell you where to start looking. And if you need any help with it, you can always get in touch with me. You can tweet at me; I've got some resources around INP. That's the only real pressing thing on my mind at the moment. Do you know what? As long as Google invent a new metric every two years, I've got a job for life.

DAN_SHAPPIR: Well, the next big change that's probably going to land, I think, is when they make soft navigations official. So just like in multi-page applications, in single-page applications they'll start measuring the navigations between pages that happen in client-side rendered apps. But I don't know what the impact will be. Will it make scores worse or better? It will be interesting to see.

HARRY_ROBERTS: Yeah, I think better. I think it'll be easier to optimize them, and the scores will get better as well. That's my prediction.

DAN_SHAPPIR: Time will tell. 

AJ_O’NEAL: All right, well, I've got a hard stop that was 30 seconds ago. So let's go ahead and wrap up. First of all, Harry, if people want to find you, where should they find you at? 

HARRY_ROBERTS: Regrettably, CSSWizardry everywhere. So Mastodon, or X, or my website, or email, csswizardry at Gmail. If anyone needs anything, has any questions, or wants any of the resources I've spoken about, you can find me at CSSWizardry everywhere.

AJ_O’NEAL: All right, thanks. And with that, we'll go ahead and wrap up, get to picks. 

Hey, this is Charles Max Wood. I just wanted to talk really briefly about the Top End Devs membership and let you know what we've got coming up this month. So in February, we have a whole bunch of workshops that we're providing to members. You can go sign up at topendevs.com slash sign-up. If you do, you're going to get access to our book club. We're reading Docker Deep Dive, and we're going to be going into Docker and how to use it and things like that. We also have workshops on the following topics, and I'm just going to dive in and talk about what they are real quick. First, it's how to negotiate a raise. I've talked to a lot of people who aren't necessarily keen on leaving their job, but at the same time, they also want to make more money. And so we're going to talk about the different ways that you can approach talking to your boss or HR or whoever about getting that raise that you want and having it support the lifestyle you want. That one's going to be on February 7th. On February 9th, we're going to have a career freedom mastermind. Basically, you show up, you talk about what's holding you back, what you dream about doing in your career, all of that kind of stuff. And then we're going to actually brainstorm together, you and whoever else is there and I; all of us are going to brainstorm on how you can get ahead. The next week, on the 14th, we're going to talk about how to grow from junior developer to senior developer, the kinds of things you need to be doing, how to do them, that kind of a thing. On the 16th, we're going to do a VS Code tips and tricks session. On the 21st, we're going to talk about how to build a software course. And on the 23rd, we're going to talk about how to go freelance. And then finally, on February 28th, we're going to talk about how to set up a YouTube channel. So those are the meetups that we're going to have, along with the book club, and I hope to see you there. That's going to be at topendevs.com slash sign-up.

AJ_O’NEAL: Picks are where we just talk about something that was interesting, or that we liked, or that's on our minds. That could be tech related or not. I will actually go first this time. So we talked about ImageOptim; I will drop a link to ImageOptim. It is, like I said, the best tool as far as I'm aware for being able to compress images so that they're

HARRY_ROBERTS: staggeringly effective. 

AJ_O’NEAL: Well, it's brute force effective. So yeah. 

HARRY_ROBERTS: Yeah. 

AJ_O’NEAL: Yeah, it does way better, it just takes a long time. You just sit there and you wait. Okay, so there's that. And then, of other things, I've been playing around more with Home Assistant. I now have it to the point where there's a sensor downstairs, and I almost have it so that the thermostat is turning on or off based on the downstairs temperature rather than the temperature in the thermostat itself. I just don't have it quite working yet, because I basically have to set up four different events: if the temperature goes here, turn on; if it goes there, turn off; and so on for both directions. I got one of the events set up, so I know I can set up the other ones. And then IKEA has Zigbee stuff, so I just ordered some of that. I don't know if I'll get it this week or next, but they've got some switches and whatnot. The thermostat is Z-Wave, the temperature sensor is Zigbee. I'm using the Home Assistant-made SkyConnect for the Zigbee devices, and for Z-Wave, I can't remember which one, but it's the stick they recommend on their website. So we'll see where all this goes. It is a pain in the butt to set up, but if you are looking for an alternative smart home without all of that data being constantly monitored and stored by Google and Amazon, there's a path. It doesn't seem like it's a pretty one. It seems like it might be a really good market opportunity. IKEA actually has a smart home app. I haven't tried it out, because I don't really want all my data going to IKEA either; they're not even really a tech company per se. Whereas I at least trust that Google's spying on me privately, you know, IKEA will just get hacked and all the data will be out there.
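For the curious, one of the threshold automations AJ describes looks roughly like this in Home Assistant's YAML; the entity IDs and the 68°F threshold are hypothetical, not his actual config.

```yaml
# Sketch: turn the heat on when the downstairs sensor reads below 68°F.
automation:
  - alias: "Heat on from downstairs sensor"
    trigger:
      - platform: numeric_state
        entity_id: sensor.downstairs_temperature
        below: 68
    action:
      - service: climate.set_hvac_mode
        target:
          entity_id: climate.thermostat
        data:
          hvac_mode: heat
    # A matching automation with `above:` turns it back off; AJ mentions
    # needing four of these to cover both directions.
```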

DAN_SHAPPIR: They're keeping it in some IKEA cupboard. All that data. 

AJ_O’NEAL: Yeah, exactly. So anyway, I don't recall if I've been talking much about the aquarium stuff that I'm doing, but it's just so fun. It's such a good hobby. I've got two aquariums set up, and I'm going to set up a few more and try some different things: different plants, different fish. I've kind of found the fish that I like. One is just referred to as otos, and they are bottom feeders. They must have algae in order to survive, so it's actually good to have a little algae in your tank. If you don't have algae in your tank, you've got to put in some fresh or blanched vegetables and let them eat that. And then the other ones are the neon tetras. They just look so cool. They say you have to buy at least six of them, but I'm going to say probably at least ten, because what's cool about them, other than that they look beautiful, is the schooling behavior: they swim around together. And if you only have six of them, they're not going to do it as much, so you've got to get probably ten of them. And they say you need like a 10-gallon tank for six, or a 20-gallon tank for more, but if there's room in the tank, you can put ten in a 10-gallon tank and it'll be fine. And then I've got a mystery snail and a fancy guppy. Well, in our main tank it's not a fancy guppy, it's a betta, but the betta is a mild-tempered betta, so it's not attacking the other fish. The otos are bottom dwellers, the neon tetras are mid dwellers, and the betta is a top dweller, so even though it's only a 10-gallon tank, they have enough space that they don't have to get in fights with each other. It's just a really cool hobby, and I'd recommend it. It's therapeutic; it's art. You build the tank. You're not just throwing in some gravel from PetSmart or Petco and then dropping some fish in. You've got to select the aquasoil, and you pattern the sand the way you want it. You put in a hill, some stem plants, some other plants. And then you actually have less upkeep than if you were to go the typical painted-gravel route, because the right bacteria get in there, it becomes an ecosystem, and it all cycles. Although there is a little bit of prep work: you have to let the tank sit and cycle before you put the fish in, so that the right bacteria are there to help them live. If they're not there, and you don't know about that, the first fish you put in the tank will die. But there's less upkeep once the ecosystem is going: very little feeding that you need to do, very little cleaning. You don't change the whole tank; you just do like a 10 or 20% water change now and then. It brings joy to my life, so I pick the aquarium hobby. And with that, I'll pass it over to Dan.

DAN_SHAPPIR: So, I like my dog. Dogs beat fish any day of the week in my book. But that's not my pick. My first pick, how can I not pick it, is the Apple Vision Pro. I don't have it.

AJ_O’NEAL: No. 

DAN_SHAPPIR: I'm not planning on getting it anytime soon. But from the videos I've seen, it's simultaneously amazing and ridiculous. So, you know, I've got to say that it's a brave product, but I'm just not seeing the killer app for it yet. And I've seen videos of people walking with it on the streets; that's like an invitation to getting beat up, I think. So that would be my first pick. My second pick is a show we're watching on Netflix called Griselda. It's not a great show, but it's nice; we enjoy it. If you're into the Narcos kind of thing, it's interesting. It turns out she was a drug kingpin in Miami from the 70s onward. And it's a TV show with Sofia Vergara, who's actually pretty good in the lead role. And it's fun. So that would be my second pick. And oh, one more thing I wanted to pick. When we discussed recording this episode, one of the things I initially hoped we'd get a chance to talk about was caching rules. But since we ran out of time before getting to it, I'd like to recommend Harry's talk from the performance.now() conference, the 2023 talk, Cache Rules Everything. I had questions I wanted to ask, but I guess I'll save them for another day. In the interim, I highly recommend that talk. And finally, I'm hoping for peace in the Middle East and Ukraine. Those would be my picks for today. So over to you, Harry, if you've got anything you'd like to shout out about, anything at all.

HARRY_ROBERTS: Yeah, so nothing specific, nothing as relatable as yours. I'm missing cycling. I do a lot of cycling, but the weather's just been terrible, so that's going to be my next thing to focus on outside of tech: getting back on the bike. It's hanging on the wall just there, looking very, very lonely and unused. I was obsessed with caching last year. From summertime onwards, I just got obsessed with caching, because I had a client with some really specific cache problems. That was my tech obsession for fall and winter last year. My new obsession is memory management. I've got a client who's trying to debug memory leaks. They don't care about Core Web Vitals. They don't care about load times.

DAN_SHAPPIR: Memory leaks in the browser? 

HARRY_ROBERTS: Memory leaks in the browser. It's a WebView, and it keeps crashing their Android app, and customers are complaining. So I'm now obsessed with the Memory panel, which is very under-documented, and it's fascinating.

DAN_SHAPPIR: Yeah, let's talk about it, because I'm actually doing similar work. We've got some issues with Node-based services, and ultimately you use the same Memory panel: you take a memory snapshot and then effectively work in the same panel. It's not an easy panel to use. But it's fairly rare that you run into real memory issues with browser applications. We should probably schedule some time to talk after this podcast. I'd really love to hear some of your insights, and maybe I can also make some suggestions. So maybe we should schedule a presentation about it.

HARRY_ROBERTS: I'm new to my memory management journey, but I'm writing a talk called Memory Management for Mere Mortals, where I am the mere mortal. And I just talk about, okay, I had a client who had this problem. I even told the client, I don't think I'm the right guy for this job; I do very web, like loading performance, I don't do memory. And they were like, no, we're convinced you're the right person for this job. So I did all the research, and I was like, okay, we can do it. And I found some really interesting insights. The Memory panel hasn't really been changed for nearly 10 years, maybe longer. But that's what I'm obsessed with at the moment. So I might write a blog post about that, and there's definitely a talk upcoming. Yeah, Dan, if you want to talk about it specifically after this, then let's have a call. We can swap notes.

DAN_SHAPPIR: Probably a good idea. I'll ping you on X. All right.

AJ_O’NEAL: Well, thanks for coming on. It's been great to have you, and, you know, especially to have you and Dan going back and forth, dropping all those knowledge bombs on us. And we hope to see you again.

HARRY_ROBERTS: Yeah, it's been great. I was very privileged to be invited on. I've seen the podcast for years now, and I finally got an invite. So I'm really grateful.

DAN_SHAPPIR: Yeah. 

AJ_O’NEAL: Alright, well, we'll catch you later. Adios.