Core Web Vitals and Whatnot - JSJ 537

Today’s guest is Annie Sullivan, a software engineer on the Chrome platform team focusing on the Core Web Vitals metrics, which are performance and user experience metrics for websites. We discuss topics such as Largest Contentful Paint (LCP) and how it works behind the scenes. We also touch on Cumulative Layout Shift (CLS) and things that impact browser experience.

Special Guest: Annie Sullivan

Transcript


CHARLES MAX_WOOD: Hey everybody and welcome back to another episode of JavaScript Jabber. This week on our panel, we have Steve Edwards. 

STEVE_EDWARDS: Hello from a very sunny Portland for a change. 

CHARLES MAX_WOOD: We also have AJ O'Neill. 

AJ_O’NEILL: Yo, yo, yo. I'm coming at you live from just the garage office. 

STEVE_EDWARDS: The purple room. 

CHARLES MAX_WOOD: Right. 

AJ_O’NEILL: The purple room. Coming at you live from the purple room. 

CHARLES MAX_WOOD: Dan Shappir.

DAN_SHAPPIR: Hey from sunny as usual Tel Aviv. Well, actually now it's evening, so it's less sunny, but yeah, it was really sunny today. 

CHARLES MAX_WOOD: I'm Charles Max Wood from Top End Devs. It's pretty sunny here too. It's been kind of nice. We also have a special guest this week, and that is Annie Sullivan. Annie, do you wanna introduce yourself and let us know what kind of guru you are with all this stuff? 

ANNIE_SULLIVAN: Hi everybody, I'm Annie Sullivan. I am a software engineer on the Chrome web platform team and I lead the team that develops the core Web Vitals metrics. Those are performance and user experience metrics for websites. 

CHARLES MAX_WOOD: Awesome. 

 

Hey folks, this is Charles Maxwood from Top End Devs. And lately I've been working on actually building out Top End Devs. If you're interested, you can go to topendevs.com slash podcast, and you can actually hear a little bit more about my story, about why I'm doing what I'm doing with Top End Devs, why I changed it from DevChat.TV to Top End Devs. But what I really want to get into is that I have decided that I'm going to build the platform that I always wished I had with devchat.tv. And I renamed it to Top End Devs because I want to give you the resources that are gonna help you to build the career that you want. So whether you wanna be an influencer in tech, whether you want to go and just max out your salary and then go live a lifestyle with your family, your friends, or just traveling the world or whatever, I wanna give you the resources that are gonna help you do that. We're gonna have career and leadership resources in there, and we're gonna be giving you content on a regular basis to help you level up and max out your career. So go check it out at topendevs.com. If you sign up before my birthday, that's December 14th. If you sign up before my birthday, you can get 50% off the lifetime of your subscription. Once again, that's topendevs.com. 

 

CHARLES MAX_WOOD: And I know Dan invited you, so maybe you two can tell us how you met and how that all worked out so we can get you on the show. 

DAN_SHAPPIR: Well, Annie's pretty much set the parameters for my career over the past two to three years, ever since Google came out with Core Web Vitals, which I think is celebrating its second birthday more or less right now. It really has had a dramatic impact on how we all measure performance, what we optimize for and whatnot. And so I thought that it would be great, celebrating this second anniversary as it were, to speak with Annie and learn about how all this thing came to be.

CHARLES MAX_WOOD: Ooh, Storytime. Annie, I didn't know this was Storytime. 

ANNIE_SULLIVAN: All right, so I mean, the history of web performance is super long, and I've been working in this space for a little bit over a decade. Mostly internally at Google: I worked on Google Docs, making it faster, and then on the web search front end. And then I came and worked on Google Chrome. And when I started on Chrome, it was a little bit different; it's a client application, right? So it gets installed on different computers, and it was just a really different way of working than working on a website, where you make a change and it goes live and you see everything instantly. At first I worked on performance testing and tooling, and then I started collaborating with Tim Dresser, who was leading an effort at the time to kind of standardize how we even decide what's fast. If you look at the JavaScript benchmarks that all used to come out, there were so many different metrics and so many different ways of measuring. And at one point with the Octane benchmark, I'm not sure if people remember that, there was a big problem when we started to do tests on, say, 100 real-world websites. If you look at the way that V8 works, it's a bunch of different subsystems, you know, there's garbage collection, et cetera. And if you want to make Octane faster, you optimize one subsystem. But if you want to make those real websites faster, you optimize a different subsystem. And we realized that these kinds of lab benchmarks were not necessarily making things better for users. They were just improving a number that wasn't really measuring a real user experience. 

DAN_SHAPPIR: You kind of remind me of a funny story that I recall from some two decades ago. There was this real competition between C++ compiler makers over who could generate the fastest code from the C++ code that was handed to them. And there were also a bunch of benchmarks. And one of them was the Sieve of Eratosthenes, which is like this algorithm for calculating prime numbers. And the test would be to take the C++ code and see which resulting executable could generate a certain number of prime numbers the fastest. And what some of these compiler makers did is that they would literally identify that benchmark code, and then instead of actually compiling it, would just output a hand-optimized bit of assembly language that calculated these numbers really, really quickly, just so that they could win the benchmark. Obviously that's an extreme example, but what you're saying totally makes sense to me: the fact that you can easily optimize the wrong thing if you're not looking at real-world data, that what you're measuring often determines what it is that you optimize. 
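For listeners who haven't seen it, the Sieve of Eratosthenes that Dan mentions is short enough to sketch in a few lines of JavaScript:

```javascript
// Sieve of Eratosthenes: find all primes up to `limit` by repeatedly
// crossing out the multiples of each prime found so far.
function sieve(limit) {
  const isPrime = new Array(limit + 1).fill(true);
  isPrime[0] = isPrime[1] = false;
  for (let p = 2; p * p <= limit; p++) {
    if (!isPrime[p]) continue;
    // Start at p*p: smaller multiples were crossed out by smaller primes.
    for (let m = p * p; m <= limit; m += p) isPrime[m] = false;
  }
  const primes = [];
  for (let n = 2; n <= limit; n++) if (isPrime[n]) primes.push(n);
  return primes;
}

// sieve(30) → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A kernel this small and regular is exactly the kind of benchmark a compiler can pattern-match and special-case, which is what made the gaming Dan describes possible.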

ANNIE_SULLIVAN: Yeah, that's exactly... Sorry, go ahead. 

CHARLES MAX_WOOD: No, it's all good. I'm running for school board and it makes me think of the people who teach to the test. Can they actually do math? 

ANNIE_SULLIVAN: Yeah, yeah. My kid's in his third standardized test this year. Yeah, I definitely hear you. But Dan, that's exactly what we were trying to do on Chrome, go from having these individualized benchmarks to really looking at what real users are seeing in the real world. So basic things, like how fast do web pages load? It turns out that there weren't actually perfect metrics for that 10 years ago. We had this thing called Speed Index, and Speed Index is awesome. It looks at the average time it takes to display pixels on the page when the page is loading. The problem is that we couldn't get this metric for real users because it's slow to compute. And Pat Meenan, who created WebPageTest, helped my team, which at the time was working on resource prioritization. I don't know how much you've all done page load optimizations, but a big part of it is which resources come in which order. And so our team was like, oh, maybe if we kind of tweak the resource prioritization, we can make pages load faster. So Pat was like, great, show me your changes and I'll run like 100,000 pages on WebPageTest and we'll see what changed between the old and the new version. And he did that. 

DAN_SHAPPIR: Again, just to interrupt you for a second because our listeners might not be familiar with it, webpagetest.org is this amazing online tool that can analyze the performance of websites in what's known as a lab-type scenario. It basically runs the site in a virtual machine and looks at the CPU, the network traffic, the various resources that get downloaded, and how the screen gets rendered. And obviously, you can use it for your own website. In fact, you can run it from various locations around the world, which makes it possible to test how your site loads in those various locations, on various devices, on various types of browsers. It's an amazing tool. But beyond that, and correct me if I'm wrong, I think you're also running WebPageTest automatically on the various websites that Google collects performance information for. So it also makes it possible to analyze not just one site manually, but, like you said, to look at thousands of websites and see the impact of things that you're changing.

ANNIE_SULLIVAN: Yeah, this is one of the early ways that we started looking at our changes. Now we usually run experiments for real users, where we just change something and see how it impacted things. But we started with this large-scale lab test using WebPageTest. I make a change to Chrome; how does before compare to after? What does WebPageTest say is different on about 100,000 web pages? And WebPageTest gives you a couple of different metrics. It gives you this really cool Speed Index, which is this visual thing that the user can see: when did they actually see the web page content? And it also gives you more traditional page load metrics, like the onload event. So the onload event, window.onload, wasn't actually meant to be a performance metric, right? It's a measure for, like, when might I want to inject some JavaScript: all of the subresources on the page are loaded, everything's parsed, et cetera. And we had thought that this was just a fine proxy for web page performance. But what Pat found out when he ran this test was that sometimes our changes would make onload faster and would make Speed Index slower. And so again, we see this thing where we've got a bit of an artificial metric, and we can make changes that make the real user experience worse while the metric gets better. And so the first thing that happened was the team set out to make metrics that really were able to measure in the field, for real users, what they are seeing. And we hooked into the paint system and we made First Contentful Paint and Largest Contentful Paint. And then we went back to the lab and we found that Largest Contentful Paint is very highly correlated with Speed Index. So this is a new metric that's very simple: what is the largest image or text that paints to the screen? And that really does correlate visually with what the user sees as a page load. Sorry, Dan. 

DAN_SHAPPIR: No, I was just saying I'm really curious about Largest Contentful Paint, because it's a relative latecomer compared to some of the other metrics that you've mentioned and others. I think that First Paint and then First Contentful Paint predate Largest Contentful Paint significantly. For a while, I recall the Lighthouse lab test actually using First Meaningful Paint, which is kind of problematic because it's kind of subjective, or at least a result of certain heuristics. But I'm really curious about that, because effectively you guys invented Largest Contentful Paint. It didn't exist. As I recall, before Core Web Vitals was announced, I don't recall ever hearing of it anywhere else. Whereas, you know, those other metrics that you mentioned, like Speed Index, have been around for a long time. So I'm wondering, how do you invent a new metric like that? 

ANNIE_SULLIVAN: So we didn't fully invent Largest Contentful Paint. I think there's some pre-existing work that Steve Souders and SpeedCurve were doing where they were trying to do something very similar, but they called it the hero image: can we automatically detect what the hero of the page is? And so we were very inspired by that. But also, I think there's a component of it that's about what's possible to measure, right? With Speed Index, a big innovation that led to it was that WebPageTest can take a video of the web page. And if you have a video of the web page, you can look at it pixel by pixel. So now it's possible to have Speed Index. And a very similar thing happened with Largest Contentful Paint, where we worked together with the team that actually manages painting in Chrome's rendering pipeline. And they were able to tell us, image by image, text piece by text piece, what's painting when, in a way that's standardizable. And that's actually a pretty new thing that we had not been able to do before. And so that's why, all of a sudden, we were able to make this metric when it wasn't possible before. 

DAN_SHAPPIR: I think it is worth noting, by the way, that while First Contentful Paint is now measurable, let's say, also in WebKit, and maybe also in Firefox, I don't recall off the top of my head, Largest Contentful Paint currently is only measurable on Chromium-based browsers. So it's only really available in Chrome, Edge, and so forth. I think that currently Apple, or the WebKit people, don't really intend to implement it as far as I recall. Or at least they're saying that they have issues with it, or something like that.

ANNIE_SULLIVAN: Yeah, currently, their latest feedback was that they were in favor of something where you could measure image by image, which is Element Timing. So I'd love to sync up with them on that again, because you could calculate LCP from Element Timing. But so far they haven't expressed interest in implementing it. But Mozilla is prototyping LCP right now. So we're really, really excited about having it in a second browser and getting that feedback and hardening that metric up. 
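As a rough illustration of the Element Timing API Annie mentions: an element opts in through an `elementtiming` attribute, and a PerformanceObserver then reports when it was rendered. This is a hedged sketch; the identifier `hero-image` is just an example.

```javascript
// Element Timing sketch. An element opts in with markup like:
//   <img src="hero.jpg" elementtiming="hero-image">

// Pure helper: image entries report renderTime only for same-origin (or
// Timing-Allow-Origin-cleared) resources, so fall back to loadTime when
// renderTime is zero.
function elementTimestamp(entry) {
  return entry.renderTime || entry.loadTime;
}

// Browser-only wiring, guarded so this file is inert outside the browser.
if (typeof document !== 'undefined' && typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.identifier, elementTimestamp(entry));
    }
  }).observe({ type: 'element', buffered: true });
}
```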

DAN_SHAPPIR: Cool. That'd be awesome. Yeah. So you guys kind of worked with the people who were responsible for the rendering pipeline, throwing ideas around, and that's how LCP came to be, influenced by Speed Index as it were. 

ANNIE_SULLIVAN: And influenced by SpeedCurve as well, and the work that Steve Souders did on hero elements. 

DAN_SHAPPIR: And once you had this idea, how did you actually verify that it's worthwhile, that it actually measures something that's worth measuring? 

ANNIE_SULLIVAN: Yeah, so validating the metrics takes a really long time, and that's where I spend a lot of my time day to day. The first thing we do is we go to the lab and we measure a bunch of web pages. So at first we didn't necessarily know that we were gonna do Largest Contentful Paint. We kind of had four things that we were gonna try: the largest text paint, the largest image paint, the last text paint, and then the last image paint. And we worked with the paint team to be able to piece together when each of these things happened in a Chrome trace. If you're not familiar with Chrome tracing, it's like a performance debugging system in Chrome where we can just output a bunch of things, and then a lab tool can come along and read that and reconstruct some things about what might've happened. So I started with about 10,000 pages, with our lab debugging system outputting when all these different things were happening. I'd also output a filmstrip. We looked at, first off, are they even different? When are they different? Why are they different? We looked at the filmstrips to see when they are the same; does the page look like it loaded there? Then when they are different, which one looks better and why? From there, we got more confident that largest text or image paint would probably be the thing that we wanted, as opposed to individually splitting out text versus images or having the last paint. So that made us really confident in the idea of Largest Contentful Paint. And then we implemented it in Chrome in our monitoring system and added what we call URL-keyed metrics logging. That's the thing that logs to CrUX. So we can see what the LCPs of various pages were, and then we can go and look at outliers, or look at pages where metrics are different. That's one of the things that helps us a lot in understanding when a metric might not be working right. We have some ideas about what we might expect. 
So, for example, we know Wikipedia is really fast. And then we go and we look up what's the LCP of Wikipedia, and we would expect it to be lower than the LCPs of sites that we think are slow. Or we compare it to onload. We did a correlation analysis between LCP and FCP, Speed Index, and onload. And it's much more correlated to Speed Index than to the other metrics. So that also gave us some confidence: people really like Speed Index. They like how it's showing you that visual indicator of the page loading. It's got nearly a decade of feedback. And we're seeing that LCP is more correlated with that than with other metrics, which we know have some edge cases that don't work out well. That was really exciting to see too. 
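The correlation analysis Annie describes can be illustrated with a plain Pearson correlation coefficient; the function below is a generic sketch of that kind of statistic, not the Chrome team's actual tooling:

```javascript
// Pearson correlation coefficient between two equal-length samples,
// e.g. LCP values vs. Speed Index values across many page loads.
// Returns a value in [-1, 1]; closer to 1 means the candidate metric
// moves with the trusted one.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}
```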

DAN_SHAPPIR: So two comments. First of all, you mentioned the term CrUX, which I don't know if you mentioned before. Just to tell our listeners, we had a whole episode with Rick Viscomi, who effectively manages CrUX at Google, or something like that. And CrUX is this cloud-based database into which you collect performance data from Chrome sessions out in the wild, right? As I recall, if you've enabled browser history sync and some other things, and don't enable some other things, then you guys collect anonymous performance data from those sessions and put it into this CrUX database, where you obviously have access to it. But amazingly, you also give everybody else access to it, at least for read, obviously not for write, and also in a slightly restricted way for privacy reasons. But other than that, it's pretty free access, or at least if you're running BigQuery, you need to pay for the queries, but not for the data itself. 

CHARLES MAX_WOOD: I had a question, just to kind of back up on this a little bit, because we're talking about how you can get a feel for, hey, this website performs well, that site doesn't perform well, or maybe this site performs well and that site performs better. You said that you built the Core Web Vitals, you built Largest Contentful Paint, into Chrome, and now they're building it into Firefox. Is that the same thing? Is that building the reporting in for that, the measurement in for that? Or... 

ANNIE_SULLIVAN: That is a good question. 

CHARLES MAX_WOOD: Yeah. And for Firefox, what does it mean for them?

ANNIE_SULLIVAN: So yes, when we say Largest Contentful Paint, we specifically mean the specification. In this case, it is a web standard in the Web Performance Working Group called Largest Contentful Paint. And it actually specifies which paint is the largest contentful one and how we measure it; it goes over those details. And it's an API, which means that you can have a PerformanceObserver in your JavaScript that listens to the data, and it'll tell you when the largest contentful paint is. And then you can go and send it to your RUM provider or your analytics. So Mozilla is implementing that API in Firefox. 
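To make that concrete, here is a hedged sketch of what listening for LCP with a PerformanceObserver looks like in the browser; the `/analytics` endpoint is hypothetical, standing in for whatever RUM provider you use:

```javascript
// Pure helper: LCP candidates only ever grow, so the final LCP is the
// most recent candidate entry observed.
function finalLCP(entries) {
  return entries.length ? entries[entries.length - 1] : null;
}

// Browser-only wiring, guarded so this file is inert outside the browser.
if (typeof document !== 'undefined' && typeof PerformanceObserver !== 'undefined') {
  const candidates = [];
  new PerformanceObserver((list) => candidates.push(...list.getEntries()))
    .observe({ type: 'largest-contentful-paint', buffered: true });

  // Report when the page is hidden: by then LCP has been finalized,
  // since the browser stops emitting candidates after the first input
  // or scroll.
  document.addEventListener('visibilitychange', () => {
    const lcp = finalLCP(candidates);
    if (document.visibilityState === 'hidden' && lcp) {
      navigator.sendBeacon('/analytics', JSON.stringify({ lcp: lcp.startTime }));
    }
  });
}
```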

CHARLES MAX_WOOD: Okay. So, so when you say they're implementing that API, I mean, what does that do for me? Is it just, does it just make me aware that it's being collected somehow? Or is that something that I can go look at or?

ANNIE_SULLIVAN: I think you would have to collect it yourself. 

DAN_SHAPPIR: Yeah, exactly. 

ANNIE_SULLIVAN: Go ahead, Dan. 

DAN_SHAPPIR: No, that's exactly what I was starting to say, and I apologize for barging in. But as a user of these APIs, you need to make an important distinction between Google, who are collecting this information directly from Chrome for any website, again, assuming that you as the user have not opted out of this process, so Google basically collects this data anonymously for any website that you visit, versus you deciding that you want to collect this data yourself for your own website. So Google collects all this information, and you can look at the information that Google collects, and we can talk about some of the caveats that are associated with that. But if you want to have more control over the data that you're collecting, as it were, you can either purchase and integrate a third-party solution from SpeedCurve, from Akamai, from Sentry, there are like a whole bunch of providers out there, and then you put it in your own website and you start collecting data for your own website. And that's distinct from the data that Google collects. Now, previously, those APIs were only available in Chromium-based browsers. So if you, let's say, integrated SpeedCurve into your website, they could collect performance data from sessions on Chrome, but they could not collect all the same data from sessions on Firefox. Well, if Firefox does implement these APIs, then they would be able to collect this information for Firefox as well. But it doesn't mean that Firefox will start sending their own data into the Google database. Well, I mean, they might decide to, I don't know, but it doesn't automatically happen.

CHARLES MAX_WOOD: I think you kind of answered it, but I want to just clarify this, and you can just tell me if I'm right or wrong. Effectively, what you're saying is that this is an API that is callable in JavaScript or through some other mechanism within the browsers. If I'm running Raygun, who is our sponsor here, similar to Sentry or whatever, though Sentry has sponsored in the past. Anyway, I think a lot of them have implemented these Core Web Vitals metrics. What they do is, when they're running on the page, they make the call to the API in Chrome and say, hey, it loaded the page again, give me the largest contentful paint for this guy. And then they're aggregating that where I can see it, just as Chrome is doing the same thing for Google so that they can rank my page. 

ANNIE_SULLIVAN: Yep, and they may have additional data about your page that will help you more. 

CHARLES MAX_WOOD: Yeah, okay, that's good. I was trying to figure out what that meant, but that makes a lot of sense because then I can go get a tool that can aggregate the data as my page loads on different phones, computers, or tablets, or what have you, and then come back and say, you could really reduce this by making an image smaller, or by doing these couple of optimizations, and that's not information I'm going to get from Google. 

STEVE_EDWARDS: So Annie, you mentioned that Firefox is implementing this. What about other browsers? I know, for instance, Edge is built on Chromium, so does that mean it's in there? What about Safari, or Opera, or some of the other many browsers that are out there? 

ANNIE_SULLIVAN: Yeah. So as far as I know, it's available in all Chromium-based browsers, which would be like Edge, Brave, Opera, Chrome. Safari has not, they have not made a commitment to implement Largest Contentful Paint there. So if you're interested, you could give feedback directly to the WebKit team to see if they'd be willing to do that. That's the best answer I can give. 

STEVE_EDWARDS: Safari seems to be the IE6 of the day anymore, from what I've heard. I've seen that thrown around in a lot of different places. So I was just curious to see what other browsers might be implementing this as well. 

DAN_SHAPPIR: I would also mention that it's worth noting that some of the issues have to do also with rendering strategies. For example, WebKit, Safari, recently implemented First Contentful Paint. Friend of the show Noam Rosenthal, who was our guest on several occasions, was actually the person who contributed the code into WebKit that added this capability, and then Apple approved this change. But one of the issues is that the first paint, or first contentful paint, for the same page could happen at different times on Safari, on WebKit, and on Chrome using Blink. Because they have different strategies for deciding how long to hold on to the image of the previous page before rendering the content of the new page, like what is the threshold to erase the previous view and start rendering the new view. And there was a concern that because of these different strategies, one browser might seem to have better performance than another browser, even though that might not really be the case.

ANNIE_SULLIVAN: Yeah, I think paint holding is the term for that. And the biggest difference is in the first paint, more so than the first contentful paint. And so WebKit decided not to implement the first paint metric, because for them, first paint and first contentful paint are always the same. Whereas there are some situations where Chrome would paint a skeleton full of divs, for example, where WebKit might hold that paint until there's some text to display as well. 

AJ_O’NEILL: So I've noticed a lot of these modern pages do this. It's almost like watching one of those old progressive JPEGs load from the nineties, where they load a bunch of text blocks and then they load some content and they load some more content. It just feels like websites are getting really slow, but what is that? Is that a performance hack? Is that just to look goofy? Do you know anything about that? You know what I'm talking about, where they load a skeleton page that's got no content and then... 

ANNIE_SULLIVAN: Yeah. So I have seen them loading no-content skeletons. The first thing I'll say, and I think Dan and I talked about this a little bit in another context, is that there isn't a whole lot of user experience research about this: you have some end state that the page wants to be in, and how many intermediate states are good? Should it be very progressive and slowly load, or should it all snap in? And so I think that that's something that we're hoping to get more user experience research on, and to sponsor. But currently, we don't necessarily know. But I do know that a lot of the companies that are doing these sorts of skeletons have pretty advanced product teams and pretty advanced metrics. And so when I see these, at least in the larger products, I'm pretty sure that there's some reason that they're doing it, right? That their metrics look better when they have a skeleton versus when they don't.

DAN_SHAPPIR: Yeah, it kind of reminds me of what happens with progress indicators or spinners. Like is it better to have a spinner or better to have nothing at all in that area? And obviously the best thing to do would be to not need a spinner and just load the content more quickly. Uh, but, but given that it is what it is, you need to decide between showing nothing and showing a spinner and it's not always clear which is better. 

ANNIE_SULLIVAN: Yeah. It does seem that we want to show something, right? But how many somethings, and how progressive they should be, there's a lot of debate, and I think it's very unclear. 

AJ_O’NEILL: Yeah, it just seems strange to me, because it almost seems like a waste, right? Like, why load an image three times? Why load a website three times? Why not just load it once, faster? It just seems strange to me. You've got a site like Twitter, and I'm not sure if this is one of the ones that does it or not, but there are sites like this that do these skeleton loads, and they put the placeholder there with all these blocks and images and everything, and then it loads up with text afterwards. Why? Why not just get the text? It's just text. 

DAN_SHAPPIR: By the way, the one that immediately springs to my mind is YouTube, where when you load YouTube, especially on a slower connection, you get this grid of gray boxes, which then get replaced by the various poster images and the text that's associated with the various videos.

AJ_O’NEILL: Yeah, that makes a little bit more sense in terms of you're dealing with a grid of images rather than a grid of text. 

STEVE_EDWARDS: But I remember, I think part of the reason too, and this is something I've had to work with on larger sites, isn't so much the performance as it is that you're trying to improve the visual performance. So you want to avoid jank, for lack of a, I don't know if that's a technical term or not, but where the image is going to come in here, and there's no placeholder, so then the image comes in and pushes the page down and around and all that kind of stuff. So in my experience, it's been more of a visual performance thing, so that the user knows something's loading and they're not sitting there hitting refresh, refresh, just delaying their page load even more. So in my mind, that's sort of an indirect performance thing. 

DAN_SHAPPIR: And finally, 

STEVE_EDWARDS: you're informing your users that, yeah, something's really coming. It's coming and might be coming fast, but it's coming.

DAN_SHAPPIR: And funnily enough, there's a Core Web Vital metric for that as well, which is CLS, and that's about that visual jank. But before we get to that one, I wanted to ask you, Annie, about another thing related to LCP. So actually two things. One is, there is something that's kind of problematic for me with LCP, especially when I contrast it with FCP, which is the First Contentful Paint. The problem with the Largest Contentful Paint is, how do you know? How do you know that in, like, I don't know, a third of a second, an even larger paint won't happen? When do you stop is kind of like one of the big questions. Like, who's the last in line? I don't know. Maybe there's somebody after me. 

ANNIE_SULLIVAN: Yeah, so we do have that specified in the Largest Contentful Paint specification. If the user doesn't scroll or doesn't input into the page, then we just wait for the largest. And ideally, when you're sending a ping back to a RUM analytics provider after the largest contentful paint has occurred, you actually wait until that ends, right? After the user inputs or scrolls, or after they unload as they're navigating away from the page. So that's how the algorithm actually works. In practice, it's very unusual, past a few seconds in, for a larger image to actually show up. And then I think you may be about to ask the question, what if a big image is loading while they click? We actually don't count that edge case, for the reason that we're worried that it would encourage sites to push users to interact faster just to reduce the metrics. So if it's clear that the content is still loading while they're interacting, that page load doesn't count for LCP. 

DAN_SHAPPIR: Why did you decide to stop on user interactions?

ANNIE_SULLIVAN: Because it can be really difficult as the user is scrolling down. You can have larger, more content constantly coming in, right? Like if you're scrolling down your Facebook feed, you could end up with the largest image half an hour in. But that doesn't happen if you stop there, because it's supposed to be a page loading metric, not an interaction-focused metric. 

 

So I'm here with JD from Raygun. JD, why did you start Raygun? You know, Raygun was actually the 11th product that we built. So if you're a fellow software engineer thinking you want to build something and build a business, this was the 11th try. And we built it because way back when I was writing more code for customers, I used to instrument my code to send an email to myself when something went wrong. And it would let me kind of get in front of the issue before the customer complained. And so we built a whole product called Raygun, for crash reporting initially. It'd expand out into other areas, but it was really just building a full solution to what I'd been doing years earlier to try and build better software. I love that. Just scratching your own itch. It makes a ton of sense. And I do that too, with some of the stuff that I'm doing either with podcasting or programming. Yeah, absolutely. The most awkward thing was when we actually instrumented some of those prior 11 products, and that's when we realized that only about 1% of users will ever actually report an issue. And you go, oh, we might've been a lot more successful earlier if we'd known that. So that's kind of the whole value prop of Raygun. Yep, absolutely. And it makes sense just to put it in there. So folks, if you're looking to try something like this, that'll tell you what your problems are, go check out raygun.com and get a free trial. 

 

DAN_SHAPPIR: Another way that I kind of explain it, but I don't know if that's, um, something that you even considered, so maybe it's just something that I thought of was that largest contentful paint kind of strives to measure when a visitor perceives the page as loaded, visually at least. And if the user has decided to start interacting with it, then it is a sort of an indication that it's sufficiently loaded from their perspective and consequently whatever was the largest contentful paint up to that point is kind of like good enough. But I don't know if that was one of your considerations. 

ANNIE_SULLIVAN: So if all the content is loaded and you start interacting, then we call it good enough. It's definitely less than 5% of pages where the content is still loading while they're interacting, and we discount those. But we're not quite sure what the best thing to do is in those cases. 

AJ_O’NEILL: I feel like that's a lot more than 5%. It seems like a lot of the web is loading while you're thinking you can click, and then nothing's happening and you've got to refresh. 

ANNIE_SULLIVAN: I mean, if you've got to refresh, those pages are abandoned. But it is not that many pages that have a first contentful paint and then no largest contentful paint because of an interaction. It's kind of interesting. It seems like maybe users have been trained not to interact with pages. 

AJ_O’NEILL: I think things have gotten so bad. We don't wait for the loading spinner. We don't wait for the thing to show up. We just sit and we just wait.

ANNIE_SULLIVAN: I think that might actually be part of the reason that first input delay, which I haven't really touched on and is the last Core Web Vital we're talking about, has such good scores: people do seem to wait before they interact with the page. 

DAN_SHAPPIR: Before we get to that one, I would actually like to talk about CLS, or cumulative layout shift. Yeah. And that's the one that kind of goes to that scenario Steve mentioned, where you have jank as things move around. I think everybody has experienced a situation where we're reading some news article and then an ad gets loaded and pushes everything down, and all of a sudden we lose our spot, and it's really annoying. So that's the thing that gets measured in CLS. And I'm really curious about how that metric came to be, because in a lot of ways it's not really measuring performance. It's measuring experience, and yet you've kind of put it into this box of performance metrics. So how did this kind of metric occur to you guys? 

ANNIE_SULLIVAN: So first off, the Core Web Vitals, I mean, they're not the full complete experience of the web page yet, but we're aiming to make them more and more focused around the whole user experience and not just performance. But the idea behind the metrics is that they're kind of like a three-legged stool. We were worried, right, that somebody could make their largest contentful paint very fast by delaying all the scripts and making interactivity very slow. Or you could make it fast with kind of the worst case of what AJ is talking about, where content is slowly appearing little by little and it's really jarring. And so we thought if we had these three metrics: first, largest contentful paint, how fast does the page load? Then first input delay, how interactive is it during load? And lastly, cumulative layout shift, how much are things moving around? Those metrics would really balance each other and overall show a good user experience. But it was really exciting to work on cumulative layout shift, because I think it really is one of the first metrics that isn't performance focused, that's more user experience focused. 

DAN_SHAPPIR: And also, it doesn't actually end at page load. I mean, with LCP, like you said, it usually happens fairly soon after the page finishes loading, at a certain point in time. And FID, which we'll get to, is the first input delay, so obviously it's just the first time that the person interacts with the page. But CLS initially did accumulate throughout the entire lifetime of the page. Now, you kind of changed that about a year ago, I believe. So it's not just about the loading experience. It's about something that could theoretically happen fairly late in the lifetime of the page, after it's really totally done loading. So could you talk about that? 

ANNIE_SULLIVAN: Yeah. So the change that we made was, instead of accumulating through the entire time the page is open, even if it's open for an hour or an hour and a half, we just take the worst burst of layout shifts that occurs while the user is browsing. So we still do measure layout shifts over the whole time the person is on the page, and there are some really bad experiences that we catch. One of those is you're scrolling and the images don't have the width and height set, so they pop in and you can't figure out where you were. You're scrolling and then all of a sudden you're lost. So it captures that really frustrating experience. One other experience we realized was really frustrating is a single page app navigation. Sometimes they're beautiful and perfect and everything fits in the skeleton just right. But sometimes not only do they have a lot of layout shifts, but because you can click in the middle of the page or at the bottom of the page, the layout can shift kind of like a star outward from that click. And you just find that very, very jarring as well, that the layout shifts might happen in many directions. 

AJ_O’NEILL: What do you mean by a star? 

ANNIE_SULLIVAN: So. Like normally, like when you're browsing the content kind of shifts down more. But if you click like in the middle of the page and it does a single page app transition, sometimes some content shifts up, some content shifts sideways and some content shifts down. And it's very jarring when that happens. It, it feels weird. 

DAN_SHAPPIR: Basically, single page applications, in an effort to create what they might consider a better transition effect, and to utilize the fact that you're technically staying within the same page, reuse visual components from the quote-unquote previous page on the quote-unquote new page. But then they have to shift them around and rearrange them if the layout of the page has changed, so things can get pushed in various directions. But again, with Largest Contentful Paint, like you said, it was kind of the stepchild of First Contentful Paint and Speed Index, where you merged them together and that's what you ended up with, and it's a metric that actually measures time, which is a really natural thing. With cumulative layout shift, it can be technically challenging to even explain what it measures, because it's kind of like the product of the impacted area and the ratio of movement. It's this unitless number, and I know a lot of people are struggling to even figure out what it is, or even know what the limits are. Obviously it could be as low as zero, but what's the highest possible value? It's not obvious. So it seems kind of really out there. How did you guys come up with this?

ANNIE_SULLIVAN: Yeah, so first off, the goal was to come up with something where we didn't incentivize people to load their pages really fast but have content sliding in from everywhere, making it a confusing experience. So we were thinking about ways we could make that happen. And again, we had this new innovation where we could hook into what the paint system is seeing, and when it's painting content, know when content is unexpectedly moving. But as you said, it can be a little bit complicated to translate. The paint system sees boxes, right? Boxes moving unexpectedly. We have to turn that into something the developer can understand. So what we did was, each time a box shifts unexpectedly, we take how much of the viewport is affected and how far the box shifted. So ideally the limit for CLS would be zero, but the threshold we've set for a good score is 0.1, which is roughly like 10% of the page moving.
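The scoring Annie describes, the fraction of the viewport affected multiplied by how far things moved, can be sketched as a tiny function. This is a simplified one-dimensional illustration; the real definition in the Layout Instability spec works on two-dimensional regions, and the function and parameter names here are invented for the example.

```javascript
// Rough sketch of a single layout shift's score: impact fraction (how much
// of the viewport is touched by the shifting content) times distance
// fraction (how far it moved, relative to the viewport).
function layoutShiftScore(viewportHeight, impactRegionHeight, shiftDistance) {
  // Impact fraction: portion of the viewport affected by the shift.
  const impactFraction = Math.min(impactRegionHeight / viewportHeight, 1);
  // Distance fraction: movement distance relative to the viewport size.
  const distanceFraction = Math.min(shiftDistance / viewportHeight, 1);
  return impactFraction * distanceFraction;
}

// An ad pushes content occupying 600px of an 800px viewport down by 80px:
// 0.75 * 0.1, so a score of roughly 0.075, under the 0.1 "good" threshold
// on its own (though scores from multiple shifts accumulate).
console.log(layoutShiftScore(800, 600, 80)); // ~0.075
```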

DAN_SHAPPIR: Now, one interesting thing that I find with CLS is that, to a certain extent, and I might be totally off on this, it feels like you're kind of fighting against the way the web was initially designed to work. One of the salient or defining aspects of the web, at least as it began as a document display system, is that content flows. You don't need to specify the initial size of an image. When it downloads, the intrinsic size determines the final size, and it pushes content aside so that stuff doesn't overlap and you can properly read the content of whatever document you're viewing. And you're kind of saying, no, this isn't good. You need to specify in advance exactly what size everything will be. It's kind of like saying you should position: absolute everything, and I'm intentionally being extreme here, but that to an extent feels like what you're saying. Or, down with flow, just use grid layout, all the other layouts are not so good anymore. That's kind of how it feels to an extent with CLS, because otherwise you end up with potentially poor CLS scores. 

CHARLES MAX_WOOD: I'm just going to use the tables. Just use tables. 

ANNIE_SULLIVAN: Tables do work too, but I think you can use the dynamic layout system. It's just that we wanted you to specify: you're going to send an image down, how big is that image? I think that's a lot of the reality of modern web development. Ideally you do think about what the dimensions of that image are, what it's going to look like on different screens, how big it will be displayed, whether you should be using a srcset or things like that for images. And for other dynamic content, just specify a size for the dynamic parts. The browser still does quite a bit of layout. 

DAN_SHAPPIR: So basically you're saying the web today is really different than what it was back in the late 90s. 

ANNIE_SULLIVAN: It is quite different, yeah. 

DAN_SHAPPIR: So as I said, you made the big change in the way that CLS gets computed about a year ago, and I think that was the biggest change you've made to Core Web Vitals since they were introduced. Can you speak a little about that? How did you decide to make the change? What was the change that you made, and how did you verify that the change was actually a good one? As I recall, you were also considering different strategies, and you came up with the winner. 

ANNIE_SULLIVAN: Yeah. So when we first launched CLS, we were really focused on numerically understanding the web. Basically, you have individual pieces of content that shift around, and we add those all up into a cumulative score, the cumulative sum of all the content that's shifted. And numerically, when we looked at the data overall, the time that people spent on a page didn't really impact the score. But when we released the metric, we got a lot of feedback from web developers where, in their specific situation, a lot of tiny shifts were impacting their score in a negative way. There were some implementations of infinite scrollers that we found were negatively affected, and some single page apps that were open for a very long time that were negatively affected. And what we realized is that we have to look at the qualitative data and not just the quantitative data when we're making a decision about a metric. If we're basically telling developers there's no good solution, don't use an infinite scroller, or don't have your page open too long, that's not good enough. So we decided it was important to take that developer feedback and change the metric into something where any page could be open a very long time and the metric wouldn't be affected. And then we set out on how to do that. We thought of a lot of different ways: maybe it should be the average layout shift, maybe it should be a sliding window. What we ended up using was a session window that kind of expands around a bubble of layout shifts, because layout shifts sometimes happen across multiple frames. A lot of the time you'll have a burst of layout shifts in several frames in a row, and we wanted to capture that, so we looked at different windowing approaches. 
We looked at averages, we looked at just taking the worst individual layout shift. And what we did first was go back to our idea of doing a lab study. We recorded a bunch of user experiences that either we'd seen or that people had given us feedback about: single page app interactions, scrolling, page loads with various amounts of layout shift in them. We recorded these interactions in Chrome, took a video of them, and then used Chrome tracing to capture all of the data we had about the individual layout shifts for these user experiences. We asked about 50 different users internally at Google, but not people that were on our team, to rank the user experiences and say what they thought were best and worst. And then we came up with a bunch of different competing ways to summarize the layout shifts. Should it be the average? Should it be a session window? How big should the session window be? We ended up with 150 or so strategies that we made by permuting all these different options: how long is a session window, what type of window is it, are we using the average or the maximum, et cetera. And when we put the user study data together with the different approaches, a couple of things came out on top. The strategies that measured a group of bad layout shifts together, some kind of bubble of bad layout shifts, really correlated with how people felt about the user experience. What did not go well was averaging layout shifts, where we saw tiny layout shifts would wash out the bad experience. So we came up with a couple of different variations on that bubble of bad layout shifts and implemented them in Chrome's logging. Then we compared them on real user data: if we ranked all the websites by each metric, which ones were most different and why. And that's how we came up with the idea of a session window that has up to one-second gaps. 
So if you have a bunch of layout shifts and then they stop for a second, that's your window. The window bubbles around a series of layout shifts, for up to five seconds. 
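The session-window approach Annie just walked through can be sketched in a few lines: group shifts that arrive within one second of each other, cap each window at five seconds, and report the worst window. Again this is an illustrative simulation with invented names, not Chrome's implementation, which handles frame timing and windowing edge cases more carefully.

```javascript
// Sketch of "worst burst" CLS: windows grow while shifts keep arriving
// within 1s of the previous shift, each window is capped at 5s total, and
// the final score is the largest window's summed layout shift score.
function cumulativeLayoutShift(shifts) {
  // shifts: [{ time: ms since navigation, score: number }], sorted by time
  let maxWindowScore = 0;
  let windowScore = 0;
  let windowStart = -Infinity;
  let prevTime = -Infinity;

  for (const s of shifts) {
    const gapTooLarge = s.time - prevTime > 1000;      // >1s since last shift
    const windowTooLong = s.time - windowStart > 5000; // window exceeds 5s
    if (gapTooLarge || windowTooLong) {
      windowScore = 0;        // start a new session window
      windowStart = s.time;
    }
    windowScore += s.score;
    maxWindowScore = Math.max(maxWindowScore, windowScore);
    prevTime = s.time;
  }
  return maxWindowScore;
}

// Two bursts: a bad one during load, and a tiny one almost a minute later.
// The reported CLS is only the worst burst (0.05 + 0.06), so a long-lived
// page is no longer penalized for slowly accumulating small shifts forever.
const cls = cumulativeLayoutShift([
  { time: 500, score: 0.05 },
  { time: 900, score: 0.06 },   // same window: 400ms gap
  { time: 60000, score: 0.02 }, // separate window, much later
]);
console.log(cls); // ~0.11
```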

DAN_SHAPPIR: I have to say that on the excellent web.dev website, you've got amazing graphics that illustrate all these concepts. They explain them amazingly well, but again, we have the limitations of this medium. So if somebody is a more visual person, they can find excellent examples over at web.dev. 

ANNIE_SULLIVAN: Yeah, I worked really hard on those graphics. 

DAN_SHAPPIR: No, they are excellent. I do have to mention that one of the reasons people were so engaged with Core Web Vitals from day one, and all the feedback you got about CLS, why people were upset that their website was getting a poorer CLS score than they thought they deserved, was the fact that it was announced that Core Web Vitals are a ranking signal in Google search. I know that you people at Google are really secretive talking about search, but can you maybe tell us a little bit about how this came to be, that Core Web Vitals became a ranking signal in the Google search engine? 

ANNIE_SULLIVAN: Yeah, so first I'll just say that it's not even that it's secretive outside of Google. It's also secretive inside of Google, in that search is its own organization. Most of the people that you talk to about Core Web Vitals are actually on the Chrome team developing the metrics, and we don't actually have any visibility into how search ranking works, other than what's been talked about publicly. But I can talk a bit about how the decision was made. A couple of years ago, there was a search ranking change where they just told people, we're going to include site performance in your ranking. And as soon as that announcement was made, all of the site performance numbers, like the LCP and the FCP, numbers people didn't even know how to compute at that point, they all went tumbling down. The performance of the web got way better, not just after the ranking change was enforced, but as soon as they told people we were going to do this. That was really exciting, but when we looked at it, we said it's kind of unfair. We didn't tell people how we were going to measure performance. We didn't do anything but say we're going to include performance in your search ranking. We thought that if we made open, standard metrics, were really clear about how they're measured, integrated them into our tooling, and kept it clear, just three simple metrics, maybe that would help the ecosystem improve in a more sustainable way. And so that was kind of the idea for Core Web Vitals. 

STEVE_EDWARDS: So, question for you, just speaking as a user. I understand that if I'm searching the web, people tend to get impatient in terms of performance. There are legends I've heard bandied around about how Amazon figured out that so much delay will cost them a billion dollars, because people are just strictly impatient. And so I'm going to play a little bit of devil's advocate here. I'm going to take on my AJ role. And AJ got a big smile there. So if I'm searching the web for some data, and there's a website out there that has the data that I'm looking for, I couldn't care less if it's a few seconds slower, if it has the data that I'm looking for. Okay? And so I'm trying to understand the reasoning for including performance in page ranking when it comes to looking for content. To me, that sounds sort of like a top-down forcing thing, sort of along the lines of the way our current administration is trying to force everybody into electric vehicles and away from fossil fuels because they've determined it's better for everybody. When I want to find some information, whether it's on a disease, on web development, whatever the topic may be, my concern is that the site has the data, and not so much the performance. So I'm trying to understand the logic of including performance in page ranking when it comes to search. 

ANNIE_SULLIVAN: So I think the important thing to think about here is that there are many factors that go into search ranking, and Core Web Vitals are one of those factors. Most of the time, there are multiple sites that can give you the information you need, and you want it quickly. And so that's why it's available as a ranking factor. But obviously, about:blank is the fastest page, and that's never going to come up in your search results, because it doesn't have the content you need. It's not the only factor. 

DAN_SHAPPIR: I can even give a reverse example, I guess, which is the fact that if you search for news, CNN will come up in, I think, second place, right after Google News. Surprise, surprise. And that's despite the fact that the CNN website is atrociously slow. It has terrible scores. When people search for news, they care about the news, and they expect to see CNN and to get at the content that CNN has. And that's the reason that CNN ranks so high, even though their performance is pretty poor. So like Annie said, it's just one consideration. But I think you would agree that if there are two websites that have essentially the same information you're looking for, and one of them loads that information in, let's say, half a second, and the other one loads it in five seconds, you would prefer the faster one. And at the end of the day, the way that Google sees it with Search, I think, is that the customer is not the website. It's the user. They're trying to please the user, to give the user an experience that the user would enjoy, including how quickly the system, as it were, responds to a click on the Google search results. 

STEVE_EDWARDS: Yeah. So Annie, I know you said search is really sort of a walled enclosure, from a Google standpoint, from everybody else, to keep their proprietary stuff proprietary. Do you have any inclination in terms of where Core Web Vitals rank in the big bucket of criteria that determines whether a page ranks high in search? Is it, for example, only 2% as compared to 20%, or something along those lines? Do they give you any indication of the priority of Core Web Vitals in the overall search algorithm? 

ANNIE_SULLIVAN: No, my guess is that the overall search algorithm is very context specific and kind of not that simple. 

STEVE_EDWARDS: Oh, I'm sure it's complex. We've...Who's the dev rel we've talked to before, Dan? 

DAN_SHAPPIR: Martin Splitt, I think it was. 

STEVE_EDWARDS: Yes, Martin Splitt. Yeah, that was mind numbing just listening, not mind numbing, but dizzying, shall we say, listening to the complexities of that. 

DAN_SHAPPIR: It ended up as two episodes because the conversation just ran so long, because there was just so much information. But yeah, it was an awesome conversation, and it's really mind-boggling how complex and sophisticated the system is. But I'm still curious about this idea of using Core Web Vitals as part of the search ranking instead of whatever arbitrary mechanism existed before. Is that an initiative that came from the Chrome people or from the search people, if you can say? 

ANNIE_SULLIVAN: I'd say that the Chrome people were really excited about the idea and talked to search people about, you know, like, this is a way we think we can make it better.

DAN_SHAPPIR: And for the final metric, I'm giving a bit of a spoiler: we'll actually have an upcoming episode where you'll join us again to talk about it in a lot more detail, and about how you're thinking about changing it. But can you briefly explain what that final metric, first input delay, is? It's also currently part of Core Web Vitals. 

ANNIE_SULLIVAN: Yeah, so first input delay measures an amount of time. I don't know if people are familiar, but JavaScript blocks the main thread: by default, everything runs on the main thread, and that means the user interface is going to be blocked until all this stuff runs. So first input delay measures the time from when the user first interacts until the event handlers are able to run. It's a measure of main thread blocking time. But there are some problems with it. First off, there have actually been significant improvements since the Core Web Vitals launched in how pages handle user inputs. People started using the requestIdleCallback API to basically yield to user input immediately. And Chrome also made some changes: when a user input occurs, we no longer wait as aggressively for the double-tap delay. So the pass rates for first input delay have gone from 83%, I think, to about 97%. At this point, first input delay isn't really measuring main thread blocking time as well as it could be. And so our team is working on a bunch of improvements to better measure how user interaction with the page is going, because there are still problems with it. 
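The measurement Annie describes is just the gap between when the input arrived and when the main thread was free to run handlers. A minimal sketch, using the field names from the browser's Event Timing entries (`startTime`, `processingStart`); the helper function and the sample entry are invented for illustration.

```javascript
// First Input Delay: time from the user's first interaction until the
// browser's main thread is free to start running event handlers. Note it
// is only the blocking time *before* handlers run, not handler duration.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// If a tap landed at 3000ms but a long task kept the main thread busy
// until handlers could start at 3120ms, FID is 120ms.
console.log(firstInputDelay({ startTime: 3000, processingStart: 3120 })); // 120

// In a real page this entry would come from a PerformanceObserver
// (browser-only, shown here as a comment):
//   new PerformanceObserver((list) => {
//     for (const e of list.getEntries()) report(firstInputDelay(e));
//   }).observe({ type: "first-input", buffered: true });
```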

DAN_SHAPPIR: Yeah, I think that first input delay is a great example of a metric succeeding to such an extent that it effectively becomes irrelevant. Because like you said, once we've gotten to the point where the aggregated average for all websites, I think, is something like 93%, then effectively it's like a school teacher giving an exam that all the kids ace. What are they actually even testing? 

ANNIE_SULLIVAN: Yeah. 

DAN_SHAPPIR: So yeah, as you said, you guys are working on a new metric, and like I said, we'll have you over again in a couple of weeks with members of your team to actually talk about the new metrics you're looking at that might replace first input delay. And I think it's really great that you're updating and enhancing Core Web Vitals as you go along, both based on the things that you learn from looking at the results, and as a result of what's actually happening out there as our industry reacts to the Core Web Vitals that you've put out. And from my own perspective, when you released Core Web Vitals and made them official, we had something to optimize for. At that time I was at Wix, and it really changed the way that we went about optimizing: identifying performance issues, addressing them, and getting the whole team up to speed on what we were aiming for. It really made a huge difference for us. 

ANNIE_SULLIVAN: Awesome. That's exactly what we hope would happen.

DAN_SHAPPIR: On the other hand, I do have to say that we still have a lot of work ahead of us, because I'm looking at the CrUX data, and the numbers are still not that great. I mean, with everybody effectively passing FID, you're really only measuring us on two of the three. The stool really has two legs now, rather than three. And despite that fact, when I look at the general data for all websites in the CrUX database across the globe, only about 40% of websites actually get a passing grade for all the Core Web Vitals, which I think is still not where we want to be. 

ANNIE_SULLIVAN: Yeah, we definitely have a lot of work to do. 

AJ_O’NEILL: Half of that 40% is probably because they're from the 90s and they're just loading text like you said before.

ANNIE_SULLIVAN: Well, the thing is a lot of the sites from the 90s performed really, really well because they just don't try to do so many things. So I do think we have a long way to go and I'm really excited to just keep going with it. 

STEVE_EDWARDS: I would think those older sites would be really quick because they're just straight HTML and CSS and maybe a little bit of JavaScript. 

AJ_O’NEILL: They say an image is worth a thousand tables. 

CHARLES MAX_WOOD: Well, the thing that I think is interesting is that when it comes down to these kinds of metrics or measurements, they're, A, never going to be perfect, and B, people are going to find ways around them. We talked about that with the compiler examples and stuff like that. People are going to find a way to master those particular things. But the flip side is that as we start looking at these, measuring them, and evaluating how they affect web performance for users, then we can start moving the process along and saying, oh, well, what about this? And so we can start adding things as we go, making those measures more along the lines of what we want them to be, so that we're evaluating that things are moving in the right direction. So as far as any of this goes, I do like the fact that even if it's not perfect, it's getting us closer to the things that we care about. 

AJ_O’NEILL: So I have one more question. When it comes to, for example, things that are rendered with, say, WordPress or old-style server-side rendering, versus new-style server-side rendering, versus client-side rendering: for dynamic applications, what tends to actually perform best? 

ANNIE_SULLIVAN: I think it gets really complicated. You can look at HTTP Archive data and do a ton of analysis on 8 million sites, and what we see most of all is that just having too much stuff on your site, pulling in every single library, yanking in all the third parties, that is the thing that causes the most problems. I do think that server-side rendering in general is faster, but there are a lot of things that can go wrong on a case-by-case basis that kind of overwhelm the higher-level architectural advice. I'd be really curious to hear what Dan says. I think he has a lot more on-the-ground experience. 

DAN_SHAPPIR: I would add, and we probably don't have enough time to talk about it this time because I think we're running towards the end of our show, but it's also a great topic of conversation, is the fact that Core Web Vitals, as they are specified today, are not ideal for single-page applications because they really identify as a starting point only hard navigations and not the soft navigations that are associated with page transitions in single-page applications. And that really impacts the whole game. So for example, it tends to give an advantage to websites that are implemented as multi-page applications over single-page applications because it totally discounts the quote-unquote fast navigations that might occur when you transition between the pages within a single-page application. These transitions are wholly invisible to Core Web Vitals currently. Correct me if I'm wrong, Annie.

ANNIE_SULLIVAN: They are invisible and I think that this is something that we're really, really interested in understanding a lot better. And this is an area of work that Yoav is leading. So he'll be coming to talk about it next time. 

DAN_SHAPPIR: Yeah, Yoav was also a guest on our show a while back. So another face that'll be great to see again. But all I can say, AJ, is that client-side rendering of the main content is really problematic, because it means that you're not seeing that content until after the JavaScript is downloaded and executed. Compare that with referencing the main content directly in the HTML itself, which means that as soon as that part of the HTML arrives, the browser is able to immediately start downloading that resource and displaying it. So client-side is great for enriching and adding stuff around the quote-unquote main content, which ideally should be present in the page as it arrives from the server. Now, whether it's rendered by WordPress or SSR'd via, let's say, Next.js, the browser couldn't care less. 

AJ_O’NEILL: So this is really best for content sites. I mean, if you were creating the New York Times. This is the thing where you'd really want to be paying attention to this. 

ANNIE_SULLIVAN: Well, the Core Web Vitals do measure the initial page load for single page apps. It's just that we're not measuring those subsequent loads. So there's more we need to do, but I think that looking at these metrics is useful for all pages. It's just a starting point. 

AJ_O’NEILL: Well, but it's particularly useful if you're developing content, as opposed to an app. Well, I guess that makes sense anyway, because most apps you're not search indexing, because they're apps. 

ANNIE_SULLIVAN: But I mean, if you want your app to load fast, the LCP metric still does measure that. 

DAN_SHAPPIR: There are a whole bunch of factors here. For example, there's the issue of intent. The company that I currently work at, Next Insurance, has a public website, which is like the landing page for people who are searching for insurance for their small businesses. They effectively have very little intent, because they've not committed to us in any way; they're just searching the web for relevant stuff. Versus our portal, which is a web application intended for people who've already purchased insurance and want to do various operations or look at information about the insurance that they've already purchased. And obviously they have very high intent, because they've already paid us. They're in a contract with us. They can't view this data anywhere else; they have to come to our portal. So Core Web Vitals are especially relevant for the public site, because if you're not getting this information really quickly, you'll probably just bounce. In the portal, you might be interested in other performance aspects. Obviously you'd want the portal to load as quickly as possible too. I mean, why not make your users happy? But first of all, those users will likely wait a little bit longer if they have to. They might grumble about it, but they'll wait. And you might want to sacrifice a little bit of that loading time in order to make operations within the application faster and more efficient, if there's a trade-off between them, because users will probably spend a bunch of time within that web application and you want their experience to be nice overall. They might go and prepare some coffee while the portal is loading, but once they sit with their coffee cup in front of that web application, they want it to respond quickly to whatever operations they do within it. So it really gets fuzzy. 
I mean, that's one of the core issues that I think you had to tackle with Core Web Vitals, Annie. And I don't think it's perfect, because I don't think it can be perfect, but I'm pretty amazed at how well you've been able to do it: to come up with a universal, generally applicable set of metrics that can really be used across a huge variety of websites. Now, like I said, it will never be perfect. There are always scenarios where it might be less applicable. But overall, it's amazing how much you've been able to cover with just three metrics.
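For listeners who want to see where their own pages land, here is a minimal sketch of field measurement in the browser with the standard `PerformanceObserver` API. The `rateLCP` helper is not part of any library; it just applies the published LCP thresholds (good ≤ 2.5 s, poor > 4 s):

```javascript
// Rate an LCP value (milliseconds) against the published
// Core Web Vitals thresholds: good <= 2500ms, poor > 4000ms.
function rateLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs-improvement';
  return 'poor';
}

// In a browser, observe largest-contentful-paint entries as they are
// reported; the last entry before user input is the page's LCP.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate:', entry.startTime, rateLCP(entry.startTime));
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

In real-user monitoring you would report the final candidate to your analytics endpoint rather than log it; Google's `web-vitals` npm package wraps this same observer pattern for all three metrics.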

CHARLES MAX_WOOD: All right. I'm going to wrap us up here just because we still have to do picks and I want to make sure that we're mindful of everyone's time. But before we do that, Annie, if people want to connect with you online or if, you know, they have a whole bunch of questions that we didn't answer, is there a good place to connect with you or get those answers? 

ANNIE_SULLIVAN: Yeah, I'm on Twitter, Annie Sullie. That's A-N-N-I-E S-U-L-L-I-E.

CHARLES MAX_WOOD: Awesome.

 

Hi, this is Charles Maxwood from Top End Devs. Lately I've been coaching some people on starting podcasts, and in some cases just taking their career to the next level. You know, whether you're a beginner going to intermediate or intermediate going to advanced, whether you're trying to get noticed in the community or go freelance, I've been helping these folks figure out how to get in front of people, how to build relationships, and how to build their careers and max out and just go to the next level. So if you're interested in talking to me and having me help you go to the next level, go to topendevs.com slash coaching. I'll give you a one-hour free session where we can figure out what you're trying to do, where you're trying to go, and what the next steps are. And from there we can figure out how to get you to the place you wanna go. So once again, that's topendevs.com slash coaching.

 

CHARLES MAX_WOOD: All right, well we're gonna go ahead and do our picks, and then we'll start wrapping up. Steve, do you wanna start us off with picks? 

STEVE_EDWARDS: Oh, the best first, okay, I got it. 

CHARLES MAX_WOOD: So actually, that's not a bad joke. I mean, a dad joke.

STEVE_EDWARDS: No, it's not. That's why there was no rim shot. So, an actual news story here, sorry, not a blog post. I thought this was sort of interesting: published yesterday, as of today, May 23rd, New York City is removing its last payphone from service. So payphones are officially a thing of the past. And as I understand it, this is the open-air type phone, as compared to the enclosed Superman-type booth. It's a CNBC article; I'm gonna put the link in the show notes for sure. And then, to the high point of the podcast, the dad jokes of the week.

CHARLES MAX_WOOD: The bad jokes. 

STEVE_EDWARDS: Dad, bad, it's all the same. So first of all, when my son was younger, I tried telling him that it's perfectly fine, you know, to accidentally poop your pants, but he's not buying it. In fact, he's still making fun of me.

DAN_SHAPPIR: Okay. 

STEVE_EDWARDS: Yeah. That's rather stinky, I know. Secondly, a question: what do you get when you cross an angry sheep with an angry cow? You get two animals in a bad mood.

AJ_O’NEILL: You've got to subscribe to Steve on Twitter so you can get these while they're hot.

STEVE_EDWARDS: And then finally, you've all heard the statement, you are what you eat, right? Unfortunately, that really isn't true. For example, if you eat a vegan, you definitely are not a vegan. Thank you. And those are my dad jokes of the week, or bad jokes, as Chuck would say.

CHARLES MAX_WOOD: Those were better than... anyway. Dan, what are your picks?

DAN_SHAPPIR: Okay, no jokes for me. So the first pick that I have is actually Annie. I think that she's an awesome person to follow. She's like the expert on all things Core Web Vitals and one of the leading experts on web performance in general. And I cannot believe that she only has 2,000 followers on Twitter when she absolutely should have like ten times that, at least. Well, 2,001 now, I'll just follow her real quick. So if you haven't followed Annie already, what's wrong with you? You definitely should. And I think we should all be really grateful for the amazing work that Annie and her team have done with Core Web Vitals, because very few people can literally say that they've made the whole web a better place, and I think that you can say that. And that's an awesome achievement in my book. So that would be my first pick.

STEVE_EDWARDS: My dad jokes make the web a better place, but that's different, I guess. 

DAN_SHAPPIR: Okay. Kind of related to trying to make the web a better place, my second pick is an article that's just been published on The Register, titled "Safari is crippling the mobile market, and we never even noticed." It has to do with the Apple browser ban, which I think I've mentioned in previous picks as well: the fact that if you're using iOS, you think that you might be using your favorite browser, but actually it's Safari on the inside, because everybody is forced to use WebKit as their browser engine. Now, WebKit is not a bad browser engine, but the fact that you're forced to use it is a problem, especially given that Apple refuses to implement certain APIs that are available in other browsers, such as push notifications. Which means that you cannot create a web application that runs on iOS and uses push notifications. Now, we may like push notifications or we may dislike them, but I think we can all agree that in certain cases there is room for them. You might want to get a push notification when you get a new email or something like that. And currently that cannot be implemented as a web application that works on iOS, and that's a big problem. I'm happy that everybody's kind of waking up to this; case in point, that article on The Register. And maybe the regulators will wake up as well and push Apple to do the right thing and allow other browser engines on iOS. So that would be my second pick. And my third pick, as always, but I'll make it short this time, is the ongoing war in Ukraine. I wish it would just end. I will keep on mentioning it until it does. And those are my picks for today.
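For context, here is a rough sketch of the standard Web Push subscribe flow that was unavailable on iOS at the time of this discussion. The VAPID key is a placeholder, and `urlBase64ToUint8Array` is a common helper pattern, not a built-in:

```javascript
// Convert a base64url-encoded VAPID public key into the Uint8Array
// that PushManager.subscribe() expects as applicationServerKey.
function urlBase64ToUint8Array(base64url) {
  const padding = '='.repeat((4 - (base64url.length % 4)) % 4);
  const base64 = (base64url + padding).replace(/-/g, '+').replace(/_/g, '/');
  const raw = atob(base64);
  return Uint8Array.from(raw, (ch) => ch.charCodeAt(0));
}

// Browser-only: ask for notification permission, then subscribe through
// the page's service worker registration. Never runs outside a browser.
async function subscribeToPush() {
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return null;
  const registration = await navigator.serviceWorker.ready;
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    // Placeholder; substitute your server's real VAPID public key.
    applicationServerKey: urlBase64ToUint8Array('REPLACE_WITH_VAPID_PUBLIC_KEY'),
  });
}
```

On browsers where `navigator.serviceWorker` or `PushManager` is missing, which was the case for iOS Safari when this episode aired, this flow simply cannot run, which is exactly the gap Dan describes.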

CHARLES MAX_WOOD: Very cool. AJ, what are your picks? 

AJ_O’NEILL: Well, there's this old nineties TV show called The Pretender. Oh, so good. Yeah, my wife and I have been watching it, and we're into season two now. She watched it as a kid, apparently, and I'd never even heard of it. Anyway, I think that might be my primary pick. And then, yeah, just the normal stuff. If you want to follow me on Twitter with all of the nonsense, CoolAJ86; if you just want the tech stuff, that's underscore BeyondCode. And then I've got the live streams on YouTube, CoolAJ86, and the useful bits that are clipped out are on the Beyond Code Bootcamp channel on YouTube.

CHARLES MAX_WOOD: Nice. I'm going to throw out a few picks. First off, I'm going to pick a game that I play with my kids. I don't know if I've picked it on here before, so if I have, I apologize for repeating. It's called Taco Cat Goat Cheese Pizza. It's a card game, and the way you play it is you flip your cards over and, in turn, say taco, cat, goat, cheese, pizza as you flip them over, right? If the card doesn't match what you said, you just keep going. But if you actually flipped over the thing you're saying, then everyone tries to slap the pile, and the last person to slap it picks it up, right? Which in my house is usually the six-year-old, but she's a good sport, and so it's fun. And then there are special cards: there's the narwhal, the gorilla, and the groundhog, and you make a different motion for each of those before you slap the pile. If you make the wrong motion, then you pick up the pile. If you go to slap the pile and then realize that you're wrong, you have to pick up the pile for faking, right? So you can't try to fake anyone out; it's built into the rules. If you're going for the pile, it's because you know you have to slap it. Otherwise, you're going to pick it up. And it's really fun. It's simple enough, like I said, for my six-year-old to play. Sometimes we'll put a little bit of a handicap on, or not make her pick it up every time, but she's usually a good sport. So if we're just playing normal and we tell her that, then she's fine with winding up with the majority of the pile. And the first person to go out, and then be the first person to slap the pile with no cards in front of them, is the winner, right? So you stay in after you run out of cards, trying to get that first slap. Anyway, it's a lot of fun. We've really, really enjoyed it. It takes about a half hour or so to play.
And yeah, the illustrations are fun, the game is fun, so I'm gonna pick that. One other thing that I wanted to shout out about, because I don't know if I acknowledge the hosts enough on the shows: I was wearing my JavaScript Jabber shirt, my yellow shirt, yesterday to the carnival at my kids' school. And I had a guy look at me and go, where did you get a JavaScript Jabber shirt? Right? I'm at the school, and I go through my life with people not knowing who I am from this show, right? I go to Walmart and I'm just another dude at Walmart, but occasionally while I'm out, somebody will see me in swag from the shows or something, and they'll be listeners to the show. This guy was a listener of JavaScript Jabber, and his kids just happen to go to the same school that my kids go to. So we start chatting, and he starts telling me about how much of a difference the show has made for him in his career. We did a show about Ionic a few years ago, and that inspired him to get into mobile development a little bit, and then he's doing a whole bunch of web stuff, and he's picked up some other stuff from some of our other episodes. And lest you believe that I didn't take any credit for it, he didn't even know who I was once I told him I was one of the hosts. I had to tell him which host I was, but it was just terrific. And afterward he's just like, you guys are my heroes. And it just really occurred to me that we are making a difference here. So what I want you all to do is, if somebody on the show has talked about or shared something that makes a difference to you, I want you to tag them in a tweet or email them, or if you email me, chuck at topendevs.com, I'll forward it on. But just let these guys know that what we're doing here makes a difference and that we've been able to help you out.
I know not every episode is going to land in your lap like something that really matters to you, but I think we do hit the right notes for different people on different occasions. 

STEVE_EDWARDS: That's for sure. One of the funniest tweets I ever got was somebody responding to somebody about JavaScript Jabber, and they mentioned, "I like that funny guy and the other smart people." So I'm funny, but not smart.

CHARLES MAX_WOOD: So I was going to say, which one of us is the funny guy? 

STEVE_EDWARDS: I, I, I'm a fun guy. You know, there's a fungus among us. 

DAN_SHAPPIR: And recently another tweet really made my day: somebody tweeted that they listened to our show, got a piece of information, used that bit of information in an interview at Apple a few days later, and it helped them land the job. And that's, you know, awesome. I can't wish for anything better than that.

CHARLES MAX_WOOD: That is awesome. Just keep in mind that Apple won't let them come on any of our shows as guests until they move along to a different job. But anyway, that's neither here nor there. That is awesome. And Apple, from all the people I've talked to that have worked there, is a terrific place to be, so that makes me happy too. And I could tell other stories, you know, people that I've met at conferences and stuff, but I won't. I just know that it feels good to know that we're making a difference, so I just wanted to shout that out. Austin, if you're listening to this: we recorded this on the 24th of May, and it probably comes out sometime in June, so I'm talking about you anyway. But yeah, beyond those picks, I'm working on finalizing the sponsorship offerings and the conference schedule this week. So if you go to topendevs.com slash sponsor or topendevs.com slash conferences, you should be able to see when our conferences are. And then finally, one last thing, and this is an opportunity for pretty much anybody who listens to this show and works in Node on a regular basis. I've had a couple of sponsors reach out to me and say, we like sponsoring JavaScript Jabber, but we wish you had a Node-related podcast, because that's the audience we're looking to reach. And I feel like it would be nice to niche down in that area, the way we have shows on Angular, React, and Vue for the front end. So if you're interested in being a host on a Node show, email me, chuck at topendevs.com, and we'll see if we can figure that out. I think it's an opportunity that's worth considering. So that's pretty much everything I have. Annie, do you have some things you want to shout out about?

ANNIE_SULLIVAN: Yeah, first a technical one: you all should check out Jake Archibald's talk at Google I/O, "Bringing Page Transitions to the Web." We're really excited about this new API and how it intersects with single-page apps and performance. So check that out. From a non-technical perspective, and I hope this isn't too far out there, my kids are really interested in making video games, so we've been trying to figure out how you make the art for video games. There's this cool iPad app called Procreate, and I found this YouTube channel called Art with Flow. So you get the iPad app and then you follow along with the YouTube channel, and it's just so much fun. You can make the coolest stuff, and it's really easy and simple. So I think it's really neat.
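The experimental API that talk covered later shipped in Chromium browsers as `document.startViewTransition`. A minimal sketch of how a single-page app might use it, with a fallback when unsupported (`renderNextPage` is a hypothetical app function that swaps in the new route's content):

```javascript
// Run a DOM update inside a view transition when the browser supports
// it, so the browser animates between the old and new page states;
// otherwise just run the update immediately.
function navigateWithTransition(renderNextPage) {
  if (typeof document !== 'undefined' && document.startViewTransition) {
    return document.startViewTransition(renderNextPage);
  }
  return renderNextPage();
}
```

The appeal for SPAs is that the router's existing render function can be wrapped unchanged; the transition visuals are then controlled from CSS rather than hand-written JavaScript animation code.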

CHARLES MAX_WOOD: Cool. I'm gonna give you a tip. I'm gonna preface this by saying that I talk to Jason on a regular basis. But Jason Weimann, he's done some bonus episodes on here talking about game development, and he has a bunch of courses on building games. Lately, he went to the Game Developers Conference, and they actually gave him some kind of discount code; I can't remember exactly what it was. But if you go to his YouTube channel, and I'll put a link to it in the show notes, I'm pretty sure there's a discount code for the Unity Asset Store, I think is what it's called, where if you're looking for artwork and you're not an artwork person, you can find some options that will probably work for you. So I'll just drop that in there. And if you're looking for help on writing games, he's terrific on that stuff too.

ANNIE_SULLIVAN: Awesome. Thanks. 

CHARLES MAX_WOOD: Yeah, no problem. All right, I'm also going to put a link to his courses in the show notes, because I know people are interested in it. I'll be completely transparent: it's an affiliate link, so I do get a kickback if you buy anything, but just buy it if you're interested in it. I'm not going to push it; his content's terrific. But anyway, let's go ahead and wrap up. Thanks again for coming, Annie.

ANNIE_SULLIVAN: Of course. Thanks for having me. 

DAN_SHAPPIR: Like I said, we will have Annie on again pretty soon, so I'm really looking forward to that as well.

CHARLES MAX_WOOD: All right, folks, till next time. Max out. 

DAN_SHAPPIR: Bye. 

STEVE_EDWARDS: Adios. 

 

Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y.com to learn more.

 
