Exploring the True Measure of User Experience: Core Web Vitals & Beyond - JSJ 598

JavaScript Jabber

A weekly discussion by top-end JavaScript developers on the technology and skills needed to level up on your JavaScript journey.

Published: Sep 12, 2023
Duration: 1 hour, 31 minutes

Show Notes

Barry Pollard is the Web Performance Developer Advocate for Google Chrome. He and the panel dive into the world of website performance metrics and the complexities surrounding them. From the confusion around measurement reliability to the impact of front-end optimization, they explore it all. They discuss the importance of Core Web Vitals, the influence of user location and device speed, and the challenges of presenting aggregated information about website performance. They also touch on the ongoing debate between front-end and back-end optimization, as well as the current state of website scores and metrics.

Transcript

Charles Max Wood (00:01.481)
Hey, welcome back to another episode of JavaScript Jabber. This week on our panel we have Dan Shappir.
 
Dan (00:11.435)
Hello from a hot and muggy Tel Aviv.
 
Charles Max Wood (00:15.997)
I'm Charles Max Wood from Top End Devs, and this week we have a special guest, Barry Pollard. Barry, welcome back.
 
Barry Pollard (00:23.322)
Hello from a cold and wet Cork in Ireland.
 
Charles Max Wood (00:27.381)
Cold and wet sounds nice right about now. Anyway, you wanna introduce yourself? Let us know who you are, why you're famous, all that good stuff.
 
Barry Pollard (00:35.682)
Famous, I'm not sure about that. My name is Barry Pollard. I am a developer advocate on the Google Chrome team, specifically on the web performance team. So I deal a lot with Core Web Vitals. I look after the Chrome User Experience Report; you might have seen emails from me saying the new report's out there. And you might have noticed, if you're looking up any of the Web Vitals documentation, my name is splattered all over those. So if you go to web.dev slash LCP, for example, you'll see my name, because I've helped contribute to those docs.
 
And yeah, I'm here to talk a little bit more about Core Web Vitals.
 
Charles Max Wood (01:11.201)
Awesome. We've covered Core Web Vitals here and there, but it sounds like there's something new coming that Dan was telling us before the call is giving people heartburn. Do one of you want to give us a little context there as to what we're looking at here?
 
Barry Pollard (01:26.746)
Yeah, so we're changing Core Web Vitals. I think we've made no qualms about the fact that they were never going to be set in stone and never change from there on. So we've been tweaking the Core Web Vitals for quite a wee while now, changing things like LCP. We've stopped invisible images being candidates for LCP, and so on.
 
Charles Max Wood (01:31.925)
Nooooo!
 
Barry Pollard (01:54.722)
Some people used those as hacks to try and get around those sorts of things. CLS went through a big change about a year and a half, maybe two years ago now, where we stopped making it totally cumulative and only looked at windowed segments. I think the biggest change that we've done since launch is coming up now. We recently announced that one of the Core Web Vitals, First Input Delay, which is intended to measure the
 
responsiveness, the interactivity, of your website, is being replaced by a new metric called INP, or Interaction to Next Paint, which we think is a bit more of a comprehensive metric. We'll talk a little bit about why in a minute. But we've been saying this for a while; we announced at Google I/O that it's actually going to take effect in March next year. But then our colleagues in the search team
 
went and emailed everybody that has INP problems to say, hey, you've got a year to fix these things, there are responsiveness issues on your website. Which caused a bit of a kerfuffle in the SEO industry and got everyone in a panic, because much as me giving talks about it and my team giving talks about it gets the word out a little bit, I think Search has a bigger voice than us, and suddenly lots more people heard about it and started panicking about it and saying, hey, what does this mean? Is this something
 
that Search is now going to say is wrong with my website, that I need to fix immediately, and what's going on? That was my view from the inside. Dan, you were kind of involved; I don't know if you've got a slightly different view.
 
Dan (03:32.387)
Well, it certainly had impact. I mean, we all of a sudden got an email from our VP of Marketing asking us about what's this INP thing and what it's going to do to our rank. So it did have an impact, I'm guessing on a number of companies. But you know what? I think that's a very good thing because the situation with
 
Charles Max Wood (03:38.965)
Ha ha ha!
 
Dan (04:01.183)
INP isn't good. And it's important that people start getting in front of this thing, and not just because of the search impact. In fact, mostly from my perspective, not because of the search impact, because the search impact at the end of the day probably won't be that significant for most websites. It's because of the impact that it has
 
on actual users visiting a website.
 
Barry Pollard (04:34.146)
Yeah, and I think that's one of the first things I'll say in response to that email, which I love by the way, it's something that's got a lot of people's attention. It wasn't saying something new is wrong with your website, it was saying two things. One, you have an existing problem and here's making you aware of it. I think that's what a lot of people didn't get. They thought, oh my god, something's broken with my website, we need to fix it right now. So one, it was, nope, you have a problem, your website is not as responsive as it should be.
 
Charles Max Wood (04:42.761)
Ha ha ha!
 
Barry Pollard (05:03.666)
And two, it was: once it becomes a Core Web Vital, any ranking impact that comes from that starts from March next year. There's no point in telling you in February, you've got a problem and you need to fix it by March. So it's getting well ahead of the game, of saying, hey, there's an issue here. Once this becomes a Core Web Vital, you need to have had time to deal with it. It's definitely one of the more complicated ones, so we need to give people time
 
to actually understand it, see what's wrong, and potentially fix it.
 
Dan (05:36.827)
I think it's worth, even though we actually had a show about this, we actually had guys from the team working on Core Vitals on the show and we had Michael talking about INP and how it's different from FID. So I highly recommend for people to go back and listen to that episode. But can you give like a brief description of what INP is, how it's different than what we had before?
 
Charles Max Wood (05:48.99)
Yeah.
 
Dan (06:06.207)
you know, why you're making this change.
 
Barry Pollard (06:08.81)
Yeah, so as I say, Core Web Vitals are supposed to measure the user experience of a website, and we do it from a number of different aspects. And what I love about them, and what I say in the talks I give and stuff like that, is they're explainable to users. I can explain to my dad what they mean. So LCP, Largest Contentful Paint: when is the biggest bit of content there? Be that a hero image, be that your H1 title tag, when does that appear? CLS, that's...
 
that whole shifty, annoying thing where things move around as you're loading an article and you lose your place because an ad loads above. And then the last one is the responsiveness metric. So we measured that with First Input Delay. So you click on something, be it the menu, you know, on your mobile phone you're clicking the menu at the top left, and how long does it take until that menu opens? So First Input Delay. The intent of these responsiveness metrics is: when you click on something, does something happen immediately,
 
or does nothing happen and you start tapping it again, and then something opens and closes and opens and closes, and you're like, oh my God, what's going on? That's a really frustrating experience. Yeah, I do it all the time, and I have a very fast phone, trust me. So if I experience it... Actually, that's really interesting: we as devs tend to buy nice shiny tech, but whenever I go to my family and their phone's broken and I have to use that, I'm like, oh my God, how do you use this thing every day? We've got to realize that we're very privileged.
 
Charles Max Wood (07:11.365)
I never do that. Ever.
 
Charles Max Wood (07:31.384)
Right.
 
Barry Pollard (07:32.918)
We have nicer tech than most people do. But if you go to other people... and phones are lasting a lot longer these days. It used to be that after a year or two a phone would be done: the screen would be cracked, the battery would be dead, you'd need to replace it. But now quite often either they're lasting, or if you're wealthy enough that you want to have the latest tech, you're passing it down to someone else. And phones can last six, seven, ten years now. And it's not unusual to see these older phones on the market. And that's before we even get to
 
sort of, you know, outside the Western world, where cheaper phones are the norm and the expensive latest Android phones are very much the exception. So yeah, measuring that responsiveness is the aim of this third metric. So First Input Delay: what it measured was how long after you did that tap or that click or that keypress until your code starts running. So the delay part of what's going on.
 
And that was kind of meant to be a measure of: is the site so busy doing other stuff that it doesn't even start executing your code to open that menu or do anything like that? But what it didn't measure is how long that interaction then actually took, and any result of that interaction. So it was a start, but it was always kind of lacking a little bit. And what we see now is, I can't remember the stats off the top of my head, or I'd quickly pull them up here,
 
but 99.99% of desktop websites have a good FID, and mobile isn't too far behind.
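
For context, a minimal sketch of how FID can be observed in a page, using the standard first-input performance entry (the web-vitals library discussed later in the episode wraps this up for you):

```js
// Observe the first discrete input (click, tap, key press) on the page.
// FID is the gap between when the input happened and when the browser
// was able to start running the event handlers for it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('FID (ms):', entry.processingStart - entry.startTime);
  }
}).observe({ type: 'first-input', buffered: true });
```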
 
Dan (09:06.914)
Yeah.
 
Yeah, I was going to get to that. I like to say that FID has done its job. That this metric has become effectively useless, because it's like an exam in class where all the students get an A+. Obviously, you're not really testing them. So that's kind of where we are.
 
with FID. If I'm looking across technologies, for example, and I look at all the sites built using React, or all the sites built using Vue, then when I look at FID for React, 96.44% of React websites have good FID. So effectively, it's all of them.
 
With Vue it's 96%, so effectively the same, and again effectively all of them. With Svelte it's 96.5%.
 
Barry Pollard (10:15.186)
Yeah, and if that was a true measure of the thing, and everyone was getting an A+ and sites were responsive, we'd be delighted. We'd say job done and we'd close it down. But as I said, even on my high-end phone, that's not true. And certainly not whenever I go to a lower-end phone. They are laggy, they're annoying, they're difficult to interact with. You have to give it a few seconds. So you're right: FID doesn't measure what we want it to measure anymore. The idea
 
Charles Max Wood (10:38.503)
Yeah.
 
Barry Pollard (10:44.662)
behind it was good, and I think it was a start, and I don't think we were ready for INP a few years ago when we first introduced it. So it was a start along the path, and in the same way as we tweaked LCP and we tweaked CLS, as I mentioned, we're now tweaking FID, but we're tweaking it in a much bigger way, in that we're going to replace it completely. And I think the idea of FID was:
 
page load is a particularly bad time, so at that point trying to interact is probably a problem. So measuring that delay, in theory, was a good measure of when that happened. The problem is, as its name suggests, it's first input delay. So we only look at the first one, and if that was good, we say, hey, this website must be great. But people do a lot more than just one click or one tap on a website, particularly on long-lived webpages or SPAs or things like that.
 
An awful lot more happens. So we need to look beyond that first one. That's the first big change with INP: we're not just going to look at that first one, we're going to look at all of them. We're going to pick the worst one, kind of the worst one; there's a couple of caveats around that, but basically the worst one, and say this is a measure of how responsive that page was for that user across its whole lifetime, in the same way that CLS looks across its whole lifetime.
 
And LCP is a load metric, so it's intended to cover just the beginning of the lifetime, but the other two are intended to be across the lifetime. And then the second thing is, as I mentioned, FID kind of only measures the delay. Interaction to Next Paint is trying to measure more of that interaction. Now, this is one that gets people confused a lot, because it doesn't measure the entirety of the interaction.
 
Again, it's a step forward; whether we ever need to get to that, I don't know. But what we do is we measure... again, we try to give these metrics somewhat useful names. We don't just choose the three-letter acronym that's free at the time, even if it may seem like that. So we measure from your interaction until the browser is next able to paint. And that means: you click a menu, ideally the menu opens, and that's the paint.
 
Barry Pollard (12:58.41)
Sometimes some interactions are going to take a bit more. So if you load more articles, for example, on the page, it might have to make a network fetch, it might need to do stuff, it might take a little while. If that takes a while and nothing happens, that's not a great user experience. But if it takes a while and the whole page is blocked and you can't do anything else, you can't click on anything, you can't scroll, you can't open a disclosure widget, you can't fill in a form, then that's really bad. So it's meant to measure that sort of...
 
is the main thread being blocked completely by this interaction, or is there an opportunity to do other things, even if the interaction takes a while? Think of another one: if you're running a video encoding website, you're running YouTube for example, and you upload your video, that's going to take a long time to upload, to process your video, to say done. But as long as the website isn't frozen during that time,
 
and you can browse around and look at other videos, read the comments and stuff like that, that's a good experience. Now, of course, we want sites to sit there and say, uploading, dot dot dot, estimated time five minutes or whatever, and give you some sort of feedback there. But the first part of being able to give that feedback is to actually allow paints to happen. So even without that feedback, you can get away with it, though from a UX perspective, we recommend that feedback.
 
And ideally, as I say, the actual interaction should finish. But there is the possibility of other stuff happening in the background.
 
Dan (14:24.919)
So to put it kind of a different way, what you're saying is, when you launch an interaction, the logical conclusion of the interaction might take a while. Like you said, if it's downloading a whole bunch of additional articles, that can take, I don't know, a second or two. But you want to, first of all, display some sort of visual response much earlier than that, as an indication to the user that you're
 
acknowledging their interaction, that something is happening in response to their interaction. And during that operation, you don't want the user interface to be stuck or frozen. You want it to continue to respond to any additional interactions that the user might do while you're processing that lengthy operation.
 
Barry Pollard (15:14.99)
Correct. And I saw, I think it was on Hacker News or something like that, whenever this went around, it was like, oh, all you need to do to get around it is just not block your main thread and just sit there and say 'processing', or paint one pixel. And I was like, yeah, and that's a good thing, by the way. We're happy if you do that; if you do those two things, our job is done.
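
A minimal sketch of the pattern Barry and Dan are describing, assuming a button element, a status element, and a hypothetical doExpensiveWork() function: update the UI immediately, yield so the browser can paint, and only then run the heavy work.

```js
button.addEventListener('click', async () => {
  // 1. Immediate visual feedback, so the next paint shows progress.
  status.textContent = 'Processing…';

  // 2. Yield back to the main thread so that paint can actually happen.
  //    (scheduler.yield() / scheduler.postTask() are newer options where supported.)
  await new Promise((resolve) => setTimeout(resolve, 0));

  // 3. Only now run the long task (hypothetical function).
  const result = await doExpensiveWork();
  status.textContent = 'Done: ' + result;
});
```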
 
Dan (15:32.947)
And by the way, that's kind of the way that the web works out of the box. Because if you could look at what buttons do when you press them, you have that 3D effect of being pressed and then unpressed. That's exactly it. Then processing that button click might take longer, but at least you want that visual cue that you press that button. So yeah, I'm totally with you on that.
 
Barry Pollard (15:58.43)
Yeah, and quite often those browser widgets, like a default button, some of them, it's a bit complicated, but some of them can paint automatically because they happen so often. But for most of them, and particularly if it's a custom button, you need to allow the main thread to be free enough to draw that interaction, or for a paint to happen, to show that 3D effect. So yeah, that's another thing: we want that to happen even
 
if you don't put up an 'I'm processing what I'm doing', and all you get is that 3D effect, at least that's something. But as I say, from a UX perspective, it's much better to give further feedback and actually keep the user informed as to what's going on. Or: yes, thank you, we've received that button click, we'll get back to you in a minute, this is going to take some time.
 
Dan (16:42.863)
Now, with FID, as you said, and I quoted some numbers, the situation in most websites is great. In fact, it's kind of too good. That's kind of the inverse of the situation with INP, because if I'm looking across the various frameworks again, then only 47 or 48% of all React websites
 
actually have good INP. With Vue, it's slightly better at 49 percent. And with Svelte, surprisingly, it's slightly worse at 46 percent. I think by the way that it might be worse because Svelte is more often used in countries with lower-end devices precisely because it's lighter weight. So it's kind of being penalized as it were for being like
 
too good, so it's used in more problematic scenarios. I can check by filtering for the US, but it's really not that important right now. But the bottom line is that if for FID, we were at almost 100% across all frameworks, with INP, we are at around 50% with all frameworks. So it effectively means
 
that the number of websites that have good core vitals will drop in half more or less come March.
 
Barry Pollard (18:20.058)
No, no, that's not quite true. So we did some analysis on this for the Web Almanac, which is another project that I'm involved in, and was involved in before I joined Google. They did some analysis in the performance chapter, which I'll just pull up. And there is definitely going to be a drop in the number of websites passing Core Web Vitals. However, despite
 
INP being more difficult, and yes, being more difficult on rich, interactive websites, which is typically what JavaScript-framework-based websites are intended for rather than a static blog-type website...
 
Charles Max Wood (18:54.937)
Right, React's broken. No, I'm just...
 
Barry Pollard (18:58.582)
No, I mean, those are used for bigger things. But even with that, LCP is still the tougher metric to pass. So the number of websites that are failing INP is lower than the number of websites that are failing LCP. So given that, yes, some sites might move from failing LCP to now also failing LCP and INP, but as an overall number of websites passing or failing Core Web Vitals,
 
Charles Max Wood (19:14.171)
Oh really?
 
Barry Pollard (19:27.902)
it's not as dramatic a shift as that. There is a... I'll just need to look that up here. I think it's like a 10% drop or something like that. So it's not a 20 or 30% drop or whatever numbers you were quoting there. But there is a drop. I'm trying to see exactly. So across all of them, 40% of websites have good Core Web Vitals. This was done in June 2022, and it drops down to 31%. So yeah, it's not
 
from 40% to 20%, as those numbers might indicate. Because you've got to remember, it's a combination of all three. And again, if LCP was a lot easier and a lot more websites were passing that, and INP was the reason for the holdback, or the lowest metric, then you would see a more dramatic shift. So again, we don't need to panic about this thing. The intent of these metrics is to push people in the right direction. And if you make it too
 
Charles Max Wood (20:03.166)
Right.
 
Barry Pollard (20:25.686)
low, like only 5% of websites pass, everyone's gonna go, this is an impossible target to achieve, what's the point? And I think that's another good point: we set thresholds for each of the Core Web Vitals, and for INP, 200 milliseconds is the 'good' threshold. UX research says that should ideally be 100 milliseconds; that's when humans start to notice that there's a noticeable delay. But we just don't think that's achievable at the minute on the web,
 
especially on mobile, especially given the range of devices that are out there. So rather than setting an impossible target that the vast majority of people aren't going to be able to achieve, we want to set a realistic target. And if that means lifting our thresholds a little bit, that's what we do. So hopefully in the future, we can maybe bring that in a little bit. We haven't really adjusted the thresholds on any of the Core Web Vitals yet, because we haven't seen enough of them move,
 
but that's always another option of continually tweaking and changing these things. But at the minute, we're going to have to be a little bit lax and we'd ideally like...
 
Dan (21:29.719)
So I have a couple of questions about how, in fact, to deal with poor test results. But before I take us there, I have another question, slightly on the side maybe. I know that you guys have also been looking at including soft navigations in the measurements, because you mentioned that INP, like CLS, is measured across the entire session,
 
and for single page applications or SPAs, that could be a fairly long time. And all the navigations that quote unquote occur in between are actually considered to be effectively the same page throughout. But I know that you guys are looking at ways to consider these soft navigations to be kind of the same or similar.
 
to the hard navigations that occur in multi-page applications when you go from page to page. Is that the case? And if and when it does happen, have you looked at how that might impact both INP and CLS?
 
Barry Pollard (22:43.554)
So yes, it is something we're experimenting with. Now, it's early days with it. We're encouraging people to try it and give us feedback. We're desperate for feedback on all the things that we try and launch, so that's my plea out to the people there, if that's the thing that interests you. I published a post on that; if you just Google 'experimenting with soft navigations', you should find it easily enough. But yeah, at the minute we treat each page load from a browser perspective differently, and we report the
 
Core Web Vitals metrics at the end of each page's lifetime. So if you visit 10 pages on a traditional, I hate the term multi-page app, but let's go with it, a traditional MPA, and then you do 10 soft navigations, as we call them, on an SPA, where, as far as the browser's concerned, it's one page and you've...
 
Charles Max Wood (23:23.701)
Ha ha ha!
 
Barry Pollard (23:36.566)
faked the page navigation. So to the user it might look like different pages, and from the user's perspective they think it's 10 pages, but from the browser's internal perspective it's one page. Then you only get one set of Core Web Vitals reported back for that. Which also means that, in a lot of these cases, like INP, you'll get the worst INP reported back, you'll get the worst bit of CLS reported back. So there's always been an accusation that
 
this is unfair, because, you know, the more you interact with the SPA, eventually you're going to hit a glitch or something and you're going to get a bad interaction. Or if nine out of 10 of your pages are good, then we should get nine out of 10 reported as good, rather than one out of one reported as bad. And I think that's a fair accusation, and it is a challenge. I think in INP we've considered that a little bit. I said we report the worst interaction, and I said there were some caveats around it.
 
We used to tell people it was the 98th percentile and stuff like that. We found that very confusing, because most of the Core Web Vitals are reported at the 75th percentile across page loads. So that 98th percentile for a single page load, which is then aggregated at the 75th percentile, was a very confusing concept for people to get. And then you started getting confused about, oh, why is this one different? It's not. But basically, whenever we're reporting the worst INP for a page, we ignore
 
a number of outliers. So if you're in a typing app, you're in WordPress using Gutenberg and you're typing your blog post, or you're in Google Docs and you're typing something, you'll probably notice that at some point through your computer doing something else or your network freezing or something like that, you might get a glitch where your typing suddenly pauses for a second and then catches up. So in general, the page is fairly responsive. It just glitched there for a second.
 
Charles Max Wood (25:29.481)
Mm-hmm.
 
Barry Pollard (25:33.71)
So we're going to kind of ignore those glitches. So we ignore one glitch for every 50 interactions. So the longer a page lives, the more of a glitch allowance, for want of a better word, you get, so those ones won't count, because some things are just going to go wrong. We want to measure, generally, how responsive the page felt to the user, and we accept that the world isn't perfect and occasionally you get these sorts of glitches. So there's a little bit of that built into INP already.
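
A rough, purely illustrative sketch of that selection logic (the real definition in Chrome groups events into interactions and handles more detail): report the worst interaction, but skip one of the slowest for every 50 interactions seen.

```js
// durations: one duration in milliseconds per user interaction on the page.
// Illustrative only; not the exact algorithm Chrome implements.
function approximateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a); // slowest first
  const allowance = Math.floor(durations.length / 50); // one skip per 50 interactions
  return sorted[Math.min(allowance, sorted.length - 1)];
}

console.log(approximateINP([40, 50, 80, 900]));            // 900: too few interactions to earn a skip
console.log(approximateINP([...Array(60).fill(40), 900])); // 40: the one glitch is ignored
```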
 
But yes, you're right. Ideally, what we would do is measure those 10 page interactions. I talked earlier about how Core Web Vitals are supposed to measure things from the user perspective, things that you can explain to your dad, rather than from a purely technical perspective. And as far as the user is concerned, often these 10 soft navigations are the same as 10 real navigations in the real world. And in theory, they're faster, because everything's loaded up front and that sort of thing.
 
Whether that's true or not is something that's been debated a lot in the past. And it will be interesting if we can measure this and see what the impact is. Because there's always been this argument that there's an upfront cost to SPAs, but it's worth it because people visit five, 10 pages of our app. Well, let's actually see if that's the case whenever we get there. But yeah, that's going on, and there's a lot to figure out about that. So at the minute we're concentrating on the technicalities of doing that, and Yoav Weiss is working very hard on that.
 
And we now think we've got a couple of heuristics for measuring when these page navigations happen. Because again, with Core Web Vitals, we kind of want to treat every website the same. We don't want individual websites to trigger when a soft nav happens and do it a million times and get great Core Web Vitals scores when that doesn't reflect reality. So we're looking at a more heuristic-based approach, where the URL changes and something changes in the DOM and it was initiated by a user interaction; it's not just that an animation happened or something like that.
 
We've got a couple of those sorts of things. And then we start emitting new Core Web Vitals for those. They aren't used anywhere yet. They're purely for developers experimenting and feeding back and saying, no, this measured something that wasn't a soft nav as far as I'm concerned, or, this soft nav, which I do think was a soft nav, wasn't picked up; why not? And then once we get all that, we've got to figure out what we do with this sort of thing. Do we treat every single soft nav one for one, the same as a
 
Barry Pollard (27:59.626)
hard navigation, for want of a better word? Do we treat them as half a hard navigation? Because in some of these apps, it's questionable what a navigation is and what it isn't. The URL's changing, the whole page is changing, that's fairly obvious. But if only half the page is changing, if you're clicking on a tweet and it suddenly pops up to fill half the screen, is that a navigation, or is that more of a dialog-type thing? So there's lots of
 
Charles Max Wood (28:20.413)
Mm-hmm.
 
Barry Pollard (28:29.382)
nuances to figure out here.
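
For anyone who wants to try the experiment Barry mentions, a rough sketch based on his 'experimenting with soft navigations' post: it assumes the experimental soft-navigation performance entry type that sat behind a Chrome flag/origin trial at the time of this episode, so the exact shape may have changed since.

```js
// Requires the experimental soft navigations feature to be enabled in Chrome
// (flag or origin trial at the time of recording); the API may change.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Soft navigation to', entry.name, 'at', entry.startTime);
  }
}).observe({ type: 'soft-navigation', buffered: true });
```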
 
Dan (28:31.735)
Yeah, we spoke a while back with Tanner Linsley, creator of TanStack, and he's a big proponent of putting as much of the state as possible in the URL. He created a type-safe router, and that includes type safety around URL parameters. So he's very much in favor of putting
 
information or state information in the URL parameters. So if every time you change the URL that potentially counts as a soft navigation, and you're using the URL for state, then you check or uncheck some button and all of a sudden that counts as a soft navigation, which obviously can be problematic.
 
Barry Pollard (29:26.91)
Exactly. But then I think the other problem sometimes... I remember someone, I can't remember who it was now, but they said, should it be linked to the URL change at all? Quite often soft navigations happen without a URL change. And my reaction to that is: why? The whole web is built on links and URLs. We should be able to go back to that state, to whatever is loaded, or go to page number three of your SPA.
 
Charles Max Wood (29:53.758)
Mm-hmm.
 
Barry Pollard (29:56.398)
When SPA developers don't use a URL, it really, I'll try not to swear here, really annoys me. Because you can't go back to where you were. You can't bookmark it. You can't send a link to someone. And as I say, that's what the web was built on. So to me, it's really important to use URLs.
 
Charles Max Wood (30:03.861)
Hahaha!
 
Dan (30:05.335)
I'm with you on that. Amen.
 
Dan (30:15.399)
And by the way, again, that's the point that Tanner was making. The fact that putting all the state in the URL meant that you could easily save the state just by bookmarking the site, and you can send somebody else a link, copy the link and send it, and effectively they see exactly what you're seeing.
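
A minimal sketch of that idea with plain browser APIs rather than TanStack Router: keep a piece of UI state in the query string so a bookmarked or shared link restores the same view. The parameter name here is just an example.

```js
// Write a piece of UI state into the URL without reloading the page.
function setFilter(name, value) {
  const url = new URL(window.location.href);
  url.searchParams.set(name, value);
  history.pushState({}, '', url);
}

// Read it back on load (or after back/forward), so the shared link
// reproduces exactly what the sender was seeing.
function getFilter(name, fallback) {
  return new URLSearchParams(window.location.search).get(name) ?? fallback;
}

setFilter('color', 'black');            // e.g. a "black shoes" product filter
console.log(getFilter('color', 'all'));
```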
 
Barry Pollard (30:36.738)
So I think that's definitely part of it. Then we need to work out how much of the page needs to change. You know, if it's a checkbox just going on and off, is that only 5% of the page and therefore it doesn't count as a soft nav? Does it have to be 50%, 100%? You know, we need to figure out what that is. At the minute we're just saying something has to change; whether that heuristic is good enough or not, I don't know. We'll need to figure that out and actually see. But that's what I mean: we're trying to work on the technicalities of it.
 
And I think a lot of people are trying to say, okay, when's this coming to Core Web Vitals? And my argument is we're just trying to figure out if we can measure this properly. Then we've got to figure out what we're going to do with that and understand, as I say, one for one. Does that make sense? Core Web Vitals, I mean, a lot of what people are interested in them for is the SEO benefit. And if a page isn't a separate page in search, does it make sense to have its own Core Web Vitals?
 
From a UX perspective and a measurement perspective, absolutely, you might want to know about it. But from a search perspective, or a ranking perspective, if these internal pages or mini pages aren't surfaced in the search engine results page, because they're part of an app and not a separate page, then does it make sense? So there's lots of things to figure out there. But first of all is to figure out: can we even do it? And then we can figure out, okay, we think we've solved it, we've dealt with all those edge cases, we're happy with where we've landed on this.
 
Now what do we do with it?
 
Dan (32:06.407)
Now if I can pull our conversation in a slightly different... Oh Chuck you wanted to say something?
 
Charles Max Wood (32:06.909)
Um.
 
Charles Max Wood (32:11.233)
Yeah, I think my question with a lot of this, because we're talking about, okay, what do we measure and how do we measure it? From my perspective, I'm looking at it and thinking about, okay, my marketing team or my boss or whoever comes to me and says, hey, our ranking isn't where it needs to be. Do these numbers show up anywhere where people can see them? Because I remember way back in the day when we talked about it.
 
you could kind of get the numbers. Has that changed? Like, can I go see now what, you know, how far off I am? And, you know, not necessarily get pointers on how to fix it, but just, you know, can I see what's being measured?
 
Barry Pollard (32:49.346)
Well, yeah, so there's a few different places where you can see this. Google Search Console, which you have to be a site owner to register for; you have to prove that you own the site, and there are various ways of doing that. That'll tell you the number of URLs that are passing or failing each of the Core Web Vitals. And they added INP to that recently, which then triggered the email, which triggered the panic. So that's ultimately a good place for...
 
Charles Max Wood (32:58.281)
Uh-huh.
 
Charles Max Wood (33:07.247)
Okay.
 
Charles Max Wood (33:14.101)
Ha ha ha!
 
Barry Pollard (33:18.227)
seeing all the pages and doing that.
 
Dan (33:21.239)
There's one, there's another great thing about Google Search Console in that context, which is that it does some smart grouping. Because very often a page is like other pages; like, you know, all the blog pages have the same structure maybe, or a lot of the landing or product pages have the same structure. So instead of telling you, like, you know,
 
this page has a problem, and this page has a problem, and this page, with like a hundred pages on your site each shown individually and separately, it kind of groups them and tells you, hey, you've got a problem with your blog pages. Which is a really nice thing.
 
Barry Pollard (34:04.102)
It is and it isn't, because it is a source of big confusion. When that works, it's really nice. And certain sites it works for, and I don't know how they do it; I don't work on the Search side. So whether it's that all those slash blog pages are grouped together, or some combination of what the pages look like, I have no idea. But on certain websites... I did a lot of work with Smashing Magazine, and it worked brilliantly for them.
 
They had slash blog or slash article pages. They had slash author pages, which used a different template and loaded in a different way. And they had slash newsletter or whatever. You know, they had a number of different types of templates in effect for the page. And you're right, Dan, whenever the article pages were slow because the banner image for the article was slow, that affected all 20,000 articles that they've got. And it's one small fix and suddenly boom, 20,000 articles are all better.
 
and everything's happy and it's very good. Where it gets confusing is whenever you've either got the grouping wrong, which they're actually very smart at, but again, you can't really see that, or you get outlier pages. So it will sit there and say, your slash articles are slow, and here's an example of 10 articles that we have individual Core Web Vitals for. And the first one passes, and the second one passes, and the third one passes.
 
All 10 pass, but in aggregate, that group is slow. That causes mass confusion with people going, these aren't great examples. And I get it, I totally understand why it's confusing. They're actually ranked in order of visitation. So the first 10 most visited pages are showing there and they might all be passing because the other confusing aspect of Core Web Vitals is it's not what Googlebot thinks of your website.
 
Charles Max Wood (35:39.977)
Right.
 
Charles Max Wood (35:48.209)
Ah.
 
Barry Pollard (35:59.986)
or what Lighthouse thinks of your website. So it's not a standard 'we run this page and it comes back fast or slow'. It's not even what Google thinks about your website; it's what the users are experiencing. So you might have a lot of people visiting older articles from a slower country or on slower devices, and people reading 'why is my website slow' articles are probably on slower devices. And...
 
if you get enough of those, suddenly that group looks slow, but the examples of the busy pages are fast, and it causes a lot of confusion that way. But it is a very complicated thing to try and aggregate all that information up in an easily presentable way of saying, hey, you've got a problem, here are some example pages. In a lot of cases it works brilliantly and everyone's happy; in some cases it's less brilliant and it's a bit more confusing, and you need to explain why
 
seemingly contradictory information has been shown to me.
 
Dan (36:59.539)
So yeah, so Google Search Console is the way to see the results aggregated across all the pages in your website, which is especially useful for larger websites. If you've got hundreds or even thousands of pages, then obviously it makes, it can make all the difference in the world. And then again, you can then check individual pages using a tool like PageSpeed Insights or Web.dev, right?
 
Barry Pollard (37:26.746)
Yeah, and PageSpeed Insights on the web.dev website will show you both individual pages, but it will also show you your whole website. And that's another complication that makes sense when you think about it, but takes a little bit of explaining. Because if you have 10,000 pages on your website and 1,000 of them are super fast, and they're the ones everyone visits,
 
Barry Pollard (37:56.022)
then you might see in Google Search Console that 9,000 pages are slow and only 1,000 are fast, and you might panic and go, oh no. Then if you go and check any of your popular pages, they might all pass in PageSpeed Insights. And even at the origin level, which PageSpeed Insights also shows, it will say most of your traffic is getting fast web pages. Because those other 9,000 are old pages, they're not cached in your CDN edge nodes, so they are taking a little bit longer to show. So
 
you get different views on the same data that sometimes don't seem to make sense until they get explained to you, or you think about it a little bit more. So the origin-level data that you get in PageSpeed Insights is very interesting, because it tells you whether your site as a whole is fast. Whereas Google Search Console is looking at it more at the page-by-page level, because you might want your slow pages to rank; you might want to know why they're not showing as fast. So it's different ways of looking at it.
 
Dan (38:27.008)
Yeah.
 
Dan (38:51.451)
Yeah, and another thing, I know another thing that confuses people is the fact that in some views you only get data if you've got enough visitors, and you've never told us what the limit is. So, you know, I guess that it's a few thousand a month, or a week, I don't know, something like that. You tell me.
 
Barry Pollard (39:15.934)
Yeah, you're not getting that out of me, Dan. We keep that unpublished for a reason. Um, yeah, we do, we do...
 
Dan (39:22.944)
But then you get origin data because for the site as a whole, there is enough traffic. So that can be confusing. And then in Google Search Console, you will actually get indication for pages even if they don't really have sufficient traffic, because they're yours.
 
Barry Pollard (39:43.162)
It's not actually different data. Again, it's different ways of grouping it. We have certain limits that we don't publicly disclose, below which we won't show it. And that's for two reasons. One, it's a public dataset, so we need to be aware of the privacy implications of being able to measure another person's site and how much traffic they get and that sort of thing. And two is also statistical relevance. If you're getting one or two website visits
 
from God knows where on God knows what device, at some point, whenever you don't get enough traffic, that could be a complete outlier. It could be statistically irrelevant. So you need enough traffic that we're comfortable saying, yeah, this is broadly representative of these pages or these page groups or this origin. So in Google Search Console, we don't actually give any more data, but because we gather it up to that group level, there's a better chance of passing that threshold. So again, this is why you see
 
a group of all your blog posts has an average LCP of 2,500 milliseconds, and here's your top 10 ones that are at 1,800 milliseconds. So they might have enough to show individually at a page level, but the group level is a different figure. Or none of the pages has enough to show at an individual level, but we can still give a group-level figure, because that passes that threshold, for those two reasons.
 
Dan (41:08.811)
Which brings me to another point: there are, like you said, various limitations on the data that you get from Google in this context, due to privacy issues and various other restrictions. There's also the fact that you aggregate across a 28-day window. So if you want, you know, finer-grained control, the other option is to just use a third-party RUM tool.
 
And there are quite a number of providers out there. You know, we've had several on our show; we had Sentry, there's SpeedCurve, Raygun is a sponsor, Akamai. There are quite a number of such tools. You can even roll your own, by the way; it's not that difficult.
 
Charles Max Wood (41:47.445)
Mm-hmm. Raygun is sponsoring.
 
Barry Pollard (42:00.518)
Yeah, and I think we're very much proponents of that. We're big fans of that. We've always said having your own RUM solution is the way to do it. CrUX is fantastic in my eyes. To me, it's democratised RUM. It's given free RUM to a vast majority of the internet, or at least the visited internet. So we're talking about 18 million websites that suddenly have some idea of what real users are actually experiencing on the website.
 
Prior to that, RUM was a very niche product. Um, it used to be quite expensive as well. Um, so it was the big boys that had it, you know, that could afford these sorts of RUM products. So we've now given it to everyone, but with major caveats, in that it's very high level. So the way I like to say it is, CrUX will tell you you've got a problem, but not necessarily why or how to fix it. A RUM product will allow you to then drill down into it in a lot more detail.
 
Which particular pages is it? Google Search Console is an attempt to do that, and as I say, sometimes it works fantastically. Often it works fantastically; sometimes less so. A RUM product will allow you to categorize these pages in your own ways. Maybe it shouldn't be done by blogs and articles and newsletters; maybe it should be done by products or product categories or whatever. And it will also allow you to look at your users and say, oh, the people from the UK who are logged in
 
get great performance, and the people from France who are logged out get poor performance. Why is that? Oh, we don't have this, we don't have that. It allows you to look at it that way. So, yeah, I would definitely recommend looking at RUM. And yeah, there are ways of running your own. Google actually provides a free library, the web-vitals JS library that I also look after, which isn't a full RUM product, but does give you the Core Web Vitals metrics and a couple of others.
 
It allows you, with a couple of bits of JavaScript, to stick it on your website, measure these metrics, and report them back to wherever you want. Google Analytics is one place that we show you can send them back to.
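
A minimal sketch of that, assuming the web-vitals npm package and a hypothetical /analytics endpoint on your own backend:

```js
import { onCLS, onINP, onLCP } from 'web-vitals';

// Send each metric with sendBeacon so the report survives the page unloading.
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'CLS' | 'INP' | 'LCP'
    value: metric.value,
    id: metric.id,       // unique per page load
    page: location.pathname,
  });
  // '/analytics' is a placeholder; point it at your own collector.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```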
 
Dan (44:02.935)
By the way, as far as I know, most if not all of the third-party RUM providers are actually built on top of that library.
 
Barry Pollard (44:12.938)
That's not quite true. A lot are, and more and more are. So I think Cloudflare uses it, and there are quite a few of the smaller players that use it. I think the bigger players, like Akamai, built Boomerang, which had been out for a number of years before that library, and some of the other more traditional RUM players have their own libraries. But yeah, we've seen a lot of new ones enter the market that are one-man, two-man shops that are giving great solutions to this sort of thing.
 
Dan (44:40.171)
So, you know, from my own personal experience: when I worked at Wix and I implemented Core Web Vitals measurement across all Wix websites, I initially rolled my own, because it actually predated that library. But now that I work at Next Insurance, and again, I actually rolled our own custom solution because of our, you know, special needs,
 
I actually did leverage that library, which is just excellent. And it's much easier than going directly to the DOM APIs.
 
Barry Pollard (45:16.522)
Yeah, I think that was the intent of it. Whenever we add something to the web platform, there's a constant argument between: should it be low level, and people can build on top of it, or should it be high level, this is exactly what you need and what you want. And generally there's a trend for getting more low level, because we don't know how these things are going to be used. So it's better to give you the shifts and you build CLS on top of those shifts, or we give you all the
 
Charles Max Wood (45:38.493)
Mm-hmm.
 
Barry Pollard (45:44.134)
interaction timings and you build on top of that. But the more we did that... I was having a discussion with Phil Walton, who created that library, the other day. There were certain things that were getting more and more complicated about how exactly CLS is calculated, how exactly LCP is calculated, particularly in sort of weird edge-case scenarios; if you open a tab in the background and then go to it, you're going to get a very high LCP, because that's when it first got painted, and so on. So...
 
he created that library with the intent of kind of explaining that. And it was kind of more of a reference library, rather than something we necessarily intended people to use. Um, but it's been very successful and it's taken off. I think even the RUM providers who aren't using it, we point them to that library and say, here's a coded example, not even docs; you can actually see in the code how it works for INP. This is the sort of thing you should be doing in your RUM library.
 
Dan (46:40.583)
Now, I'd like to pull us in a slightly different direction because we've been talking about how to measure and what is measured, but we've not talked so much about what actually causes the performance problems and how to deal with it. And I recently saw this kind of spicy tweet from Ryan Florence who apparently specializes in spicy tweets.
 
Charles Max Wood (46:55.63)
Yeah, I wanted to go there too.
 
Charles Max Wood (47:09.013)
Ha ha.
 
Dan (47:09.447)
And, you know, Ryan, for those who don't know, is one of the creators of first React Router and then Remix. So he wrote, and I'm quoting: web performance is almost exclusively about, one, being good at databases, two, parallelizing asynchrony, and three, distributing access, in that order.
 
I've got thoughts and feelings about that, but I wanted to hear your own, Barry.
 
Barry Pollard (47:42.63)
I'm going to go back to Steve Souders who kicked off this whole web performance sort of thing. I can't remember his exact quote but it was something like 90% of web performance is on the front end and only 10% is on the back end or something like that. So he advocated much more looking at what the browser does rather than necessarily what the database does or your network requests or things like that.
 
I think someone else recently, a couple of years back, re-ran some of his analysis and came up with basically the same conclusion. Browsers are fantastically complicated and capable; they're whole operating systems at this point. So a lot of what Ryan said there has a grain of truth to it. I'm not sure, like you, that I'd agree with it totally. And there is a lot to be done on the front end as well.
 
Some of the things he's talking about there, databases, asynchrony, whether he's talking about network fetches or more about what happens on the client, I'm not entirely sure. And again, distributing access is the same thing. But I think there is an awful lot that can be, that needs to be, done on the front end as well. I think with databases you typically get a lot of leeway; you can
 
get away with a lot of bad database code. I spent the first 10 years of my career looking at database code, working in banking. And databases are amazingly fast, and they forgive an awful lot of things. And to be honest, technology in general is amazingly fast and forgives a lot of poor practices, whether that be from users or developers. But I think the distributed nature of the web
 
means that on the client you have no idea whether it's going to be a fast Apple Mac with the latest GPUs and gigabytes of RAM free, or whether it's going to be a 10-year-old Android phone with one single CPU that they're also trying to play a game on, and watching a video in a picture-in-picture window, at the same time as browsing the web. So I'm not sure I totally agree with that. I'd be interested in hearing what your issues were with that tweet.
 
Dan (50:03.879)
Look, I think it really depends on what you're measuring. I mean, if we're looking at Core Vitals and the three Core Vitals we've mentioned before are LCP, CLS, and let's say INP, because effectively, soon that'll be the Core Vitals. Then CLS, that's not, databases don't have impact on CLS.
 
Uh, you know, things like jitter and jank, that doesn't really have anything to do with the duration of your database queries. Likewise, databases shouldn't really have an impact on your INP or FID, because if you're going to perform a lengthy database operation in response to a user interaction,
 
you want to show some visual cue up front so that the user will know to wait for the response. So out of the three Core Web Vitals, databases totally don't impact two of them. And even with LCP, it's kind of debatable, because it depends on how your web application is architected. So I really think it depends on what you're measuring,
 
which very much depends on what your web application is all about.
 
Barry Pollard (51:35.582)
Yeah, I agree. And I also think there's a tendency, like in the past, to concentrate a lot on load metrics. And I think a good point of the Core Web Vitals is that we're getting away from that. It's still very important, which is why one of the Core Web Vitals is still about that. But for the other two, I'd say CLS can be heavily influenced by load, and INP can be slow while the browser is busy there, but we're looking more holistically across the whole web page life cycle.
 
And maybe, I don't want to say dated, but if he can do a spicy tweet, I can give a spicy take: maybe that's a dated view of web performance. It's not only about load anymore, at least in our view at Google. It is a bigger thing than that. So, you know, people might be okay with a website taking slightly longer to show as long as it's a good experience when it's there. Ideally, it's great
 
in loading and in experience, but if your LCP is slow and your CLS and your INP are fast, maybe that's good, instead of everyone thinking, well, your LCP has to be fast, that's a given, and who cares about interactivity as long as it's there.
 
Dan (52:46.216)
Yeah. So there are differences. Are you building a marketing or, let's say, an e-commerce website where all the pages need to be searchable? And so SEO obviously matters for every page in the site. And it's mostly about loading times, because you really want the product page to load quickly.
 
Or maybe you're building some sort of a web application that's even sitting behind some login screen and, you know, I don't know, like a Twitter. The loading time for Twitter is not that important or it's less important compared to how quickly it responds once you're inside Twitter. So yeah, it really depends on what the website does, how it works and what the goals are.
 
But it, yeah, go for it.
 
Barry Pollard (53:42.606)
Jake Archibald gives a great example of this. So they put Photoshop on the web, and he said Photoshop taking some time to load is okay. You know, it's sitting there doing that loading bar, loading filters, loading this, loading that, whatever. That's fine, because I'm not coming here to just read an article and then go away. I'm coming here to do some hardcore graphics work, and I'll probably be in here for half an hour, an hour. That's a different expectation than clicking on a link.
 
To be honest, I'd say Twitter, or X, is closer to that, although quite often you just wanna click on a tweet and read it, or you just wanna check in while you've got five minutes to spare. So yeah, there are definitely different use cases for when you're gonna accept something as slow or whatever, compared to: you click on a product that you're interested in, or you click on, I don't know, I wanna buy black shoes, there's 10 websites, you click on one, it's taking five seconds to load.
 
I get that, I'm going back and I'm going to site number two and clicking on that one. Oh, it loads instantly. Right, I'm going with that one. So there it's much more important to have a quick load and, as I say, ideally a good interactive experience afterwards.
 
Dan (54:55.339)
So if we consider the three core vitals, then can you give us, from your experience, working with so many websites in your capacity, in your role at Google, what would be the top two performance-related recommendations for each one of the three core vitals?
 
Barry Pollard (55:18.542)
Okay, this is getting back to my JS Nation talk, which first put me on your guys' radar. Yeah, so at the beginning of the year we published a post, the top Core Web Vitals recommendations. We did a lot of work looking at what things people can actually do to make their websites fast. And there's a blog post out there, I will look for it, or I did a talk at JS Nation on it. And we came up with three recommendations for each metric, but I'll agree to your terms and try and narrow them down.
 
Charles Max Wood (55:23.899)
Yeah.
 
Barry Pollard (55:48.314)
And what we wanted to... We'll see if I can remember all mine. But what we wanted there was things that we think people can actually make a real-world difference with. Because we can sit there and say, don't use React, server-side everything, do that. Are people gonna do that? There's good reasons for people using React, and server-side rendering is more complicated. Or inlining CSS is another one.
 
Dan (55:48.339)
You can give three if you feel like it, for each one.
 
Barry Pollard (56:14.47)
It's amazing for making your websites faster, but it's really complicated to get right. And is it worth the hassle? So yeah, we came up with a few thoughts. So for LCP, the main thing there we've got to say is: put stuff in your HTML. The browser works best when it's given work to do. If your entire webpage is a div with id equals app and an app.js inside of it, then
 
you're basically saying, browser, I don't trust you to load this website. I trust this framework to do it better than you, or I trust my developers to load it better than you. Whereas if you sit there and give the browser as many resources as possible up front, even if it's a heavy JavaScript website, at least the browser can get started on those. Ideally that's HTML, you know, so it displays even before the JavaScript runs, even if it's a kind of skeleton screen type thing. If that's not possible, then
 
resource hints and preloads and that sort of thing. Give the browser something to do while it's not doing anything.
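
To illustrate that first LCP recommendation, here is a minimal sketch of a page that gives the browser real markup plus resource hints up front; the file names and CDN host are hypothetical:

```html
<!doctype html>
<html>
<head>
  <!-- Let the browser start connecting and fetching early, instead of
       waiting for the JavaScript bundle to request things later -->
  <link rel="preconnect" href="https://cdn.example.com">
  <link rel="preload" href="/img/hero.jpg" as="image">
  <link rel="stylesheet" href="/css/app.css">
  <script src="/js/app.js" defer></script>
</head>
<body>
  <!-- Real content (even a server-rendered skeleton) gives the browser
       something to paint before app.js arrives -->
  <div id="app">
    <h1>Headline rendered straight from the HTML</h1>
    <img src="/img/hero.jpg" alt="Hero image" width="1200" height="600">
  </div>
</body>
</html>
```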
 
Dan (57:19.696)
So a contentful response, which is better for both users and for search engines.
 
Barry Pollard (57:26.334)
Yeah, I mean search engines have come a long way, Google in particular, they can process JavaScript and they do very well with that. But yeah, a contentful resource. Secondly, fetch priority, I think, is magic. It's a new attribute that Chrome introduced. I think it was only this time last year, so it's been out there a while and we've been talking about it for a while. But it's basically a way for you to say: this thing is important. So...
 
Browsers are very good at loading webpages, but they have to be very generic. So they have to sit there and load websites in a way that will work for most websites. And one way they do that is they deprioritise images initially. They look at scripts, they look at your CSS, they say, this is what I need to render a website. We'll leave a big white blank square there for an image, and we'll get to that whenever I've finished all this critical stuff. Fetch priority. Very good.
 
Dan (58:22.823)
You're actually making a change there now, I think.
 
Barry Pollard (58:25.378)
We are, a very exciting change. So anyway, I'll talk about it in a minute, but fetch priority is a way of saying, this is my main image. I think it's important. I think it's super important. Consider this as critical as the CSS and your scripts and load it early. And then the change I presume you're talking about was Pat Meenan's change from a couple of days ago, is that the one? Yeah, so the downside of that
 
Dan (58:50.763)
Yes.
 
Barry Pollard (58:54.331)
is that we need website developers.
 
Dan (58:55.859)
No, but first say what it is. Ah, you're, okay, sorry.
 
Barry Pollard (59:00.254)
Yeah, so we need to, yeah, you need to sit there and say, this image is super important. Don't delay me, don't deal with me later, get me now. And the idea is hopefully it's there quicker, ideally whenever the first paint happens and the website shows, oh, I've got my nice super banner image or my headline article image or whatever. The downside is that developers need to then actually add that little bit of thing. It's one HTML attribute, it's not difficult, but it depends on the way web pages are built.
 
You know, if you've got a template that works in the CMS, maybe that's easy to do. If it's more dynamic or more complicated, it might be more complicated to do. So we started thinking, is there a way that we can do this automatically? It's very difficult to actually know what the most important image is; developers write all sorts of weird and wacky websites out there. You'd like to think that the website will be all in order, but with CSS it can be
 
Dan (59:50.923)
Ha ha ha!
 
Barry Pollard (59:55.006)
rearranged all over the place, with JavaScript more stuff can load and other things can go in there and stuff like that. So figuring out what the LCP image is, is difficult, whereas for an author, say if you've got a CMS and you've got a template, you might be able to sit there and slap that attribute on that LCP image because you know what it is. So we've been, and when I say we, Patrick Meenan is the one who should get all the credit, experimenting with: could we figure that out?
 
Could we pick the first image? Is that the one? And quite often, no, it's not. Maybe it's, I don't know, the logo for the website or it's a phone icon. Could we pick the first biggest image? Sometimes we know the size of the image, sometimes we don't until we see it. Well, he tried a few different combinations. What we settled on is the first five images: we're gonna sit there and try and boost those, not as high as fetch priority, so that still has a use case, but we're gonna try and boost them up to medium.
 
Fetch priority is typically used to boost it all the way up to high. So it will just start fetching those a little bit earlier and hopefully effect that improvement. The improvements in LCP weren't spectacular. Some websites are going to see a boost, but we're not going to see a 10% jump in pass rates or anything like that. But some websites definitely did notice it. In CLS we actually saw a bigger improvement than that, because getting images down earlier means that they will be drawn initially rather than
 
stuff moving around as they're loaded later after the first short period.
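
For reference, the fetchpriority attribute Barry is describing is a single HTML attribute; a rough sketch with a hypothetical hero image:

```html
<!-- Ask the browser to treat the LCP image as critical instead of
     deprioritising it like other images -->
<img src="/img/hero.jpg" alt="Hero banner"
     width="1200" height="600" fetchpriority="high">

<!-- The same hint also works on a preload, e.g. for an image referenced from CSS -->
<link rel="preload" as="image" href="/img/hero.jpg" fetchpriority="high">
```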
 
Dan (01:01:24.327)
If I can provide my own additional recommendation related to LCP from my own experience, is make sure you've got your caching headers done properly.
 
Barry Pollard (01:01:36.578)
I was actually going to cheat on LCP and do the third one. So our third one was CDNs, and particularly, yes, caching headers. Most people use CDNs, but they use them for their static content. They use them for their images or for their JavaScript; they've got static.bbc.co.uk or images.cnn.com. Few enough people have either a CDN or caching headers on the HTML itself, because your blog site...
 
is so critically important you can't even cache it for three seconds or an hour, because when you publish that new blog article it's got to be out there. You publish one a month, but you have a cache control of zero seconds just in case that's the second you're going to publish it. I really encourage people to look at that. I normally do three hours on most of the websites I do. Now, I don't look after news sites like Reuters.com, or the Gap.
 
Charles Max Wood (01:02:12.989)
hahahaha
 
Dan (01:02:35.18)
In that context, it's important, it's worthwhile noting that you can distinguish in the caching headers between caching in a shared cache, like a CDN or proxy, and a private cache, which is your browser cache. And if for some reason you're wary of caching in the browser because you're worried about somebody being stuck with, well, not so much necessarily an outdated version.
 
I've seen developers worry about being stuck with a buggy version. When you're caching in a shared cache, like a CDN, you should be able to put a longer caching header and you will get much of the benefit. So even if you're putting zero in max age for the browser cache, you will get much of
 
Barry Pollard (01:03:23.478)
Yeah, I think some of them like...
 
Dan (01:03:31.147)
put a higher value for s-maxage.
 
Barry Pollard (01:03:35.082)
Yeah, and a lot of them, well, not as many as I'd like, some of them allow cache invalidation. So you launch a new version of your site, you can tell the CDN, get rid of it immediately, and it takes effect now. And if I don't launch a new post for a month, then that never gets invalidated. So you get the best of both worlds, in that you can have a long s-maxage, as you say, and then also instant invalidation whenever you need it.
 
Dan (01:04:02.668)
But even if you, like you said, it's a feature that I think is now available with most CDNs, by the way, and there's no reason not to use it. But even if you're not using it, even putting a shared cache duration of, let's say, 10 seconds can potentially mean that 90% of your users or visitors get a cached version.
 
Barry Pollard (01:04:21.397)
Yeah.
 
Barry Pollard (01:04:27.122)
That's really important because if you've got, you know, you do that viral post that everyone loves and 10,000 people come to your website in the space of an hour. If you've no caching at all, then everything goes back: what, 10,000 people are all hitting your tiny little cheap old origin server that you launched whenever you first launched your blog and forgot to upgrade. So that's a big sudden wham, and then suddenly half of it doesn't work and it's slow. You set 10 seconds.
 
So 10,000 people divided by, well, there's 3,600 seconds in an hour, so 360 ten-second windows in an hour, and suddenly only about one request per window is actually hitting your origin. Everyone else is getting the cached version. So you check it from Rio de Janeiro in Brazil, they've got a local copy that doesn't have to go all the way back to your server. So you're right, setting a ridiculously low time, five seconds, 10 seconds, a minute, is rarely gonna cause a problem unless you are a Reuters.com or a
 
BBC.com that really wants to do a news website and do that. And the impact can be amazing, because as I say, 99% of your traffic on a busy day will suddenly get that boost of having it all cached at the edge. So yeah, those are the LCP ones. Now for the CLS ones.
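
As a rough sketch of the caching idea discussed here, using an Express-style handler (the route, durations and renderBlogPost helper are illustrative, not from the episode): max-age applies to the browser's private cache, while s-maxage applies to shared caches such as a CDN.

```js
const express = require('express');
const app = express();

app.get('/blog/:slug', (req, res) => {
  // Browsers revalidate every time (max-age=0), but the CDN can serve the
  // cached copy for up to 10 minutes, absorbing a traffic spike at the edge.
  res.set('Cache-Control', 'public, max-age=0, s-maxage=600');
  res.send(renderBlogPost(req.params.slug)); // renderBlogPost is hypothetical
});

app.listen(3000);
```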
 
Barry Pollard (01:05:56.354)
CLS: the back/forward cache would be my big one on that. So to explain that, and again it affects SPAs less, but for any website, whenever you load it, the browser gets all the resources, puts them together like a big jigsaw, things are shifting around while it's loading, the ads are coming in, and then the page is done. And you might see a lot of CLS during that.
 
And let's say it's a newspaper site, you're on the BBC or whatever, and you click on an article to read, you go to that, you read it, then you're probably going to click back to the list of articles and pick another one. When you go back, with your cache control headers most of it will be stored in the browser, but it still has to rebuild that whole page. Yeah, it's slightly quicker because it's got all the resources to hand, but it's still...
 
particularly in JavaScript heavy websites, it's got to sit there and do that and you quite often see that CLS impact again. What the back forward cache does is it stores it in memory for a few minutes while you go away, not forever. And then when you click back, it just says, I've already built the page, here you go. And that can be a real game changer in both LCP and CLS, but particularly CLS where we saw the big impact whenever we launched it. So check that you're eligible for that.
 
Because by default you are, but there's a couple of JavaScript APIs you can use, or if you set Cache-Control: no-store, or if you use unload handlers, both of which we're trying to see if we can get around for the back/forward cache. But at the minute that's the case on desktop: if you use those, you won't be eligible and you'll have to do that whole rebuild again. And that can have a real impact on your CLS. Other than that, CLS is kind of the normal stuff: just give everything dimensions and make sure that nothing's...
 
If something's loading late, make sure you've reserved space for it. Make sure your images have height and width and that sort of thing.
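
A small sketch of checking back/forward cache behaviour from JavaScript, using only standard DOM events: pageshow fires with persisted set to true on a bfcache restore, and pagehide is the safer alternative to unload handlers, which make a page ineligible.

```js
// Log when a navigation was served instantly from the back/forward cache
window.addEventListener('pageshow', (event) => {
  if (event.persisted) {
    console.log('Page restored from the back/forward cache');
  }
});

// Prefer pagehide over unload for cleanup, since unload handlers
// block bfcache eligibility (saveAnalytics is a hypothetical function)
window.addEventListener('pagehide', () => {
  // saveAnalytics();
});
```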
 
Dan (01:07:48.548)
So I would add two more things related to CLS. One, if you're using custom fonts, you probably want to preload them, and with high priority. Although they, by default, will have high priority for being fonts. And the second thing is be careful when using font-related CSS units.
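
A minimal sketch of Dan's font suggestion; the font file name is hypothetical. Note that font preloads need the crossorigin attribute even for same-origin fonts, and font-display controls what happens while the custom font loads.

```html
<link rel="preload" href="/fonts/brand.woff2" as="font"
      type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    /* "swap" shows the fallback immediately; "optional" avoids late swaps */
    font-display: swap;
  }
</style>
```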
 
Barry Pollard (01:08:17.146)
Yes, I agree with both of those. I would say that fonts... I say preload them anyway, because I hate that whole inflation sort of thing that you get where it loads the fallback font and then suddenly the custom font comes in. Yeah, it just annoys me, that whole thing.
 
Dan (01:08:34.246)
You can't win!
 
Barry Pollard (01:08:36.738)
But preloading it gets rid of that. But I will say, in general, those are usually quite small shifts. So what you're doing is you're kind of, a font might go, a headline might go to two lines, in which case it'll be larger. But usually it's kind of, you're getting smaller percentages. But yeah, it does cause a shift. Whereas an image not having height and width has a much bigger impact.
 
Dan (01:08:52.927)
Yeah, except if it's the...
 
Yeah, unless it's the menu at the top and the font loads and pushes the rest of the page down. So the shift is small, but it impacts the entire page.
 
Barry Pollard (01:09:06.378)
Yeah, because CLS looks at how much shifted and how much it shifted by. So you're right, if it's the menu at the top, it will shift the whole page, or 90% of the page, a little. It will be measurable, definitely, but because it's a small amount it will potentially not push you out of the good and into the bad. But generally, fonts, I find, are more annoying for that inflation effect than for the CLS thing.
 
Dan (01:09:31.963)
And another annoying thing that I see in this context is when, between the fallback and the main font, a line breaks or doesn't break. So it's two lines and then it becomes one line and then it pulls everything up, and it's really annoying when that happens. Anyway, so we spoke about CLS, and the last one is FID slash INP. What would be your recommendations there?
 
Barry Pollard (01:09:58.862)
Stop using so much JavaScript. I'm sorry, I'm in the wrong podcast for this. But no, seriously, that's like, again, going back to those emails we talked about at the beginning, I've had a lot of panicky people contact us. And we have a lot of great developer documentation on INP, but it's very low-level developer documentation of, this is how you should write your JavaScript and so on and so forth. A lot of it's out of people's control, it's third-party JavaScript. It's...
 
Dan (01:10:03.153)
For JavaScript Jabber!
 
Charles Max Wood (01:10:03.56)
Hahaha
 
Barry Pollard (01:10:27.45)
quite often Google embeds: Google Tag Manager, Google Maps, YouTube, all those sorts of things, but you don't have any control over how those are developed. So I'm actually changing my opinion on this a little bit, and I think we need to change our approach there, particularly for site builders, and by that I mean non-developers. So if you're using WordPress or Wix or Shopify...
 
and you're building your website and you're maybe not the most technical person. You're not writing all this JavaScript. You're just trying to run a business. And you get these scary emails. What are you supposed to do? Um, I think for a lot of that it's looking at what you're loading on your website and do you need it? Do you have a tag manager with all your marketing tags from summer 2021? And do you just like to add tags or your marketing department likes to add tags, but they never liked to take them away.
 
Did you try 46 different plugins before you found the one that was right but forgot to remove the other 45? So is there a lot of junk loading your website that just isn't being used? So that's my biggest advice. Are you using the Google Maps SDK? Which is quite a chunky thing because you have a map on your Contact Us page but you're loading it on every single other page. That's overkill. Have a look and see what you're loading in each of your web pages and whether you need it. Is there old stuff? Can you do...
 
a spring clean. That would honestly be my first advice for INP: just clean up your website and see what you're loading. For people who've been looking at this for a while, the Dan Shapirs of the world and Chuck Woods, you know, maybe you've already done that and you've got a nice website and you need something a bit more detailed. We've got great docs on that. But for most people, and particularly non-developers, that's the first step. Just do a quick run-through, a spring clean of your website, and remove all the stuff that you don't
 
Charles Max Wood (01:12:08.309)
Hehehe
 
Barry Pollard (01:12:21.706)
end up using. Oh, that analytics product that I thought was going to be great, but I never had time to look at, do we take it away or whatever. And then similarly, yeah, for your own website, for the code that you are in charge of, it's break up long tasks. I mean, that's what INP is all about. It's about stopping these big, huge, long chunks of JavaScript saying,
 
Charles Max Wood (01:12:39.477)
Mm-hmm.
 
Barry Pollard (01:12:48.386)
I am more important than the user, screw the user, I'm going to take as long as I like to do this. So can we put more breaks in there? And I think that's a responsibility of ourselves as web developers, but also the likes of the JS framework developers, the libraries, the third-party people, the Googles of the world. I think all of us need to get more friendly with that main thread and give up more of our time there and say, yeah, someone else can get a little bit of time on this thing.
 
JavaScript, by default, just sits there and it's greedy. It'll just hold onto that main thread until it's done. Whereas what you need to say is, eh, I've done enough, get someone else to go, and then I'll have another go in a minute.
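
A common pattern for the yielding Barry describes is to break one long task into chunks and hand the main thread back between them; a sketch, assuming a hypothetical processItem unit of work, that uses scheduler.yield() where the browser supports it and a setTimeout fallback elsewhere:

```js
function yieldToMain() {
  // Use the newer Scheduler API where available, otherwise fall back to a macrotask
  if (globalThis.scheduler && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processItems(items) {
  for (const item of items) {
    processItem(item);   // hypothetical chunk of work
    await yieldToMain(); // let input handlers and rendering run in between
  }
}
```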
 
Dan (01:13:31.787)
So if I can quickly add to that, and I know that we are running towards the end of our show, but I really wanted to add a few points. So if you're a developer and you've cleaned up all the third-party scripts that you can, and you're still seeing INP issues or performance issues related to all these scripts, then you should look at Partytown.
 
Charles Max Wood (01:13:31.933)
Right.
 
Dan (01:14:01.343)
It's not an easy undertaking. It can actually be quite challenging, because it really depends on which, you know, pixels and scripts you're using. Some of them have kind of, quote unquote, out-of-the-box solutions. Many of them do not. But if you can get it to work, it can make a huge difference. So that's that. And the other thing, that I'm really happy about,
 
is that a lot of the frameworks are coming out with newer versions that are tackling this long task issue head on by looking at reducing the amount of JavaScript being downloaded, splitting the execution of that JavaScript into smaller parts, smaller tasks, and generally being smarter about it. And
 
If you're using React, and you need to be or want to be using React, then the recommendation that I would give you is: be careful of re-renders. I'm seeing a lot of cases where, during that initial render phase or hydration phase, there are a lot of re-renders going on,

you know, tacked on one after the other, and it results in a really long initial render stage.
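
A hedged sketch of the re-render trap Dan mentions: a parent that creates a new object and a new callback on every render defeats memoisation in a child. The component names here are made up.

```jsx
import { memo, useCallback, useMemo, useState } from 'react';

// Re-renders only when its props actually change
const ProductList = memo(function ProductList({ filters, onSelect }) {
  /* ...render a list... */
  return null;
});

function Page() {
  const [query, setQuery] = useState('');

  // Without useMemo/useCallback these would be new references on every
  // render of Page, so the memoised ProductList would re-render anyway.
  const filters = useMemo(() => ({ query, inStock: true }), [query]);
  const onSelect = useCallback((id) => console.log('selected', id), []);

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ProductList filters={filters} onSelect={onSelect} />
    </>
  );
}
```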
 
Barry Pollard (01:15:34.974)
Yeah, I think INP measures three things. So FID measures that input delay. INP also measures your input delay, which is when other stuff is running and your handler can't get going. It might be the exact same interaction, because you double-clicked that thing that's taking forever, or it might be your code, it might be some other code, it might be third-party code, but that delay is one part of it. Then there's your code. If your code takes too long, that's the easiest bit, because you can quite often optimize that. Your code is running: how long is that?
 
And then the third component of INP is the rendering stuff. And again, you're right, that often gets forgotten about: how long until you get that next paint, something has to happen. If it's a big, old-school JavaScript framework from a few versions ago, it might have to re-render the whole page, and that might be really expensive and really complicated to do. So yeah, and like you, I'm really excited about the way they're doing it. And this is kind of the point of Core Web Vitals, to drive this stuff. It isn't necessarily to say all site owners have to fix everything;
 
it's to say we all have a responsibility. So React, in its latest versions, tries to chunk things up into 50-millisecond chunks, max. That's not saying you can't go in there and put a 300-millisecond thing or a one-second piece of code in there, so they can only do so much, but it's great they're starting to think about that and trying to take these lessons on board, trying to be performant by default. So yeah, very excited about that.
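
For anyone wanting to observe those INP components in the field, a small sketch using the open-source web-vitals library (the analytics endpoint is hypothetical):

```js
import { onINP } from 'web-vitals';

onINP((metric) => {
  // metric.value is the interaction latency in milliseconds
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,     // 'INP'
    value: metric.value,
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  }));
});
```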
 
Charles Max Wood (01:17:02.665)
Awesome. So I did have one more question. I know we're kind of getting toward the end of our time. And maybe you can just answer this quickly. And I think I kind of know part of the answer. But I've talked to a few people about Core Web Vitals, and they get frustrated. They're like, why does Google even care how my site performs? Don't they just care about the content and that it answers the person's question? So can you just answer that in like a minute or two?
 
Barry Pollard (01:17:32.955)
Yeah, I mean, I guess we want the web to succeed. And this is very much my opinion, by the way. There isn't a textbook answer to this that you're given when you join Google. But my opinion is we want the web to succeed. And I'm not sure what the ranking impact of Core Web Vitals is, or how much of it there is. I think there's been a lot of talk about whether that was over-egged at the beginning, whether it's a much smaller impact and stuff like that. But given two sites that can both give you
 
the black shoes I mentioned before, or the who-won-the-football-last-night result, of course we're going to want to rank the one that's got the better experience higher there, all else being equal. I think the caveat is, all else is quite often not equal, and barrysblog.com isn't going to be as authoritative as the BBC or Wikipedia or whatever. So there's quite a lot to take in there. Having the most performant website doesn't mean you should rise right up to the top of the rankings.
 
Charles Max Wood (01:18:10.101)
Mm-hmm.
 
Barry Pollard (01:18:28.802)
But in general, yeah, we want the web to succeed, and the more people use it, well, Google's in the business of the web, and it benefits us, and it's good for us. We also, on the product side, use the web a lot, and we get lots of complaints about Chrome using up all my energy and stuff like that, because people have 500 tabs open on the most god-awful websites ever that are really hogging the main thread and killing your computer. So I think that's...
 
my opinion of why Google cares about this sort of stuff.
 
Dan (01:19:01.807)
Yeah, I agree. And given that I've never ever worked at Google, so all my perspectives are from the outside looking in. But it's kind of like how Google pushed the use of HTTPS over HTTP. You know, you might say, why does Google care if websites are secure or not secure, if they're hackable or not, if you get, you know, somebody spies on your traffic or not?
 
Google cares because luckily for us, at least that's the way I see it, Google's business interests often align with the success of the web, with the success of this platform that we're using. And we're lucky in that. And it causes Google to promote best practices that are good for all. So it's security, it's accessibility, and it's also performance.
 
Barry Pollard (01:19:57.39)
And some of those things cost Google in various ways. There was talk, whenever that HTTPS push was happening, way before I joined Google, that crawl times would take longer and Google Search would take longer, because there's an actual processing impact to using HTTPS rather than HTTP. Google Ads often fights with us in the Chrome team sitting there saying, make your ads more performant, and so on. So Google isn't one monolithic
 
company; there's lots of different interests there, but in general you're right, we want the web to succeed.
 
Charles Max Wood (01:20:33.845)
Awesome. All right. Well, let's. All right.
 
Let's go ahead and do picks. Dan, do you have some picks for us?
 
Dan (01:20:48.995)
I have a pick and a half. So my pick is we're watching Silicon Valley, the TV show, the sitcom. And you know, it's kind of old. I mean, I think it's a decade old or something like that. But somehow I saw snippets from it, but never actually watched it in its entirety. And we're having a blast. We're up to season four.
 
Charles Max Wood (01:21:00.607)
Mm-hmm.
 
Dan (01:21:18.407)
And it's so good, not only is it funny and intelligent and engaging, but I find myself in so many situations looking at the scenarios that they're describing or the characters on the show saying, I know this person, or I've experienced that. So even some of the wackiest things, like the most nonsensical things that happen on the show, I turn to my wife and tell her, you know,
 
I've actually experienced this, or I've seen this happen. And it's so amusing in that regard. So yeah, so I'm really loving it. And I highly, if for any reason you've not seen it, I highly recommend watching the show, Silicon Valley. So that would be my main pick or my primary pick. And then my other pick, my other two picks are...
 
are the ongoing war in Ukraine, which is still ongoing, and help the people there, and the ongoing fight for democracy in Israel. You know, again, I don't know what you can do to help us, but if you can show support, please do. And those would be my picks for today.
 
Charles Max Wood (01:22:34.865)
All right. Um, I'm going to throw in some picks. So I'm going to pick something that I've picked before, but, uh, we were playing this last night, uh, me and my kids and my wife, um, it's a game called the crew, it's a card game, um, and effectively you get missions, which are just different cards that have to be, uh, taken in tricks by different players. Um, and they have like.
 
I think they have 50 missions that you can complete. And so yeah, anyway, you deal out the cards, there's a trump suit, it's pretty straightforward. We were playing it with five players, which is actually a little bit harder. I think the best experience I've had is playing it with four. I've played it with three as well. You can play it with two to five players. Each round takes anywhere from a few minutes up to maybe 15, 20 minutes.
 
Um, it has a complexity rating of 1.96, so it's a pretty simple game. Um, but it's a lot of fun. We've really, really enjoyed it. Um, it's rated for 10 plus, and that's probably about right. I have a daughter that's almost eight, and she knows her colors and numbers, but some of the strategy is just a little bit more advanced than what she really processes, right? Cause you
 
You have to communicate to the other players. You have a token you can put on a card in front of you. So you put one of your cards down face up so people can see you have the card, and then you put a token on it to let them know if it's your highest, lowest, or only card of that color. And then knowing when to take a trick and when not to take a trick, I think those are the areas where it's just a little bit beyond where she's at. But the other kids enjoy it. I have an 11-year-old that he was playing, and he really liked it. And so.
 
Anyway, I'm going to pick that. And then I'm kind of, I've been talking to a lot of people about...
 
Charles Max Wood (01:24:45.181)
where they're at and what they're struggling with. And what I'm finding is kind of two things. And so I'm just going to talk a little bit about something I'm working on. And that is that people either look at the current job market and they're not super confident about where they could land if something went wrong. And they're not super confident they could get a job that they would enjoy, even though they may not be happy anymore where they are. And so people are kind of camping out. And so they're like, how do I find a job in this market? And the markets vary from.
 
where you live and all that stuff, right? So I'm painting it broad strokes, but I've found that there are certain things that people can do in order to help out. And so I'm putting together a group of people that get together on a weekly basis and network and learn new things and have people come present and stuff like that so that we can grow that. And I'd love to just get people's feedback. And so if you just go to topendevs.com,
 
slash group feedback. I'd just love to talk to you for 15, 20 minutes, maybe a half hour about what this is and what your problems are and how this could solve it or not solve it or do better or whatever. I just want the feedback so that I know that I'm kind of chasing the right thing, if that makes sense. And then I've also been enjoying the Women's World Cup. Now, I'm like four or five games behind.
 
So if you know who's won any of the matches in the quarter final, shut up. Um, and, uh, yeah, uh, I'm, I'm probably gonna watch one or two of the matches today, um, but yeah, I'm, I'm really, really enjoying it. So, um, even though the U S is out and Italy is out, um, anyway, so I'm going to pick that as well. Uh, Barry, what are your picks?
 
Barry Pollard (01:26:39.764)
It's the time of the year so I'm going to pick holidays. It's a bit of a generic one but I actually at the minute when I'm meeting people and I get them out of office and they're out for a week or two weeks I'm actually, I get a big smile on my face, I'm like good. We all work very hard throughout the year and it's great at the moment to actually see so many people away and actually going away and enjoying life. Live to work.
 
Don't... no, sorry, that's the wrong way around. Work to live, don't live to work. So it's a big time of year for everyone going away, and I would encourage everyone to go out there, take your time off, turn your phone off, turn off your work email and actually just chill. If you've got family, play with them and relax. So a big shout out to holidays in general. And then the other one I'm gonna say is,
 
Dan (01:27:24.832)
Totally.
 
Charles Max Wood (01:27:26.226)
Yeah.
 
Barry Pollard (01:27:32.542)
All the people that do try to move the web forward. Again, a bit of a generic one, but I've been doing stuff on standards and I do a lot of work with the Web Performance Working Group, which Dan's part of as well. I got my first commit to the HTML spec last week. A very minor tweak, but I'm pretty excited about that. But no, there's a whole wealth of people out there, not all of them working for the big companies like Google or getting paid Google wages or anything like that,
 
Dan (01:27:51.461)
Oh, congratulations.
 
Barry Pollard (01:28:02.37)
really try and commit and make the web better. So big shout out to everyone that raises bugs and tries to improve things rather than just moaning, shrugging and turning around and moving on. Particularly working in Google, I see that side of it now a lot more than I used to.
 
Dan (01:28:14.323)
Yeah.
 
Dan (01:28:19.039)
I feel really good about myself whenever I submit, let's say, a bug on the Chromium bug repository. If I can provide a good example, then even more so. So for sure, I'm totally with you on that. Some people are really doing amazing jobs.
 
pushing the entire platform forward for all of us.
 
Barry Pollard (01:28:49.61)
Yeah, and I think everyone can contribute to that. You don't have to know C++ and be able to write browsers. Just raising bugs with a decent test case is 90% of the work. That percentage is totally made up. But you know, everyone can contribute to that. And whenever I see people doing that, again, it makes me happy. You get another little smile there.
 
Dan (01:29:02.859)
Ha ha!
 
Barry Pollard (01:29:10.779)
Those are my picks.
 
Charles Max Wood (01:29:11.201)
Awesome. All right. If people want to follow you online, where do they find you?
 
Barry Pollard (01:29:17.738)
I'm still hanging around Twitter, clinging to the skeleton before it dies its last death. I refuse to call it X. Um, so I'm tunetheweb on Twitter. I'm pretty much tunetheweb on most other platforms too, but Twitter is the one that I'm still, um, hanging onto for dear life.
 
Dan (01:29:21.797)
X.
 
Dan (01:29:36.955)
I'll say something about this, you know, it's funny. So for a while, my about me slides and talks started expanding and growing. Like in addition to the, uh, my Twitter handle, I added the Mastodon handle and then the blue sky handle, and I never actually got around to threads, but, but then now it seems to be contracting back again because I'm not really active.
 
Charles Max Wood (01:29:36.968)
Okay.
 
Dan (01:30:05.347)
on those other platforms. Somehow I'm still mostly or almost totally only active on Twitter slash X. So hopefully it survives. What can I say?
 
Barry Pollard (01:30:18.318)
Yeah, not so hopeful, but well, I'm hopeful but not... No, I'm hoping but not hopeful, if you know what I mean.
 
Charles Max Wood (01:30:18.362)
Alright.
 
Dan (01:30:26.208)
Hahaha
 
Charles Max Wood (01:30:27.173)
Yeah, there are a lot of things that go into that. I don't know. It seems like Elon Musk has some other idea for what he wants it to end up being. And I've seen a lot of people say, you know, Elon said this or did this or whatever, and I don't like it, and so I'm going to quit Twitter. But it's where everybody is. So unless he makes it into something that nobody wants to use,
 
I think that's just where people are gonna be.
 
Barry Pollard (01:30:58.386)
I do think there's a lot of people leaving it, which is a real shame. I think I owe my job to things like Twitter; I met a lot of people online, and being able to speak to the people who work on the projects and the specs and stuff like that, with people being very open with their time, it's fantastic. And it's a shame.
 
Charles Max Wood (01:31:06.47)
Yeah.
 
Dan (01:31:15.38)
The way that I see it, the way that I see it, I'm interacting on Twitter with people that a lot of them are my friends or people that I find interesting. I don't interact on Twitter with Elon Musk. So I don't really care so much about what he says or does in this context. I'm more worried that he might break the platform.
 
Charles Max Wood (01:31:42.675)
Right.
 
Dan (01:31:43.143)
by doing something unfortunate. But the other things that he's doing, I get why some people are really upset, but at the end of the day, Putin, he's not. So let's put things in perspective and proportions. And you know, I'm still there.
 
Charles Max Wood (01:32:05.969)
Yep. All right. Well, I'm going to go ahead and wrap this up here. And until next time, folks, Max out.