
Upcoming Performance Metrics for the Web - JSJ 542

  • Guests : Annie Sullivan, Michal Mocny, Yoav Weiss
  • Date : Jul 26, 2022
  • Time : 1 Hour, 16 Minutes
Today we have three guests on the show, Annie Sullivan, Yoav Weiss, and Michal Mocny, all of whom are engineers who work for Google on the Chrome Web Platform.  Looking ahead to Google's new developments for measuring web performance, we dive deep into Largest Contentful Paint (LCP) and the upcoming metric Interaction to Next Paint (INP), a full page lifecycle metric.  We discuss which user page interactions we can measure successfully and which we cannot.  We also discuss the challenges of single-page applications when looking at Core Web Vitals.


Transcript:
Charles_Wood:
Hey everybody and welcome back to another episode of JavaScript Jabber. This week on our panel we have Dan Shappir.
 
Dan_Shappir:
Hi from sunny and hot Tel Aviv.
 
Charles_Wood:
AJ O'Neil.
 
Aj:
Yo, yo, yo, coming at you live from Waffle-Topia.
 
Charles_Wood:
We also have Steve Edwards.
 
Steve:
Howdy from a sunny and then rainy and then sunny and then rainy Portland.
 
Charles_Wood:
I'm Charles Max Wood from Top End Devs, and this week we have a few guests. We have Annie Sullivan back. Annie, do you wanna say hi?
 
Annie_Sullivan:
Hi from Michigan.
 
Charles_Wood:
We also have Yoav Weiss.
 
Yoav_Weiss:
Hi from Southern France.
 
Charles_Wood:
We also have Michal Mocny.
 
Michal_Mocny:
Good to meet you folks. It's a foggy one up in the North Bruce Peninsula.
 
Charles_Wood:
So yeah, so you all, do you all work for Google?
 
Annie_Sullivan:
Yeah, we're all engineers on Chrome Web Platform.
 
Michal_Mocny:
We do.
 
Charles_Wood:
Let's just start there.
 
Charles_Wood:
Okay, good deal. And this is kind of a follow on to the episode we had you on before, Annie. Do you wanna kinda, I'm not sure exactly where we left off or where we wanna start here. So I'll kinda let you take the reins and lead us out and then we'll see where we end up.
 
Annie_Sullivan:
Yeah, so last time we talked a lot about Core Web Vitals and the goals behind
 
Charles_Wood:
Mm-hmm.
 
Annie_Sullivan:
them and where they came from. And I also thought it would be really fun to talk about where they're going. And so I invited Yoav, who's leading our efforts to integrate single page applications into Core Web Vitals, and Michal, who's working on our new responsiveness metric, Interaction to Next Paint, to come on and talk about those things.
 
Charles_Wood:
Very cool. I feel technically inadequate with you guys on here.
 
Steve:
I agree. This is one where I'm going to just be sitting back and listening the whole time.
 
Dan_Shappir:
Yeah, but this is totally my jam. This is like, this is what I live for.
 
Charles_Wood:
Right? So, yeah, so I guess we talked a little bit about Core Web Vitals and where things came from and where they're at now. One thing I'm curious about is, as you start looking at, okay, where are we gonna take this, how do you make that decision? Right? Because it seems like there are a million metrics you could use, right? There are a million ways you could make that decision. And so, how do you make that decision?
 
Annie_Sullivan:
Yeah, there's a couple of things that went into it. I'll talk about responsiveness first. So when we look at the original Core Web Vitals, they were really intended to be those three metrics, and it's supposed to be like a three-legged stool. We have this page load metric, largest contentful paint. And people were concerned, like, what if they try to cheat by throwing up pieces of content really fast? And cumulative layout shift, which measures content shifting around, can really backstop that. You can't just throw some stuff on the page willy-nilly and have it kind of shift all around, because you have cumulative layout shift. Similarly, there's a big concern: what if you throw the content up really quick, but then you have a ton of JavaScript and it just freezes the page? And first input delay is meant to address that. But the consistent feedback that we're getting from web performance experts, and also concerns from people who work on the web platform, is that first input delay is not a strong enough backstop, that there's just tons of JavaScript slowing down the user experience, and we need to do something more about it to really be measuring the user experience. And so Michal's been working on this new metric, interaction to next paint. That has kind of two differences from first input delay, which is our old metric. The first one is that it measures the whole time until the next paint, from the user input until the paint. First input delay just measures the time to process the event, so that's just the kind of main thread busyness. Second, it measures all of the interactions on the page. So that, we think, is really capturing the user experience. And when we look at the numbers, we compared, for example, first input delay versus interaction to next paint. We looked at the HTTP Archive, and we took three million sites that had these metrics and that also had lab data from Lighthouse.
And we found that interaction to next paint correlates with things like total blocking time that show there's a lot of JavaScript running and blocking interactions. So we think that interaction to next paint is a much better leg for that three-legged stool, one that measures the slowdowns that people get from JavaScript.
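As a rough illustration of the event timing entries Annie describes, a page can observe them and keep the single worst duration seen so far. This is a simplified sketch, not the official INP definition, and the 40 ms `durationThreshold` is just an illustrative choice:

```javascript
// Simplified sketch: track the single worst event duration seen so far.
// An event timing entry's `duration` runs from the input timestamp until
// the next paint after handlers finish, which is the span INP cares about.
function makeWorstDurationTracker() {
  let worst = 0;
  return {
    record(entry) {
      if (entry.duration > worst) worst = entry.duration;
    },
    worst() {
      return worst;
    },
  };
}

// Browser-only wiring via the Event Timing API (guarded so the sketch
// also loads outside a browser):
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  const tracker = makeWorstDurationTracker();
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) tracker.record(entry);
    console.log("worst event duration so far:", tracker.worst());
  }).observe({ type: "event", durationThreshold: 40 });
}
```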
 
Aj:
So could you rephrase the meaning of that one more time for me? Because I think I understand it, but the words on their own don't quite make sense of the name of it: interaction to next paint.
 
Annie_Sullivan:
Yeah, Michal, if you want to jump in, you can. Michal proposed the name.
 
Michal_Mocny:
Sure.
 
Charles_Wood:
It's his fault.
 
Michal_Mocny:
So yeah, throw me under the bus. So I think it's easy to make these things sound more complicated. So I want to start by just focusing on the goal, and then we can go back to some more details. So when you interact with the page, you should see the feedback, you should see the result of that interaction, quickly.
 
Aj:
Yep.
 
Michal_Mocny:
So it's from the interaction, from the time you tap the screen or the time you type on the keyboard, until the next frame is actually shown on screen, until the pixels appear on the screen. And those pixels have to have the feedback of that event. We're constantly animating things, and so it's the first frame that actually had that interaction's content in it.
 
Dan_Shappir:
I apologize for barging in, but how do you know? I mean, pixels get drawn all over the place, and especially with JavaScript, which, you know, can do whatever it feels like in response to an interaction. How do you know that a particular set of pixels is a response to that interaction and not, I don't know, some video playing or some animated GIF or something?
 
Michal_Mocny:
Great question, Dan. So we're building on work that has been in the works for years and years. This is work that we needed for metrics like LCP and a whole bunch of other responsiveness metrics that measure presentation times, paint times. So in Chrome, we instrument side effects, whether they come from the main thread or the compositor thread. And as you make changes, we sort of pass data structures along and report feedback when the pixels arrive with those changes. So in this case, an event is handled by the browser process. It gets sent to the renderer process. It gets queued up on the main thread. The events are actually dispatched, actually run by the JavaScript handlers that the developer registers. At some point when those are finished executing, finished processing, we will eventually schedule a new main thread rendering task, which is the first opportunity on Chromium browsers to take all of the effects that happened on the main thread and commit them over to the compositor thread. And then the compositor still has more work to do, eventually has to submit that to the GPU process, and eventually those pixels have to get to the screen. And then finally, when all of that work happens, we sort of pipe along presentation feedback. So there are just simple data structures along this whole rendering pipeline, which took a lot of time to get right, kind of improving it and making it better. But in this case, I think your question might be, how do you know that the pixels in that update were directly feedback for that response? And in this case, we don't. This is the first opportunity that it could have provided feedback, and that is what INP is measuring. So we don't get confused by video playing alongside. It will be the frame that had the result of the event handlers running, as well as all of the default browser styling work.
If you click on a link and it changes from blue to purple, or if you type in a text box and you get the character appearing inside the text box, this is sort of default browser styling. Those effects will be included in that first visual feedback. But we can't guarantee that all of the JavaScript that ran was a reply to that interaction.
 
Dan_Shappir:
So, I would actually ask about, or use, an example of an issue that I would encounter with the FID metric, and I wonder if you could describe what would happen with INP in this context. So, one thing that I actually saw with some JavaScript-heavy pages was that they took so long to download the JavaScript before hydration that hydration actually not only ran long, but also kind of ran late.
 
Michal_Mocny:
Yes.
 
Dan_Shappir:
And what would happen is that people would actually interact with the page before the hydration even started, not while the hydration was running, but before the hydration actually started. So thanks to SSR, you actually had content on the page. For those of you who are listening in, SSR means server-side rendering, which means JavaScript running on the backend, React or whatever, or some static site generator, provided the initial HTML. So the page was visible, with content, from the get-go fairly quickly, but it's not interactive until a whole bunch of JavaScript, let's say React or Vue or whatever, is actually downloaded and executed on the client side, and that could take some time. So the user interacted with that visible content before the JavaScript even finished downloading. That means before it actually even started running, which means that the CPU was actually totally free. So the FID would come out as being excellent, because you clicked a button, let's say, or a link that was not wired to anything, and it responded instantly by doing nothing.
 
Michal_Mocny:
Yep.
 
Dan_Shappir:
So there would be no visual change, nothing happening on the page itself. Like, I don't know, sometimes you might click an area in the browser window just to make sure that the focus is on that window and not some other application running on your desktop, and that would count as an excellent FID even though nothing actually happened. So, my question is, in those scenarios where you click something and nothing actually happens, what would INP indicate? By the way, should I say INP or "imp"?
 
Michal_Mocny:
INP, INP,
 
Dan_Shappir:
Imp, okay.
 
Michal_Mocny:
Let's not go with "imp." So, great question. I think that first we can focus on the metric definition, and then I want to add to that scenario a little bit. So for the metric INP, in the case that you interact so quickly that the JavaScript hasn't downloaded, handlers haven't been registered, nothing happens, the component doesn't feel interactive, and I'm still assuming the scenario you set up here where rendering is yielding and there's nothing else happening on the page, in that case we do not judge the quality of the interaction. There's no way we try to assess whether this was useful feedback or whether this element was interactive. That would be too difficult to assess, if that makes sense: the UX experience of whether this was working or not working.
 
Aj:
Well, couldn't you?
 
Dan_Shappir:
So just to verify that I understand, sorry, I just want to verify that I understand what you're saying. Let's say, for example, I'll give a concrete example. You have a mobile application with a hamburger menu, and it's actually an example of something that used to exist within Wix, where I used to work. The hamburger menu that was used by the mobile view of many websites was originally dependent on hydration. And the hydration used to take a bunch of time. And as a result, if a visitor clicked the hamburger menu fairly quickly, nothing happened, because the hydration code hadn't even finished downloading yet. And so that's the worst possible user experience. You clicked the hamburger menu, no response whatsoever. Not a delayed response, no response, nothing. But the FID was excellent, because the thread was free, it was still waiting for the resources to download, and hence responded really quickly. If I understand correctly, what you're saying is that that particular INP measurement would still be good, even though nothing was rendered, because if something had been scheduled to render, it would have rendered really quickly. The next animation frame would have happened really, really quickly.
 
Michal_Mocny:
Precisely. So the second part to my answer is that that one interaction would have been measured as performant. We're not judging the value to the user of that interaction. This is just whether or not we know for sure performance was affected, whether rendering could not have proceeded within 200 milliseconds, which is the threshold that we're using for "good" for INP. So it would have scored well in that one case. But we continue to measure every single interaction with the page. And so at some point in the future, if there was another interaction that took longer because it did more, it would overwrite sort of those things. So in a long-lived session where the user does eventually wait for these things to load, INP is not affected in the same way that FID is.
 
Dan_Shappir:
So basically, you're saying that because FID is like this one shot, this page would get a pretty good FID, or this session would get a good FID, because of that first interaction. But with INP, maybe that first measurement would have been great, but eventually all that JavaScript would arrive, which would likely result in a lengthy hydration. And then when the user tries to interact with the page then, probably rage clicking because nothing happens with that menu,
 
Michal_Mocny:
Trying to get it to work, right.
 
Dan_Shappir:
then you would measure and report the higher INP value. That makes total sense.
 
Michal_Mocny:
There's more to this answer as well. So the scenario you describe, I think, is more likely to show up with INP, because frameworks nowadays have been optimizing for FID for so long. You presented a scenario where it's before hydration even begins, but we're busy loading. We're blocking rendering, perhaps. We're getting in the way of the ability to even schedule a rendering task. But it turns out that the input delay is not affected. Website authors have optimized for that metric to make sure that the event handler does get dispatched quickly. So FID would score well. But even in this case, if you're heavily loading early on, INP might still not do well. Even though the interaction isn't doing what you expect it to do, it might also not be performing well. So this is worth testing and trying out. But perhaps the quote-unquote crux of your question, Dan, is: is it worth measuring an interaction that is broken? And I think it's difficult to judge. If you have a buy button on a website and you click that buy button and it performs well, it gives you feedback, the JavaScript has loaded, you get something, but the item doesn't get purchased and it doesn't arrive, the user would consider the site broken. But that is an impossible problem for a performance metric to judge; it's not our role in this case. There is a trend in the framework space to be more accessible, to have more progressive enhancement. So on a framework like Remix, all your links, all your buttons will use forms. And before hydration takes over to turn a server-rendered site into a client site, a site with interactivity and JavaScript, it will continue to work. That button will still provide an accessible action that might just do a server post or some useful fallback. But it is difficult to judge, you know, when the JavaScript handlers haven't been registered, does this interaction do what is reasonable or not? There's no clear answer. And so we measure the performance only.
 
Dan_Shappir:
So, in this context, I have to mention Noam Rosenthal, who recently joined you guys at Google. He actually gave a talk at the recent React Next conference in Israel, and he talked about how to overcome the problem of hydration and responding quickly to user input, how interactions basically could be properly handled before hydration actually even happens, like you mentioned, by basically either letting the browser do it, or by using much lighter-weight JavaScript and CSS handlers that you kind of custom create to execute before the hydration even starts. And finally, this is actually based on a real-world case (he also worked at Wix), where, going back to that specific example that I gave with the hamburger menu, he, I think it was him, but it might have been somebody else, but basically what they ended up doing was handling that particular UI element using a web component whose JavaScript was totally independent of the hydration and the larger framework, so it could download and execute much, much earlier and be able to actually display the menu when the user clicked it. So that's an example of that. I totally agree that the approach that Remix uses, and I don't think they're the only ones, of falling back to the built-in functionality, be it buttons or forms or links, to let them handle the interaction, even if it means that instead of working as an SPA, you work as an MPA for that initial interaction, I think that's definitely the way to go.
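The pattern Dan describes, a self-contained web component whose script ships separately from the framework bundle, might look roughly like this. It's a sketch, and the element name and the "open" class hook are hypothetical:

```javascript
// Hypothetical stand-alone hamburger menu. Because this small script is
// independent of the framework bundle, it can load and register early,
// so the menu responds to clicks long before hydration completes.
function toggleMenuState(state) {
  // Pure open/closed logic, kept separate so it is easy to test.
  return { open: !state.open };
}

// Browser-only registration (guarded so the sketch also loads under Node):
if (typeof HTMLElement !== "undefined" && typeof customElements !== "undefined") {
  class HamburgerMenu extends HTMLElement {
    connectedCallback() {
      this.state = { open: false };
      this.addEventListener("click", () => {
        this.state = toggleMenuState(this.state);
        // "open" is a hypothetical class the page's CSS would style.
        this.classList.toggle("open", this.state.open);
      });
    }
  }
  customElements.define("hamburger-menu", HamburgerMenu);
}
```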
 
Michal_Mocny:
What a segue, Dan.
 
Dan_Shappir:
Hahaha
 
Charles_Wood:
Mm-hmm.
 
Michal_Mocny:
So that would be an excellent segue to talk about MPA and SPA. But we are only scratching the surface of INP. And I wonder if we should just exhaust that one first.
 
Dan_Shappir:
Yeah, for sure.
 
Michal_Mocny:
So.
 
Dan_Shappir:
So you mentioned that you measure throughout the entire session, not just the first interaction. Then what is it that you report? As I recall, when you were experimenting with it, you were looking at the possibility of reporting the average or the median or the largest. What did you end up choosing?
 
Michal_Mocny:
Yes. So first, on the topic: this is a full page lifecycle metric. That's just kind of the phrasing we tend to use, or like a full runtime metric, I think we use different terms. But much like CLS, it is not just measured during load; it is measured from load until page unload. So every single interaction with the page matters to users. There are a bunch of different ways to do those measurements. We evaluated whether, you know, is there a budget? Like, do users not notice responsiveness under a certain amount, so that only, kind of like long tasks, things above a certain threshold matter? We evaluated, should we take the sum of all of the time they're spending waiting above their budget? Should we look at the average? And everything has flaws. But we found that one of the simplest definitions, which I'll give in a sec, worked well. And it also had other properties, properties that are useful for developers. The simpler you can get in your definition, while it still passes the tests, does well as a metric, and represents the user experience effectively, the better; it's hard enough to define these things on their own. And so what we ended up settling with is a single interaction. So one interaction, however long you had to wait for that interaction, that's your score: the duration from when you interacted until pixels appeared on screen. And I use the word duration the way the event timing API uses the word duration. And what we look at is the worst interaction with the page, with a little bit of a caveat there, and I'll describe that. The one worst experience that users had within a single page load session is how they kind of tend to remember the site, and so that's what we're focusing on. Now, in practice, what we ended up settling with is the 98th percentile interaction. So we pick the worst one, unless you've interacted with the site more than 50 times, which is super rare. Very few users interact with a site more than 50 times.
But if you're there, if you're, you know, the Gmails of the world or the YouTubes of the world, and you interact hundreds of times, you leave that tab open, long-lived, you will eventually have all of the stars align against you. Garbage collection is happening, the user had the tab backgrounded, power saving mode enabled, and so on. And so this interaction could be a blip in the system. If a user is truly spending that much time on your site and interacting that much, maybe they really love the site and the performance is doing fairly well. And so we judge a little bit down from the single worst outlier, if that makes sense. But that detail is only important if you're really diving into the weeds. It's easy enough to just think of the single worst interaction for, like, 99% of use cases.
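Michal's "worst interaction, minus one outlier per 50 interactions" rule can be approximated in a few lines. This is a sketch of the idea, not Chrome's exact implementation:

```javascript
// Approximate the INP candidate: the worst interaction duration, except
// that one worst outlier is skipped for every 50 interactions recorded,
// which is roughly a 98th-percentile pick rather than the absolute max.
function estimateINP(durations) {
  if (durations.length === 0) return undefined;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

With fewer than 50 interactions this just returns the single worst duration; with, say, 60 interactions, the one worst outlier is discarded.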
 
Dan_Shappir:
That's actually really interesting, because what I found is that Core Web Vitals, while they're an awesome set of metrics for, let's call it, e-commerce websites, and obviously landing pages and blogs and whatnot, are not necessarily such great metrics for dashboards. Because with dashboards, it's usually not so much about the initial load time; it's more about your experience over time. Sometimes I joke that people open a dashboard and they go get a cup of coffee. They're fine with it taking a while to load, but once it's loaded, they want it to be responsive. And I think that INP is actually potentially a great metric in this context, based on your description. And it's a metric that I'll certainly try to look at, because at Next, where I work, we actually have a bunch of these dashboard-style web applications. And it seems that INP could be a great metric in this context, especially if it works well in a single-page application type scenario, which I guess we'll get to soon. Definitely, that's great. I do, on the other hand, have to mention that while we were talking, I was looking at the dashboard that Rick Viscomi, who has also been on this show, has created, which I highly recommend for everybody to look at. And I was comparing the various different frameworks. And by the way, I even recently wrote an article for Smashing Magazine looking at the Core Web Vitals scores of the various JavaScript frameworks. And all the JavaScript frameworks score phenomenally well for FID, but are fairly abysmal for INP. For FID, they're all in the approximately 95% range, so it's like everybody's acing it. For INP, they're all at around 50%, all of them. Even the quote-unquote fast ones, like, I don't know, Svelte or Preact. So that means that, I don't know, when you plan to do the switchover, and I don't know if it's decided already that you will be doing the switchover, I'm expecting to see the all-green Core Web Vitals pass ratio drop like a rock when that happens, especially for all the websites that are using JavaScript frameworks.
 
Charles_Wood:
Don't hurt Vue, it'll hurt Steve's feelings.
 
Dan_Shappir:
Hahaha
 
Steve:
Darn right.
 
Annie_Sullivan:
Yeah, we do expect the pass rates on Core Web Vitals to go down. I think what's really interesting, though, is if you look at any individual framework and you look at the distribution of INP scores for sites, you see kind of this bump at good, and then this super duper long tail. And as you said, some frameworks have more of the distribution at good with a shorter long tail, and some have a longer long tail, but overall it doesn't really seem to be necessarily the framework itself that causes the performance problems, but just the fact that people can very easily load a lot of third parties, load a lot of libraries, make their JavaScript bundle larger, and start running more and more stuff. So I think that it's not necessarily a problem with individual frameworks, but more so a problem of just including too many things and trying to do too much.
 
Dan_Shappir:
I wouldn't necessarily give the frameworks a pass. If I look at the "all" metric, which is an aggregate of all websites, it's noticeably better than any framework. And I assume that third parties exist on websites that don't use frameworks, like WordPress sites, for example. So, you know, it looks like frameworks have a problem. At least that's the feeling I have.
 
Michal_Mocny:
So I think there's one pattern in particular that's worth calling out, and until this pattern is addressed, we should hold judgment, I would say. The reason SPAs are so popular is because, I think we all know this, the interactions tend to be faster if you do them right. A good SPA is valuable, and that's why it's a pattern that is adopted. Okay? There's some snickering in the room, I think. If you compare doing a hard navigation, an MPA reload, to another page URL, another route, we would assess that with FCP and LCP, which is in the one-and-a-half to two-and-a-half second range. That is the amount of time we expect users want to target to feel fast and smooth. With INP, we are targeting 200 milliseconds for that interaction until the next paint. So by being on the same document and doing the navigation client side, you are being assessed against a 200 millisecond duration instead of a 2.5 second duration, if that makes sense. So a user might be using an SPA, they might click, it might take half a second to see visual feedback, and they would say, wow, that was a fast load, as compared to a 2.5 second LCP.
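A framework-free sketch of one way an SPA route change can aim for that 200 millisecond feedback budget: paint something cheap immediately, yield so the browser can present a frame, then fill in the heavy content. `renderShell` and `renderDetails` are hypothetical callbacks:

```javascript
// Quick-feedback-first navigation: render a cheap shell synchronously,
// yield to the event loop so the browser can paint, then render the rest.
function yieldToEventLoop() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function navigateWithProgressiveFeedback(renderShell, renderDetails) {
  renderShell(); // immediate visual feedback, e.g. a route skeleton
  await yieldToEventLoop(); // give the browser a chance to present a frame
  renderDetails(); // heavier content fills in afterwards
}
```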
 
Dan_Shappir:
Yeah...
 
Michal_Mocny:
So this pattern, the direction that many frameworks are going towards, and this has been in their documentation for a long time, they've been doing tech talks and conference talks about this for a long time, but it's becoming a first-class citizen big time, is transitions, where you unblock the very first part of feedback. You don't do a giant component re-render in one step and then wait for the browser to render all that layout and style, decode all the images, and then do one giant rendering update, which could take a long time before you see anything change, but then it all pops in nicely. Instead, the direction is an initial critical set of feedback first, and then progressively add the rest. And there are tons of different ways to do this in different frameworks, but it is a direction that we're heading. And so what you see is this coming to routing in the next upcoming versions, and so every single route change will have a progressive visual update. And so INP, I think, will just explode with improvement on those types of sites. That's my feeling.
 
Steve:
Now, correct me if I'm wrong, but I was listening to another well-known tech podcast this past week, and they were talking about transitions being added into the browser. Is that correct?
 
Michal_Mocny:
This is unrelated, but that is an incredibly exciting feature as well. The web transitions API, Jake Archibald's thing, yeah.
 
Yoav_Weiss:
Shared element transitions, if I remember correctly.
 
Michal_Mocny:
Shared element transitions, thank you.
 
Aj:
Wait, what does that mean?
 
Yoav_Weiss:
Essentially, so my take is slightly different than Michal's.
 
Michal_Mocny:
Hehe.
 
Yoav_Weiss:
I think that SPAs are popular for, like, two reasons. One is developer-oriented reasons that I won't go into. But the other one is that, as a platform, we haven't been providing the equivalent: the SPA experience, when done well, is better than MPAs in terms of enabling transitions between one page and another, instead of going from a blank screen and then re-rendering the next page, or the better scenario of not going to a blank screen but still not smoothly transitioning between one page and another. And that, as well as keeping state across different pages, provides basically user experience advantages in using an SPA, when done right, over old-school multi-page applications. And shared element transitions is one feature in a larger set of features that is aiming to close that gap and enable the same experience in a multi-page app. You would be able to have multiple pages, multiple independent HTML pages, but have a smooth user experience when the user is clicking from one to the other. And a lot of the advantages of SPAs in terms of performance can be done by the browser: by the fact that you're loading significantly less JavaScript, the browser can, for example, use code caches and other ways of just, you know, making sure that that transition from one page to the other is fast.
 
Dan_Shappir:
I have to say, only half jokingly, that I identify modern web applications by the fact that they have multiple spinners on the screen at the same time.
 
 
Charles_Wood:
Ha ha ha
 
Dan_Shappir:
So yes, you click and it responds quickly by showing you a whole bunch of spinners, and then gradually, yeah, and then gradually every spinner disappears to be replaced with a bit of content. But yeah, or not fun.
 
Charles_Wood:
That's a parallel upgrade. And not found.
 
Aj:
A bit of placeholder, which is later replaced by content.
 
Dan_Shappir:
Yeah. But yeah, I'm totally with you, Yoav, on that. First of all, it's worthwhile noting that the experience of transitioning between pages in a multi-page application is already way better than people recall it being, like, I don't know, 10 years ago, when you had that flash of white. Because now the previous screen is retained much longer, and you can create some sort of instant reaction while the page remains visible, until it's replaced with the next page. But what gets really interesting with the transitions is that you can use CSS rules to animate from the previous page to the next page. And I know for a fact that a lot of websites chose SPAs originally, again putting aside developer reasons and looking from the actual user experience perspective, just for the transitions, just to be able to do more sophisticated transitions like, you know, people might expect in more modern type applications, especially on mobile. And now you'll be able to do that with CSS between distinct and discrete pages in a multi-page application. And yeah, it will be really, really interesting to see how that impacts your measurements of Core Web Vitals.
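The progressive-enhancement shape for the transition feature the panel is describing might look like this. At the time of this conversation the API was still a proposal, so the `startViewTransition` name reflects the proposal's shape and may differ from what browsers ship; the document object is passed in here so the fallback path is easy to exercise:

```javascript
// Apply a DOM update, animating it with the (proposed) view transition
// API when available, and falling back to a plain update otherwise.
function updateWithTransition(doc, applyUpdate) {
  if (typeof doc.startViewTransition === "function") {
    // The browser animates between the before and after states.
    return doc.startViewTransition(applyUpdate);
  }
  applyUpdate(); // fallback: same end state, no animation
  return null;
}
```

In a real page this would be called as `updateWithTransition(document, () => swapRouteContent())`, with CSS describing the animation; `swapRouteContent` is a hypothetical helper.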
 
Michal_Mocny:
Absolutely. Well, I will say I'm happy that Yoav and I can have reasonable disagreements and argue over a beer as to which approach is best. I am in team JavaScript. I have been developing client-side rendered sites. And I'm super excited about transitional apps.
 
Charles_Wood:
He just froze.
 
Aj:
We lost you.
 
Dan_Shappir:
Yeah, I think we know who the podcast software agrees with in this case.
 
Charles_Wood:
Right?
 
Yoav_Weiss:
Yeah.
 
Michal_Mocny:
Let's see, is it better?
 
Charles_Wood:
I can pause him too.
 
Yoav_Weiss:
Yeah.
 
Michal_Mocny:
Is it better, is it better?
 
Dan_Shappir:
No, you're back.
 
Charles_Wood:
Yeah, you're back.
 
Yoav_Weiss:
Yeah.
 
Michal_Mocny:
Great, there goes all the enthusiasm.
 
Charles_Wood:
Ha ha ha!
 
Dan_Shappir:
Ha ha!
 
Aj:
So you kick back, having a beer, and then.
 
Michal_Mocny:
Okay, let me take that again. I was gonna say that. I'm still seeing AJ's internet is weak. I don't know. Am I out again?
 
Dan_Shappir:
No?
 
Charles_Wood:
You're weak. Nope, you're good.
 
Aj:
My internet is strong!
 
Dan_Shappir:
AJ, you've got guns!
 
Steve:
He does? Where?
 
Aj:
HWAH! HWAH!
 
Charles_Wood:
This is America.
 
Aj:
MWAH! MWAH! HWAH!
 
Charles_Wood:
Anyway.
 
Aj:
I was-
 
Steve:
I would throw in a rim shot there, Chuck, but I don't have that right now.
 
Charles_Wood:
Okay, now it just got weird.
 
Michal_Mocny:
I don't know. Riverside UIs all over the place.
 
Charles_Wood:
Oh right, cause you're not the...
 
Dan_Shappir:
Yeah, well, we can hear you at least, so keep on going.
 
Michal_Mocny:
OK, yeah, so what I was going to say is I'm quite happy that Yoav and I can have different perspectives on this problem. I would consider myself team JavaScript; I've done a lot of client-side rendered sites, and I'm incredibly enthusiastic and interested in the direction of transitional apps. There's also a bunch of very exciting stuff happening in the JavaScript space with edge functionality. The cool thing about Core Web Vitals and our program in general is that we are entirely agnostic about how a site is built. We focus on what the user sees, what the user experiences, and it doesn't matter how you get there. We also focus on budgets. It's not necessarily a race to the bottom of who can get 10 milliseconds better on this or that. If it is good enough, you might choose to start optimizing other things, like your server costs or whatever it is. As long as the user experience is sound, and Core Web Vitals is just one way to help you draw attention and focus to get there, we are very happy. And we're glad to see improvements across the board with so much focus here. So yeah. It's
 
Dan_Shappir:
Yeah.
 
Michal_Mocny:
fun to speculate. It's fun to argue. But at the end of the day, there's room for every approach, as long as it does what's reasonable for the user.
 
Dan_Shappir:
I said that to Annie last time, you know, when she was our guest in that previous episode talking about the history of Core Vitals, that one of the things that I love and appreciate about Core Vitals is how user-focused they are. I mean, obviously, no metrics can be perfect, and especially when you're only using three and you're trying to keep them as simple and understandable as possible. But it's really great because in the past, we've had metrics that were less focused on actual user experience.
 
Charles_Wood:
Mm-hmm.
 
Dan_Shappir:
So in that regard, Core Web Vitals are great, but it's also still worth saying that if you're optimizing Core Web Vitals without actually improving your user experience, or potentially even degrading your user experience, and I've seen cases of that, then you're certainly doing more harm than good. Especially given the fact that Core Web Vitals are kind of tied to SEO, I've seen situations where people try to quote unquote cheat the metrics, or cheat their customers into thinking they're improving their metrics, if it's consultants or whatnot. And it will get you nowhere, because even if you're somehow able to pull a fast one on Google search and maybe get slightly more traffic, you'll then lose that to bounce rate and much more if your user experience gets worse. It's always, at the end of the day, about the quality of the user experience.
 
Yoav_Weiss:
Yeah, and there's the bit that I like most, or worst, depending on your point of view. Colin Bendell from Shopify had this tweet about paid plugins, plugins that people install, that cheat. Basically, when you're being tested in Lighthouse, they document.write the entire page into nothing. So you basically get a blank page in Lighthouse, which gets a good Lighthouse score, which cheats no one.
 
Yoav_Weiss:
This doesn't help you in any way other than cheating yourself. And people pay for those kind of plugins.
 
Dan_Shappir:
You don't even need that. Just make your server that much slower and you'll time out the Lighthouse score and you'll get a perfect result. So basically when you're running Lighthouse, especially after purchasing a performance plugin, look at the screenshots. If what you see is blank, then get your money back. That's
 
Michal_Mocny:
Yep.
 
Dan_Shappir:
all I can say. But definitely, definitely. So going back to INP, is that a done deal? It's experimental now. You're already collecting information into CrUX. As I mentioned, you can already see the data in the report. So is it just a question of time until you replace FID with INP? Or are you still tweaking things? Or is it still in discussions? Where is it?
 
Michal_Mocny:
So it is not yet a Core Web Vital. We understand the limitations of FID, and this is the metric we have the most hope of replacing FID with. We are still working through various kinks and feedback, and so there is nothing done-deal about it. There are several smaller open questions about how we want to define exactly which interactions, and exactly which parts of each interaction, count or don't count. Some small details might still change in order to make this the best version of the metric. But the bigger picture, that we want to measure the initial feedback by focusing on the time from interaction until that next paint, I think has stood enough of a test of time and gotten enough positive feedback. It has so many positives, especially as an improvement over FID, that I would be surprised if there was a last-minute giant change in that department. Now, in terms of the timeline of these things, I think there is at least a six-month lead time between when any change is announced and when it takes effect. And we haven't even made that announcement. So there is at least six months from now, and probably more. I don't know, Annie, if you have more details on that part.
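The measurement Michal describes, from the interaction's hardware timestamp to the next paint, maps onto the Event Timing API that Chrome already exposes. A rough sketch of an INP-style computation follows; note this is a simplification, not the official definition — the real metric groups entries by `interactionId` and the "skip one worst entry per 50 interactions" rule here only approximates Chrome's high-percentile logic:

```javascript
// Pure helper: pick an INP estimate from a list of interaction durations (ms).
function estimateInp(durations) {
  if (durations.length === 0) return 0;
  // Approximation: ignore one of the worst entries for every 50 interactions,
  // so a single outlier on a busy page doesn't dominate the score.
  const sorted = [...durations].sort((a, b) => a - b);
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1);
  return sorted[sorted.length - 1 - skip];
}

// Browser wiring (skipped outside the browser): collect interaction
// durations via the Event Timing API.
if (typeof PerformanceObserver !== 'undefined') {
  const durations = [];
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.interactionId) durations.push(entry.duration);
    }
    console.log('INP estimate:', estimateInp(durations));
  }).observe({ type: 'event', buffered: true, durationThreshold: 16 });
}
```

In practice the Chrome team's `web-vitals` library implements the real grouping and percentile logic; the sketch only shows the shape of the computation.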
 
Annie_Sullivan:
Yeah, so we haven't made a decision over whether to include it as a search ranking signal. But again, as with any new metric, metric deprecation, or change in threshold, there will be some sort of announcement, and then a six-month timeline before it becomes active in search.
 
Dan_Shappir:
I will say this, going back to the UX points that we made, and the jokes that I previously made about spinners: you can improve INP without improving UX, or even while potentially degrading it, if all you're doing is drawing some pixels somewhere quickly and then taking your time to actually fulfill the request. Just because you put in a mechanism to create some sort of quick feedback, you're not necessarily doing good, let's put it this way. And I think also, I mentioned Alex Russell before, I know that he raised a potential issue about the fact that if you're typing and you just have a 199 millisecond delay all the time, you're still better than
 
Michal_Mocny:
Yeah.
 
Dan_Shappir:
the 200 millisecond threshold, but it's still a pretty poor user experience. Now, my reaction to him is, given the results that I'm seeing with pages already, given the initial data that you're collecting, I wish that were our problem. But yeah, I don't think that even getting a good INP is a guarantee of good UX. I do think that it's already a significant improvement over FID, which has become effectively a meaningless metric because everybody gets such a great score.
 
Annie_Sullivan:
Yeah, I think we always need to continually improve the metrics, right? It's not like, okay, we're gonna introduce INP and then we'll be done once and for all. I think it's a clear next step, where obviously there's another step after that in measuring asynchronous interactions and understanding those, which is more complicated and harder to understand. But I think INP is a step in the right direction. One of the things that we've been seeing a lot, in reports where we can get anonymized traces from users, which look kind of like a DevTools timeline, is that somebody will interact, and then it's so slow, nothing happens, and they interact again. So for your first point about, OK, you have this quick update, but then the task isn't done because it's asynchronous: if the user is not seeing that the task is happening, they'll click again, so INP will still reflect that, at least in most cases.
 
Michal_Mocny:
Yeah, I think.
 
Dan_Shappir:
I do want to push us to the discussion about single page applications because we're starting to run long in this episode.
 
Charles_Wood:
Yes, we are.
 
Dan_Shappir:
So maybe we'll talk about that. So quickly, why are single page applications a problem, especially in the context of Core Web Vitals as they currently stand?
 
Yoav_Weiss:
So I can probably take that. Let's start with the first version of CLS, of Cumulative Layout Shift. Historically, it was the shift accumulated throughout the entire lifetime of the page. That was the first version of CLS, before it was improved to be a more balanced metric. And that resulted in long-lived pages getting significantly worse scores. If you have, let's say, one small layout shift per minute, on a long-lived page that would accumulate into a very high score, whereas across a lot of short-lived pages it would not. So that created the initial tension between SPAs, whose pages are typically long-lived, and MPAs, whose pages are typically short-lived. Then, as part of the full page lifecycle metrics effort, CLS was improved, and INP is also modeled similarly, where we don't just accumulate scores over time, but pick the worst experience throughout the lifetime of the page. Still, for SPAs, the worst score throughout a very long lifetime will potentially be worse than the worst score of the very short lifetimes that MPAs typically have. So there's still some tension there. And there's also...
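The session-window behavior Yoav describes for the improved CLS can be sketched as a pure function over `layout-shift` entries. The five-second window cap and one-second gap match the published CLS definition, but this is a simplified sketch, not Chrome's implementation:

```javascript
// Group layout shifts into "session windows" (at most 5 s long, ended by a
// 1 s gap with no shifts) and report the worst window's total, instead of
// summing every shift over the whole page lifetime.
function sessionWindowCls(entries) {
  let maxScore = 0;
  let winScore = 0;
  let winStart = 0;
  let prevTime = 0;
  for (const e of entries) {
    if (e.hadRecentInput) continue; // shifts right after user input don't count
    const newWindow =
      winScore > 0 &&
      (e.startTime - prevTime > 1000 || e.startTime - winStart > 5000);
    if (newWindow || winScore === 0) {
      winScore = 0;
      winStart = e.startTime;
    }
    winScore += e.value;
    prevTime = e.startTime;
    maxScore = Math.max(maxScore, winScore);
  }
  return maxScore;
}

// Browser wiring (skipped outside the browser):
if (typeof PerformanceObserver !== 'undefined') {
  const shifts = [];
  new PerformanceObserver((list) => {
    shifts.push(...list.getEntries());
    console.log('CLS:', sessionWindowCls(shifts));
  }).observe({ type: 'layout-shift', buffered: true });
}
```

With this windowing, one small shift per minute yields three separate windows and a low score, exactly the long-lived-page fairness fix being described.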
 
Dan_Shappir:
Also, if I may interrupt: MPAs, because they push computation to the server side, could potentially get really great INP scores in scenarios where single page applications would be severely penalized. Like, if you click something to transition between pages in a multi-page application, because nothing literally blocks the transition, the INP score would be great, even though from the UX perspective nothing happens for a long time; just the spinner in the browser's toolbar starts rotating. Whereas with a single page application, if you show nothing while you run a whole bunch of JavaScript to handle that, because you do the processing client-side, you would potentially get a really bad score, even though the UX would appear essentially identical.
 
Yoav_Weiss:
So first of all, at least from my perspective, the browser's loading indicator is an important signal to the user, and one that almost all users recognize as meaning something is happening: the browser is thinking about it and will give you a reply shortly. So I don't think we should discount the browser's signal that the page is being loaded.
 
Dan_Shappir:
Point taken.
 
Yoav_Weiss:
Yeah, but at the same time, yes, like Michal said earlier, for MPAs we have FCP targets of a second and a half, whereas responsiveness targets are 200 milliseconds. In order for us to expose the same levels of data for SPAs, to start recounting LCP after SPA navigations, to provide the same kind of thresholds, and to provide the same accountability for soft navigation routes, versus just assigning and attributing everything to the original landing page URL, we need to be able to recognize soft navigations.
 
Dan_Shappir:
Can you define soft navigations for listeners?
 
Yoav_Weiss:
So essentially, navigations that are being done by single page apps where the URL is modified and the content is modified as a response to a user click.
 
Dan_Shappir:
So essentially, it's a navigation that's handled by a client-side router, like the ones that are built into most frameworks, rather than by the browser itself dealing with the server.
 
Yoav_Weiss:
Exactly. And because it's being done by JavaScript and not by the browser, historically it is not something that the browser knows about, and therefore it cannot attribute metrics to it. And essentially, I'm working on changing that.
 
Dan_Shappir:
So what, you're going to basically look at the interaction, then notice that the URL has changed, and then look at the next paint after that, something like that.
 
Yoav_Weiss:
Something along those lines. So basically, in order to do that, I had to build some missing infrastructure to be able to tie the user click to URL changes and DOM changes that happen after it. Because in many of the frameworks, that work is done asynchronously; even with the new navigation API, that work happens in different tasks or microtasks. We need to be able to tie all that back to that user interaction in order to have reasonable heuristics that don't just assume any DOM modification is related. I did some work to create that infrastructure for task attribution, being able to say this task is a descendant of this other task. And now I'm building soft navigation heuristics on top of that: essentially, see that a user click has happened, then see that DOM modifications and a URL change happened as a result of it, and then say, okay, a soft navigation has happened, and now we can start doing things as a result of that.
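A page script can only approximate the heuristic Yoav describes, since the task-attribution infrastructure he mentions is internal to the browser. Still, a toy version — click, then URL change, then DOM change, tied together by a timing window rather than real task attribution — might look like this (the five-second window is an arbitrary assumption for the sketch):

```javascript
// Shared state: a soft navigation is reported when, shortly after a user
// click, we observe both a URL change and a DOM modification.
const state = { lastClick: 0, urlChanged: false, domChanged: false };

function maybeReportSoftNav(now) {
  // Toy heuristic: both signals within 5 s of the last click count as one
  // soft navigation; flags are reset so it fires once per navigation.
  if (state.urlChanged && state.domChanged && now - state.lastClick < 5000) {
    state.urlChanged = state.domChanged = false;
    return true;
  }
  return false;
}

// Browser wiring (skipped outside the browser):
if (typeof window !== 'undefined') {
  addEventListener('click', () => { state.lastClick = performance.now(); });

  // Patch history.pushState to notice client-side router URL changes.
  const origPushState = history.pushState.bind(history);
  history.pushState = (...args) => {
    origPushState(...args);
    state.urlChanged = true;
    if (maybeReportSoftNav(performance.now())) console.log('soft navigation');
  };

  new MutationObserver(() => {
    state.domChanged = true;
    if (maybeReportSoftNav(performance.now())) console.log('soft navigation');
  }).observe(document.documentElement, { childList: true, subtree: true });
}
```

Unlike the browser-internal approach, this cannot tell whether the DOM change was actually *caused* by the click, which is exactly the attribution gap the task-attribution work is meant to close.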
 
Dan_Shappir:
So to put it into practical terms: currently, like you said, when there's a soft navigation, it's wholly ignored in the context of Core Web Vitals and other web performance metrics. So FCP is not calculated again, LCP is not calculated again, CLS is not reset, it keeps on going. So your end goal is to actually treat soft navigations like they were hard navigations?
 
Yoav_Weiss:
Yes.
 
Dan_Shappir:
So if I cause a single page application to use the local JavaScript router to change the URL, that would calculate a new FCP and a new LCP and reset the CLS. That's the end goal.
 
Yoav_Weiss:
That is the end goal. It is a hard and long stretch, but yeah, that is the end goal. Basically, as Michal said, we want to be tech agnostic, and client-side navigations versus server-side navigations should be treated the same, as long as the user experience is good on both. And we want to be able to measure those user experiences similarly in both tech stacks.
 
Dan_Shappir:
It'll be really interesting to see the data once you start gathering it, because historically a lot of the framework advocates have been complaining that Core Web Vitals shortchange single-page applications: that single-page applications generally pay a higher price up front for a better experience down the line. And they will now need to show that that's actually true, that they get better Core Web Vitals for those soft navigations, given the higher cost of that initial view.
 
Yoav_Weiss:
That's the goal, yeah. And once we know how long soft navigations take, then we can start to compare. Maybe they are better, maybe they are a better way of building websites; or maybe they're slower and we need to address that; or maybe some of them are great and some of them are bad, and we can split that bimodal distribution, figure out which ones have bad characteristics, and how we can fix them. Essentially, until we have that kind of measurement, at least from my perspective, we are shooting in the dark, and we should try to stop doing that.
 
Dan_Shappir:
So basically, your goal for this stage is to come up with some sort of LCP-star metric, in addition to the current LCP metric, that will also measure that. And then you'll be able to graph one versus the other based on real user session data that's collected into CrUX, something like that.
 
Yoav_Weiss:
That's a slightly more advanced goal. Initially, I just want to test that my heuristic is roughly accurate. This is something I've started collecting data on internally, but at the same time, I'm still playing with the heuristic in order to fix obvious bugs that I've already found. The next step after that is to run an origin trial with RUM folks, who collect performance metrics and have awareness of their SPA soft navigations, and basically get the folks that are trialing to tell me where I'm wrong, or to figure out where they're wrong: compare my heuristic to what they're seeing, and try to find, for example, pages that are using frameworks but have zero soft navigations according to my heuristics. That would mean that my heuristic is wrong and I need to fix it in those specific scenarios. So that's the first step. And then the step after that is some sort of an experimental internal metric, as well as maybe an experimental LCP value to expose to RUM providers. And then we can see what it would mean for us to shift from one to the other.
 
Dan_Shappir:
This is really cool. I really love
 
Charles_Wood:
Mm-hmm.
 
Dan_Shappir:
the process that you're describing and all the hard work that goes into these metrics. So this is not just like a mandate from heaven or something like that. These are all the results of blood, sweat and tears, trying to get all these metrics to actually correlate with real world user experiences. This is awesome.
 
Steve:
I thought coming from these people, it was the same as heaven. Did I miss that?
 
Dan_Shappir:
Hahaha
 
Charles_Wood:
Hahaha
 
Dan_Shappir:
So when do you think, you know, is there any sort of expectation about when you'll start doing all that work? All that gathering of information?
 
Yoav_Weiss:
Yeah, it's funny that you used the word expectation.
 
Dan_Shappir:
No pressure, no pressure.
 
Yoav_Weiss:
No, no, it's all good. Basically, I'm hoping to go to origin trial for the basics, like a very early origin trial with a made-up API shape, just to test the heuristic, very soon. Other than that, we'll see how it goes. It all depends on whether the heuristic matches what is out there in the field or not.
 
Dan_Shappir:
Is there any association between what you're doing and the new history APIs, or is this independent of those APIs?
 
Yoav_Weiss:
It is independent for now. I'm hoping that I can come up with a heuristic that will work for existing content out there today. It is possible that this will not work out. The History API has some advantages, like being able to mark the end of the session from the developer's perspective. So it is possible that we'll need some sort of a developer opt-in into a well-lit path that will make it easier to measure. If that is the case, that opt-in is very likely to be the navigation API. So I'm still hoping that it won't be a requirement, but it is possible that at some point, life will hit me.
 
Dan_Shappir:
I don't necessarily think that such a requirement would be so bad, because I'm assuming that most frameworks will adopt the new navigation API fairly rapidly. So even if developers don't actually deal with it themselves, they'll get it for free as part of frameworks. I'm guessing. I'm hoping. I guess I'm hoping.
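For context, the developer opt-in being discussed is the Navigation API's `navigate` event, which lets a client-side router intercept navigations and thereby gives the browser an explicit soft-navigation signal instead of a heuristic. A minimal sketch, where `renderRoute` is a hypothetical placeholder for an app's own rendering code:

```javascript
// Decide whether the router should handle a navigate event: only same-origin
// navigations the browser allows us to intercept, and not pure hash changes.
function shouldIntercept(event) {
  return Boolean(event.canIntercept) && !event.hashChange;
}

// Browser wiring (skipped where the Navigation API is unavailable):
if (typeof navigation !== 'undefined') {
  navigation.addEventListener('navigate', (event) => {
    if (!shouldIntercept(event)) return;
    const url = new URL(event.destination.url);
    event.intercept({
      async handler() {
        // renderRoute() is a hypothetical stand-in for the app's route
        // rendering; the intercept() call is what marks this as an
        // in-page navigation the browser knows about.
        await renderRoute(url.pathname);
      },
    });
  });
}
```

A router built this way updates the URL and session history through the browser, which is exactly the "well-lit path" that would make soft navigations measurable without guessing.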
 
Yoav_Weiss:
Those that upgrade, maybe. But yeah, it's always better to cover more content than less content. I'll see if it's a necessity; if it is, then this is the route we'll go with.
 
Dan_Shappir:
Yeah, let's put it this way: websites that don't update probably care less about Core Web Vitals anyway. I mean, if they're not investing in updating their frameworks, then they're probably not investing that much in their websites anyway.
 
Charles_Wood:
Yeah, there is definitely that element of things, right? I mean, even with Core Web Vitals being part of SEO, or influencing SEO, it only matters to the people who care about those particular metrics, right? Where their ranking in Google makes or breaks their business, or they at least care about that kind of thing. And then it's folks on our end who are looking at it and saying, hey, I want this to be performant, I want it to be a good user experience. If you're not focused on either of those things, the user experience or the SEO, yeah, I don't see that this is going to be something you worry about. Similarly, if you're not upgrading your frameworks, then you're going to miss out on some of these optimizations. But I'm glad this is being put out there. I'm glad people are talking about how we measure, because at the end of the day it allows us to provide a better experience for people on the web.
 
Annie_Sullivan:
And I do think, from our perspective, the broader an understanding we can get over as many pages as possible, the more we can learn. Even basic things are really unknown right now. How many SPA transitions are there for every MPA transition? Are frameworks getting better or worse? Maybe we find out that there's some framework out there where the next version has significantly slower SPA transitions than the old version. A broader measurement would be able to surface those kinds of things.
 
Charles_Wood:
Right.
 
Dan_Shappir:
It's my guess, but I'm thinking that we will actually potentially be surprised by how few soft navigations actually happen in real life. I think that a lot of websites are being built these days using frameworks as SPAs, even though they are generally very shallow and there are really relatively few navigations within them. So I wouldn't be surprised at all if the numbers are actually fairly low.
 
Charles_Wood:
It'd definitely be interesting to know it for sure.
 
Dan_Shappir:
Yeah, a future statistic for a future Web Almanac, I guess.
 
Charles_Wood:
Yep.
 
Annie_Sullivan:
Yeah, this is one of the things that actually surprised me the most, working on Core Web Vitals and getting web developer feedback day to day. I realized I was a little bit biased on single-page apps. Like, the ones where you can feel it, right? You're browsing and you're like, oh, this is a single-page app. That was my idea of what a single-page app is. But when we got feedback from a broader range of partners, I realized I was using single-page apps every day and just assuming they were multi-page applications. So I'm really interested, I'm really excited about you all's work in getting some numbers out there, because I feel like I can't predict what they're gonna say.
 
Dan_Shappir:
Yeah, for example, again going back to my previous employer: in the case of Wix, which is built using React, actually all Wix websites are single page applications. And by the way, one of the main motivations was the whole transitions thing, to enable a person who's using the Wix website builder to specify cool transitions between the different pages. It'll be interesting to see what they do with the transitions API, if and when it lands and gains cross-browser support. But yeah, that's a perfect example of using an SPA for a particular reason, even though navigation might be fairly shallow in most cases.
 
Yoav_Weiss:
Yeah, and you mentioned the Web Almanac as the way to expose soft navigation data. I think CrUX would be a more interesting avenue for that, because it basically depends on user behavior. You could have pages with many links that are theoretically deep, but in practice no one ever clicks more than a link or two. So yeah, it would definitely be interesting to also see how we can expose that info in CrUX.
 
Dan_Shappir:
Yeah, I'm all for CrUX. I love CrUX.
 
Charles_Wood:
Cool, we seem to be winding down a little bit. Is there anything else that we need to make sure people know about before we do our picks?
 
Michal_Mocny:
Can I add one more topic on INP?
 
Charles_Wood:
Yeah.
 
Michal_Mocny:
This could be an open-ended one to leave as a challenge to the audience. INP, as I mentioned several times, measures from the moment you interact, the hardware timestamp of the event, until pixels appear on screen. And there's so much focus on JavaScript and the main thread and handler processing, but that is just one of many things that can get in the way of that time. We talk about input delay sometimes, and what gets in the way: other work on the main thread, focusing on long tasks. We talk about the event handlers themselves. But you can also have tasks that are queued up and run before the next rendering task. You can have a requestAnimationFrame callback that takes a long time. You can throw in a lot of DOM updates that require a lot of style and layout, all of that on the main thread. You can also throw work at the browser, CSS and so on, that just takes a long time to render and makes the GPU bog down to rasterize and get pixels out. All of that affects the time it takes to get feedback out. And I think there will be a long process of tooling updates, documentation, and education; even the best of us as experts are constantly stumped and learning in this department. As browsers become more sophisticated in order to improve performance, more of the rendering pipeline becomes a bit of a black box that is hard to understand. And we're going to have to figure those things out collectively as a community. A lot of the questions with INP right now are low-hanging fruit: why is my JavaScript getting in the way, how do you define this, how to think about it. But once you get past those low-hanging problems, you get into bigger problems where even the experts can be stumped: why did this take so long? Why was this so delayed? Why wasn't this frame delivered? We'll get there, and I think that's an open challenge. It'll be an interesting couple of years, or however long it takes, and then we'll drive improvements to these numbers.
But I just wanted to not leave today without at least mentioning that part of the problem, which will be interesting.
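The phases Michal lists show up directly on a `PerformanceEventTiming` entry: `processingStart - startTime` is the input delay, `processingEnd - processingStart` is handler processing, and the remainder of `duration` is roughly the rendering side he describes (queued tasks, rAF callbacks, style/layout, raster). A small sketch of that breakdown — note `duration` is rounded to 8 ms granularity in real entries, so this is an approximation:

```javascript
// Split one Event Timing entry into the three phases of an interaction.
function breakdown(entry) {
  return {
    // Time from the hardware event until the handler started running,
    // typically caused by other long tasks occupying the main thread.
    inputDelay: entry.processingStart - entry.startTime,
    // Time spent running the event handlers themselves.
    processing: entry.processingEnd - entry.processingStart,
    // duration spans startTime to (roughly) the next paint, so what's left
    // is presentation: queued tasks, rAF, style/layout, rasterization.
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// Browser wiring (skipped outside the browser): log the breakdown for
// every interaction slower than 16 ms.
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.interactionId) console.log(entry.name, breakdown(entry));
    }
  }).observe({ type: 'event', durationThreshold: 16 });
}
```

Seeing a large `presentationDelay` with small `processing` is the hint that the time went into the rendering pipeline rather than your JavaScript.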
 
Dan_Shappir:
Yeah, for example, just a concrete example of these sorts of things: I'm currently investigating why certain interactions are taking a really long time in our application, especially on Android devices. And it's wholly unclear, based on the field data, whether the problem is network-related or CPU-related or both. And using RUM data to determine that is very challenging. Now, obviously, I can try to simulate these types of environments and see what happens in lab tests, but I can never know for sure that I'm actually simulating the scenario that most of the people experience, especially as I move to the higher percentiles.
 
Michal_Mocny:
Yes.
 
Dan_Shappir:
So yes, it can be really challenging. Yeah, the browser is an amazingly complicated beast, for sure.
 
Michal_Mocny:
Yeah, go,
 
Yoav_Weiss:
Yeah.
 
Michal_Mocny:
go, go ahead, y'all.
 
Yoav_Weiss:
Michael, no, I want you to talk about user flows.
 
Michal_Mocny:
So it is true that for all full page lifecycle metrics, taking your field results, which come from all users, in all geographies, on all devices, in all sorts of conditions, who might have followed any particular user flow in their long-lived session, can be difficult to replicate. CLS does tend to lean more heavily towards loading-type issues, which are more clear-cut to replicate, but even then we sometimes have trouble replicating field CLS in the lab. INP is an even bigger problem. So there's a whole slew of new issues we're going to have to solve: we have these reports, how do I replicate them locally in the lab? With CrUX, all you get is your scores and your distribution of scores; you don't get insights. But if you collect your own event timings or layout instability using the performance timeline, you can report attribution, and so you can get more insights. You could report a whole session: the user clicked here, then they waited a while, then they clicked here, then this network request came in, and then right around this time, this type of interaction against this node coincided with this long task, and therefore there was a long interaction. You can do things like that, and there are projects there in terms of helping transition from field data to lab reproduction. But what I was referring to is, even if you have a lab repro, even if you can see that this interaction takes a while under these particular conditions that you carefully set up: why? Where did the time go?
 
Charles_Wood:
Mm-hmm.
 
Michal_Mocny:
Even that is a problem. So all of those things are going to be very interesting and exciting. And we on the Chrome team will be trying to evolve the tooling to make that as easy as possible, also through documentation and so on. But that's why, for those numbers that you see now in field data, I'm still excited that by focusing on this problem, we will collectively improve them. And that excites me.
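One way to sketch the session-reporting idea Michal describes: keep a small log of interactions with their targets and durations, and beacon it home when the page is hidden, so slow sessions can later be replayed in the lab. The `/perf-log` endpoint and the exact fields are made-up examples for illustration, not a real API:

```javascript
// In-memory log of notable interactions for this page session.
const log = [];

function record(type, target, startTime, duration) {
  log.push({ type, target, startTime: Math.round(startTime), duration });
}

// Browser wiring (skipped outside the browser):
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const e of list.getEntries()) {
      if (!e.interactionId) continue;
      // e.target is the DOM node interacted with, when still attached.
      record(e.name, e.target ? e.target.tagName : '?', e.startTime, e.duration);
    }
  }).observe({ type: 'event', durationThreshold: 40 });

  // Flush the session log when the tab is hidden (more reliable than unload).
  addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden' && log.length) {
      navigator.sendBeacon('/perf-log', JSON.stringify(log));
    }
  });
}
```

A log like this turns a bare INP score into a story — which element, when in the session, how slow — which is the missing link between a CrUX distribution and a lab reproduction.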
 
Dan_Shappir:
I love the Chrome DevTools. I live in the
 
Charles_Wood:
Yep.
 
Dan_Shappir:
Chrome DevTools.
 
Charles_Wood:
So handy.
 
Yoav_Weiss:
Yeah, and on this week's Web Performance Working Group call, Michal will be expanding more on that concept. So tune in if you're so inclined, and we can probably send the link to the presentation later.
 
Charles_Wood:
Cool, picks. All right, well I'm gonna push this to... Steve, do you have some picks for us?

Steve:
Do I have picks? Do bears walk in the forest? No, I was kidding. So actually all I have this week is, and you'll have to excuse the heavy machinery outside my window here, their timing is impeccable, the usual dad jokes of the week. And sorry, I don't have the rimshot access, Chuck, so you might have to step in for me.
 
Charles_Wood:
I think I've got it.

Steve:
All right, so.
 
Charles_Wood:
Did that go off? It didn't play.
 
Steve:
No, you have to be in live mode, so that might be the problem, down at the bottom. Anyway,
 
Charles_Wood:
I am in live mode anyway.
 
Steve:
yeah, it wasn't working earlier today, so we'll see. Anyway, did you hear about the missionary who went around sharing laxatives with people?
 
Steve:
He started a religious movement. Thank you.
 
Charles_Wood:
It's not working, I'm sorry.
 
Steve:
And that one is good enough anyway. So, I knew somebody, a longtime smoker friend of mine, and he recently gave it up cold turkey. He's doing better, but he's still coughing up feathers.
 
Charles_Wood:
I actually smiled at that one.
 
Steve:
And then finally, I was looking to buy some furniture, and I went to a furniture store, and the furniture salesman told me, "This sofa will seat five people without any problems." I said, "Where the heck am I gonna find five people without any problems?" So those are my picks for the week.

Dan_Shappir:
Yeah.
 
Dan_Shappir:
Chuck, can I go next? Because
 
Charles_Wood:
Yeah.
 
Dan_Shappir:
my browser tells me our security team has decided that my browser will restart itself in approximately five minutes. So
 
Charles_Wood:
Do it.
 
Dan_Shappir:
yeah, I'll be bumped off. OK, so my first pick is actually one of your Google colleagues, Felix Arntz, I think his name is. You know, our topic for today was performance, and he's part of the WordPress performance core team. I'm really happy that WordPress, you might say finally, is putting significant effort into improving performance. Being such a significant part of the web means that any improvements they can make have wide-ranging impact on so many websites and so many users, and I'm really happy about the effort, and I've spoken with him a bit, that they're going to be putting in. They've already started doing stuff, mostly collecting data, I think, for now, but they already have ideas of things that they want to do. And it's definitely a challenging problem, because that ecosystem is so open and so diverse. It'll be interesting to see what they are able to achieve. So that would be my first pick. My second pick is watermelons, or actually fruit in general. This is watermelon season in Israel, and they are amazing. They are just so delicious. They are seedless, they are sweet, I can't get enough of them. You know, if you're really into fruit, then Israel is the place to be. It's a bit expensive, but it's just so delicious. So that would be my second pick. And my third pick is the same pick I pick each and every episode: the ongoing war in Ukraine. It's not winding down; it just keeps getting worse and worse with all the atrocities that we're seeing. So I'll keep picking that. Hopefully eventually I'll be able to stop picking it, but it doesn't seem like that's going to be anytime soon. So those would be my picks for today.
 
Charles_Wood:
Yep, absolutely. And I think we all just keep an eye out for ways that we can help. AJ, what are your picks?
 
Aj:
So I actually don't have a big list this week or anything in detail. There was something that we talked about at the beginning of the show that I know I've got to talk about on creedsofcraftsmanship.com, but now I can't remember what it was. I was gonna pick this thing, and now I don't remember.

Dan_Shappir:
The else keyword, maybe?

Aj:
Were we talking about that on the show? I don't think we were talking about that.

Dan_Shappir:
I think it might be a topic for a future episode.

Aj:
But yeah, sure. We could pick Mat Ryer's "Things I Never Use." So Go is already a very simple language. It is extremely small; it is smaller than JavaScript when JavaScript was small. And Mat Ryer has a talk in which he talks about things that he thinks are excessive and superfluous, and one of them is the else keyword. Using else is a signal, it's kind of a code smell; there are very few situations where else really makes sense.
 
Dan_Shappir:
Just to give context, the alternative, like I said, we'll probably talk about it in a future episode, but small functions with early returns are one possible alternative to else statements.

Aj:
So, sure.

Charles_Wood:
Yeah, you pick America. Okay.
 
Aj:
Yep. I did, but I don't remember. I think I had two things, but anyway. Happy Fourth of July, everybody.

Charles_Wood:
Alright. Oh, you sounded like you had something else.

Aj:
Oh, let's wait a little for that. Yeah,

Dan_Shappir:
America.

Aj:
I do pick America.
 
Michal_Mocny:
As a Canadian, I'm gonna let this one fly, but okay.

Charles_Wood:
Alright. Haha!

Aj:
As a Canadian, you're going to be supportive and cheerful.
 
Dan_Shappir:
I saw this tweet, like, "Happy Treason Day, you ungrateful colonials."
 
Charles_Wood:
Yeah, I had a picture of King George III. Yeah,
 
Aj:
Yes.
 
Charles_Wood:
pretty funny.
 
Dan_Shappir:
Something like that.
 
Charles_Wood:
Yeah, I saw a few others that were a little bit more politically charged that I won't share. I'm gonna throw out a few picks here. I always pick a board game, and this weekend we were down at my mother-in-law's house, and she had this game that she was playing with all the kids, and then she played it with us, with all the adults, and it was a lot of fun. It's a party game, so I have to say, I'm not typically a big fan of party games in the sense that they seem to be kind of light. I don't know, I like the games that really make me think. On BoardGameGeek it has a weight of 1.05, which means it's a really light game. And this one, it's kind of one of those mental word games, which made it fun. It's called Just One. Really simple game. Effectively what you do is you get a stack of 13 cards, you take turns putting one up in front of you, and each card has five words on it, so then you say one, two, three, four, five, right? And then everybody tries to give you a clue, a one-word clue, to get you to guess the word. It's a cooperative game, so if you fail to guess it, the game gets the point, and if you guess it, you get the point. And so, you know, if you get seven of them, you technically won. But if you pick the same one-word clue as somebody else, then both of you have to take your clue out. And you can play with up to seven people. So anyway, it gets kind of tricky, because you don't wanna go with just the obvious answer, because if you do and somebody else does, then your clue is out. And anyway, it was a lot of fun. I enjoyed it, enjoyed playing a party game, go figure. But yeah,
 
Charles_Wood:
So I'm gonna pick that, and then I'm just gonna encourage folks to go check out topendevs.com slash conferences. We've got a bunch of conferences coming up toward the end of the year, and I would love to see you there. The JavaScript conference, I think, is in September, and then we're gonna have conferences for all of the different frameworks, so React, Vue, and Angular, because we have shows for those and know a lot of people that can come and share awesome stuff. So anyway, those are my picks. Annie, do you have some picks for us?
 
Annie_Sullivan:
Yeah, so from the technical end, I wanted to call out this presentation from 2018 by Halvar Flake, who at the time was at Google Project Zero. It's called "Security, Moore's Law, and the Anomaly of Cheap Complexity." He's talking about it from the perspective of security: the fact that machines are getting so capable that it's actually easier from a developer perspective to do something that's more complicated for the machine. I really love the presentation. It's really applicable to performance, and it made me think a lot, especially, you know, in this world of frameworks and everybody including more things. I think it really helped me empathize better with web developers as well. So I thought it was a really cool presentation. That's it for me.

Charles_Wood:
Awesome. Alright, Yoav, what are your picks?
 
Yoav_Weiss:
Sure. So my first pick is just no-meetings week, which relates to the 4th of July and Canada Day. I work mainly with Canadians and Americans, and I actually had time to code these last few days. Between Friday, with the Canadians out, and yesterday, with the Americans out, I managed to land a few meaningful patches. Happy about that. So I highly recommend it.

Dan_Shappir:
Yoav, I recently tweeted a tweet that got some interesting reactions, which basically said if you're a dev and you're in meetings all the time, then you're not a dev.

Charles_Wood:
Ha ha ha

Yoav_Weiss:
I saw that; it captured my feelings. Yeah, so my problem is typically not meetings, it's mostly emails and reviews and making sure things move along. But yeah, writing code on your own and getting stuff done that you wanna get done is fun. And my second pick, I guess, is the return of real-life conferences, which is exciting.
 
Charles_Wood:
Mm-hmm.
 
Yoav_Weiss:
We have TPAC, which is the W3C annual conference that hasn't happened in real life for the last three years. It's very much needed, and it's coming back this September, so I'm super excited about that. And then there's another conference coming up this fall, I think end of October in Amsterdam, called performance.now(), which is always, yeah, the best conference of the year, every year, back when we used to go outside of our own homes. So I'm looking forward to getting back there. Those are my picks.

Charles_Wood:
Awesome. Alright, Michal, what are your picks?
 
Michal_Mocny:
I love Yoav's picks, so I'll just start with that. But I have a lot of hobbies outside of tech, and I figured I could share those. So, we're on a podcast; I like listening to podcasts. One of my hobbies is sailing, and probably my favorite sailing podcast at the moment is by Matt Rutherford. He's got a podcast called Single Handed Sailing. Matt Rutherford is a bit of an extreme person. He started doing transatlantic crossings on his own, single-handed, in a tiny sailboat, and then did the world's first-ever nonstop trip around the Americas, where he spent 300 days on a sailboat, got flipped upside down, and ate space food. Anyway, what an interesting person. And yeah, when I pick up hobbies, I like to do them pretty extreme, but this is off the deep end, and I'm trying to hold off from dropping everything and just moving onto a sailboat myself. We'll see how that goes. The other hobby I have is woodworking, and I figured I'd share a fellow Canadian YouTuber and podcaster, the Samurai Carpenter, which I find very entertaining. And maybe that's enough for now,
 
Charles_Wood:
I'll have to check that out. Yeah.

Michal_Mocny:
But those are my picks.
 
Charles_Wood:
I really get into the woodworking stuff as well. And my daughter wants me to help her fix her desk, which I think is gonna involve rebuilding it.

Michal_Mocny:
That's usually the easiest.

Charles_Wood:
But yeah, I fix all kinds of stuff around here, and I really enjoy making stuff, and it's different, right? It's different making stuff in your garage with your tools than it is making stuff on your computer. And I don't know, just working with my hands, it's funny, because yeah, I'll get frustrated when stuff doesn't quite go the way I want it to, but it's just so relaxing to kind of exercise the other muscle, so.

Michal_Mocny:
Yeah, absolutely.
 
Michal_Mocny:
Yeah, my wife wanted us to buy a desk for my son from IKEA. And I said, why spend 300 bucks when I could build one in two weeks and $600 later?
 
Charles_Wood:
Yeah, right. All right, we'll go ahead and wrap up here. Thank you all for coming. This was awesome.

Michal_Mocny:
Yeah, thank you.

Yoav_Weiss:
Thanks for having us.

Annie_Sullivan:
Yeah, thanks a lot for having us.

Charles_Wood:
All right, till next time, folks. Max out.

Michal_Mocny:
Wait, I have a question. Oh, sorry.