Practical Strategies for Web Optimization: Using Chrome DevTools - JSJ 635

Jack Franklin is a Senior Software Engineer at Google. He joins the panel to dive deep into the world of performance optimization. They explore the sophisticated capabilities of Chrome DevTools, focusing on the performance and performance insights panels. Jack shares invaluable tips on utilizing tools like Lighthouse and the flame chart to prioritize and analyze web performance, along with practical advice for maintaining a clean environment for accurate profiling.

Special Guests: Jack Franklin

Show Notes

Jack Franklin is a Senior Software Engineer at Google. He joins the panel to dive deep into the world of performance optimization. They explore the sophisticated capabilities of Chrome DevTools, focusing on the performance and performance insights panels. Jack shares invaluable tips on utilizing tools like Lighthouse and the flame chart to prioritize and analyze web performance, along with practical advice for maintaining a clean environment for accurate profiling.
Join them as they decode the intricacies of debugging, from handling long tasks and layout thrashing to understanding the context of flame charts and network requests. Plus, they discuss the collaboration between Chrome and Microsoft Edge, valuable educational resources, and even touch on topics like involvement in local politics and upcoming movie releases. Whether you're a seasoned developer or a tech enthusiast, this episode is packed with knowledge, humor, and practical advice to help you master web performance optimization. Tune in now!

Socials


Picks

Transcript

Charles Max Wood [00:00:05]:
Hey, everybody. Welcome back to another episode of JavaScript Jabber. This week on our panel, we have Dan Shappir.

Dan Shappir [00:00:12]:
Hello from a warm and sunny Tel Aviv.

Charles Max Wood [00:00:15]:
I'm Charles Max Wood, and I ducked out a standup to come to this, so I'm excited. We have a special guest this week that is Jack Franklin. And, Jack, you're based out in the UK. You wanna tell us a little about yourself?

Jack Franklin  [00:00:27]:
Yeah. Hey. Thanks for having me. Yeah. My name is Jack. I work at Google on Chrome DevTools, focusing on the performance tooling. And yeah, I'm based just outside of London, where it is sadly not as sunny as it is for Dan.

Jack Franklin  [00:00:39]:
A little bit cooler here.

Charles Max Wood [00:00:41]:
Yeah. We we get that all the time. It'll be like, you know, minus 800 degrees here in Utah. And he's like, yeah, it's like 60 degrees. It's like, okay. Just go away and come back next

Dan Shappir [00:00:54]:
spring. Yeah. There's a downside to that, which is how hot it'll get around July and August. So you know?

Charles Max Wood [00:01:02]:
Yeah. Yeah. It gets hot here, but it's a dry heat. Like, I lived in Italy for 2 years, and, boy, it gets muggy there. It's like, yeah, it's a 100 degrees, but it's a completely different 100 degrees.

Dan Shappir [00:01:15]:
So Yeah. That's for sure.

Charles Max Wood [00:01:16]:
Yeah. Alright. Well, let's dive in and talk about, Chrome Dev Tools. Jack, before we get too far in, I just wanna get a little bit of a story here. Like, how do you wind up being a Chrome DevTools person?

Jack Franklin  [00:01:31]:
Yeah. A lot due to good timing, I think. So I've been working in the front end space for all my career, really, since leaving university, and I've been at Google now just over 4 years. It was sort of four and a half years ago that I went to a conference in the Netherlands. I did a talk on how, at the company I was at before Google, we were using React and building components and just kind of rearchitecting our front end in a component driven manner, talking about design systems and reusability and all that kind of stuff. And someone sitting in the audience happened to be working at Google on DevTools and had a space on their team, looking to hire someone in the London office. And so he spoke to me afterwards, and I applied and kind of went through the interview process and all that stuff and, thankfully, came out the other side intact and with a job offer. So that was that. And then, yeah, 4 years later, I'm still here.

Charles Max Wood [00:02:27]:
Oh, wow.

Jack Franklin  [00:02:28]:
Yeah. Yeah. It was yeah. It's been an interesting ride for sure.

Dan Shappir [00:02:33]:
So your background is all web tech because, you know, obviously, when you think about somebody who's working effectively on the Chrome browser, you also think about, hey, that person needs to be coding in C++ or stuff like that.

Jack Franklin  [00:02:48]:
Yeah. So I have a computer science degree, but after that, the majority of my experience has been front end. I've certainly never done any C or C++ or anything in that region. I was a Ruby developer for a while. I've kind of dabbled with various languages, but professionally, yeah, the majority of my time has been JavaScript, TypeScript, and various front end tech. The thing with DevTools is it's actually a web application itself. So the entire Chrome DevTools is built with HTML, CSS, and TypeScript, and it gets embedded into Chrome. So although I don't have the background to dive into the sort of Blink and Chromium back end side, I'm very able to be productive on the front end, you know, and work on DevTools.

Charles Max Wood [00:03:31]:
So, when I was a new developer, my mentor and I would basically joke that we were gonna write an application, for code quality, not performance, but it was gonna be a really simple app: you would submit your code, and it would just come back and say, this code is terrible. Right? Because everybody's code is terrible. So how

Dan Shappir [00:03:54]:
Isn't that correctness problem, like, an NP-complete problem?

Charles Max Wood [00:03:58]:
I don't know. But how do you write a performance tool that's useful and not just, your performance is awful?

Jack Franklin  [00:04:08]:
Yeah. Yeah. That's the challenge. I think as well, a lot of performance is about contextualizing it. So, you know, depending on the type of app or website you have and how it's used and the average user, what technology they use to access it, what we mean by good or bad performance can vary a lot. So I think the constant challenge for us is trying to strike that balance.

Charles Max Wood [00:04:30]:
Right.

Jack Franklin  [00:04:31]:
And, also, I think the balance we have is we hear the feedback all the time. If you load up, you know, go to a big website and use the dev tools performance panel to, you know, reload the page and record a trace, you get presented with this flame chart and this stuff, and there's just boxes and different colors everywhere, and there's various lines and markers and all sorts. And it is very overwhelming if you've never used the tool before. Even if you have used the tool before, it can be like, oh my goodness, where do I even begin to start with this? So, yeah, what you're alluding to is similar to the battle we have: we want to show you all this information so you can debug and dive in and understand truly why your website is, let's say, loading slowly. But equally, we want to try and not overwhelm you with all this information upfront, and that's kind of the never ending tension that we have. And I don't think right now we've struck the right balance, but it's something we're trying to resolve and work on.

Charles Max Wood [00:05:25]:
Yeah. It's funny you said Flame Chart. My brain immediately went to there's a fire here, and there's a fire here. The fire there?

Dan Shappir [00:05:33]:
Well, in the case in the case of dev tools, it's an upside down flame chart. The fire is burning downwards.

Jack Franklin  [00:05:39]:
Right. Yeah. Yeah. Yeah. So yeah. And and other tools will show it the other way with the, you know, flips. There's a lot of different ways you can slice the the data, and I think, yeah, one of the challenges as well is how to how do we get people to understand and interpret what we're showing them correctly. Right.

Jack Franklin  [00:05:58]:
And that is never ending.

Dan Shappir [00:05:59]:
But before we dive into the details of this or that dev tools performance, well, dev tools panel in general, I want to, first of all, point out what an amazing tool dev tools are. And people today kind of take them for granted. We've actually had an episode about dev tools more in general. I forget who came on the show to talk about this, somebody else from your team. We'll need to check afterwards and put it in the show notes.

Jack Franklin  [00:06:32]:
Mikael Hablisch, I I think.

Dan Shappir [00:06:33]:
Yes. Yes. It was. Yes. It was. Thank you for reminding me. I I still remember the days you know, I'm old enough to remember when we didn't have such tooling and, you know, how we would debug things with alerts. And and afterwards, we got it

Charles Max Wood [00:06:51]:
for desktop. What are you talking about?

Dan Shappir [00:06:54]:
And how, yeah. Well, at least now you can console log, which is at least a bit better. And then we got it for desktops, but initially we didn't have it for mobile. And then we got the ability to actually do it on mobile. So I think, to a great extent, the modern web as we know it today could not exist without dev tools. And of all the dev tools in all the browsers, I like Chrome DevTools, or, you know, Chromium DevTools, let's say, the most. So kudos for that. Also, the other thing I wanted to mention is that if we look at the panels in Chrome DevTools, like, half of them have to do with performance in one way or another.

Dan Shappir [00:07:42]:
Like, the network tab, the performance tab, the performance insights tab, the Lighthouse tab, the memory tab, and the list goes on and on.

Charles Max Wood [00:07:50]:
Mhmm. Yeah. I have to say, just to add to this, you know, initially, I got into web development and realized that there were dev tools in there. Right? And I think the first ones I saw were on Firefox. Right?

Dan Shappir [00:08:03]:
And I

Charles Max Wood [00:08:03]:
was like, oh, this is handy. Right? And so then I'm starting to use some of the same tools on Chrome once I start using Chrome. And then somebody showed me the network tab, and my mind was blown. Right? And then somebody showed me one of the other tabs, I don't remember which. And, you know, it was like, woah, this does so much more. And so, you know, I have to admit, I haven't really deeply used the performance tab, so I'm hoping you're gonna tell me, yeah.

Charles Max Wood [00:08:28]:
Well, you've been missing out, and here's here's what it does.

Jack Franklin  [00:08:32]:
Hopefully. Yeah. I think that the first developer tools I remember using were, Firebug, I think it was called. I think it was an extension to Firefox, like a third party thing.

Dan Shappir [00:08:42]:
You need to install it.

Jack Franklin  [00:08:44]:
Yeah. Yeah. And the first time, I think I was debugging some CSS issue. And the first time I realized that, using Firebug, I could adjust the CSS in the tool and have it update in real time, it was just incredible to me. Yeah. And, obviously, the tools have come a long way since, but I'll always remember Firebug whenever we talk about dev tools in any situation, I think.

Charles Max Wood [00:09:06]:
Yep. It had the neat little bug icon in the toolbar. Yeah. So, you wanna give us kind of the 10,000 foot view on the performance panel? And I guess what I'm looking for is sort of, a, how do you expect developers to use it? And also, well, let's just start there, because I can't remember what the other piece was.

Jack Franklin  [00:09:30]:
Sure. Yeah. So, really, I think when we talk about performance, or when people think about web performance within the context of Chrome and Google and all of that, we're really talking about the Core Web Vitals, which are these metrics that Google has that we think represent better performing websites that provide better experiences to users. So we expect most people using the performance panel would be debugging their Core Web Vitals scores. The first one is LCP, which is Largest Contentful Paint. We can dive as deep into what these mean or don't mean as you like, but LCP really represents how quickly, or not, your page was loaded and visible to the user, so they could kind of read it and see what the content is. And so that's really all about loading speed. What is delaying loading your website? And that's one thing that the performance panel is really good at.

Jack Franklin  [00:10:22]:
You will record your website loading. It shows you screenshots of the process as various elements loaded and were rendered. It shows you the network requests, and it shows you all the JavaScript that was happening at the same time. So that becomes a very good way to see, oh, there's this bundle of JavaScript that was 5 megabytes in size that took 10 seconds to download, and I can see visually that it stops, you know, my page rendering and showing. Then you also have CLS, which is layout shifts. That's content shifting around the page as it's loading in. The classic example here is a user goes to click a button, and as they do it, another button loads in, and they click the wrong button, which can probably be frustrating.

Charles Max Wood [00:10:58]:
I've never had that happen to me before, and it didn't make me really, really angry either.

Jack Franklin  [00:11:03]:
I've had it. I've had it before, like, a banner has popped in at the top. It has pushed the page down by 50 pixels. It's very frustrating, especially

Dan Shappir [00:11:11]:
It's the banners. It's mostly the banners.

Jack Franklin  [00:11:15]:
Yeah. That's what it'll be most of the time. But, again, that's the kind of thing we'll show you, and again, using screenshots, you can see kind of before and after those and try and figure out what element was responsible for that and go from there. And the final web vital is called INP. It stands for Interaction to Next Paint. Broadly, it measures how responsive and interactive your page is. So when the user clicks on, let's say, a button, how long is

Dan Shappir [00:11:38]:
it until the UI updates to show

Charles Max Wood [00:11:38]:
them that, you know, your your app has

Jack Franklin  [00:11:39]:
dealt with that button click and is processing, or has registered it, or is doing whatever. I'd say typically, because that's newer, debugging interactions, where users have to interact with your page, is harder. Debugging loading speed is a bit easier, because normally you can run the page load, you might change something locally, rerun it again, and you can kind of replicate that fairly straightforwardly. Interactions, where you have to try and, you know, click a text box and type in at a certain time to hit some edge case which is causing slowness, can be trickier. But when we think about what features we should be building and working on in the performance panel and what kind of user journeys we want to make easier, it's those web vitals that we're motivated by, really.
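
For listeners who want to see the raw numbers behind these metrics outside of DevTools, here is a minimal sketch using the standard PerformanceObserver API. It only covers LCP and CLS; INP is trickier to compute by hand, which is one reason Google's web-vitals library is usually the easier route. This is an illustration, not something walked through in the episode.

```js
// Minimal sketch: watching the raw entries behind LCP and CLS from the page itself.
// A production setup would more likely use the web-vitals library, which also handles INP.

// Largest Contentful Paint: the last entry observed is the current LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Cumulative Layout Shift: sum the layout-shift entries not caused by recent user input.
let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('CLS so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });
```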

Charles Max Wood [00:12:26]:
Gotcha. So how does that actually show up in the panel? Right?

Jack Franklin  [00:12:31]:
Yeah. So the main panel is made up of a series of what we call tracks. These are kind of horizontal rows that go all the way across the screen. They show specific categories of events or things that happened. So an obvious one is the network track. This will show you all the network requests that happened during that time frame, and it shows them so that they're representative of time. So the wider the rectangle, the longer that particular request took. Yeah.

Jack Franklin  [00:12:56]:
That's worth noting. It's a simplification, because there are additional bars on either end which represent different parts of the network request in its lifecycle.

Dan Shappir [00:13:05]:
I wanted to, well, you mentioned it. For those, you know, it's kind of duplicate information with the network tab, in the sense that in both cases, you can see the rectangles, the kind of waterfall that represents the network activities. If I'm just analyzing the network itself, and often that's where I start, then to be fair, I prefer doing that on the network tab initially, if for no other reason than that it's much less busy. Yeah. But, like you mentioned with the bars at the end, once I start analyzing the overall activity of the page and start correlating the downloads with the JavaScript activity, because, let's say, JavaScript is triggering a fetch request, and then when that fetch request finishes, JavaScript is triggered again to process the results of that fetch request. So when I do that holistic analysis, that's where the performance panel really shines. And you might want to explain, you know, either now or later, those lines at the end.

Dan Shappir [00:14:24]:
Because, like you said, there's the actual bar itself that has, like, a lighter color and a darker color, but it also has the bars, like the mustache, I think you call it, at the beginning and the end.

Jack Franklin  [00:14:39]:
Yeah. We call them whiskers. Oh, whiskers. Okay. But yeah. Same same concept.

Charles Max Wood [00:14:44]:
I love it.

Jack Franklin  [00:14:46]:
And yeah. So I think you're right. If you're debugging purely network issues, the network panel, I mean, just real estate wise, has more room, because it's only showing you network things, whereas in the performance panel, we have to fit the network in alongside all the various other things that we show. And, yeah, the key thing with the performance panel, I think, is what Dan is touching on: you've got all these tracks that have different types of information in them, but, crucially, they're all aligned based on timestamps. So you can look vertically down and see exactly what stuff was happening at this particular timestamp. So what network requests were happening, then we jump to a track that shows a screenshot of your page at that given time. Okay? So this is what the page looked like. Then we jump to any layout shifts that happened at that time, and then it's main thread activity, which really is what heavy JavaScript was running.

Jack Franklin  [00:15:31]:
Let's say re-rendering, you know, an Angular component or React component or whatever else it may be. So where the performance panel's power comes from, I think, is that at any point in time, you can see, across a load of different categories, what the browser was having to do to provide the user with your website or web application.

Charles Max Wood [00:15:51]:
You know, it's funny you're talking about all of this, and I can imagine why it's so hard for you to figure out how to give people a concise picture of the thing.

Dan Shappir [00:16:00]:
Yeah. Oh, yeah. For sure. The the performance panel is, I think, by far the busiest of of all the panels to the extent, I think, that you created the performance insights panel to provide a slightly less busy view as it were. Yeah. But on the other hand, once you grasp it and you master it, I think it really becomes a superpower. Like, it it definitely takes you to the next level in terms of your abilities to analyze and understand what the web page is doing?

Jack Franklin  [00:16:37]:
Yeah. So, to touch on the performance insights panel, this is a distinct panel from the performance panel, very confusingly named. So I'll give you that one.

Jack Franklin  [00:16:46]:
We launched this, I think, last year. It's very much an experimental panel. The goal was to kind of explore how we could, without hiding away all this detail, which, as Dan said, once you understand it and what it's representing, is really powerful and really lets you dive deep, but for users maybe newer to this sort of topic is very overwhelming. So the performance insights panel was an experiment in how we can pull out what we call insights and try and guide the user to, hey, here's a problem that affected your page load, in a way that didn't need them to dive into all the nitty gritty detail. It was, I think, semi successful. The feedback was a good mix of, hey,

Jack Franklin  [00:17:27]:
this is really useful, this sort of high level view, to, hey, this is cool, but I want another level of detail, so I will never use this. I will always use the performance panel. And long term, we didn't really want to maintain 2 distinct panels for the foreseeable future. So, actually, the performance insights panel will eventually be removed, but what we learned from it in terms of insights will be coming into the performance panel. There's a blog post, which we can link to, I guess, in the show notes, where we kinda talk more about these plans. But, yeah.

Jack Franklin  [00:17:58]:
We're we're trying to figure out a world where we can give people these useful kind of insights like, hey. This particular network request was the source of a lot of problems for you in this this page load, but also provide all the data so people like Dan can dive in and and find exactly what they're looking for.

Dan Shappir [00:18:14]:
So first of all, it's news to me that, performance insights is kind of a temporary thing. So, thank you for highlighting this. And you know what you should do. The answer is AI. Have a

Jack Franklin  [00:18:32]:
chance. Yeah. We're we're still figuring out exactly how AI slots into all this. But, yeah.

Charles Max Wood [00:18:38]:
Yeah. You're you're at Google. So I was gonna say, you handed off to chat GPT, but you handed off to Gemini and say, here's the output. What do you what do you make of this?

Jack Franklin  [00:18:49]:
Yeah. Something like that. It's so hard as well, because I really learned this when I started working on building the performance panel and understanding, behind the scenes, that the thing that powers the performance panel and these flame charts is what are called trace events. These are really just objects of data that Chrome emits during a page load.

Dan Shappir [00:19:09]:
Mhmm.

Jack Franklin  [00:19:10]:
And you begin to see, okay, every time I have this event, I'll always have this other event that is named like this, and you can try and build relationships between these events. But then you release this feature into the world and, you know, a thousand people use it across a thousand different websites, and they just blow all your assumptions out of the window, because there's such a variety of technologies and approaches and all the rest of it. And so the challenge would be making any AI useful whilst not, I think, pigeonholing it into a few certain categories. But, yeah, who knows? I'm sure it'll be a space we'll be exploring, along with the rest of the Internet, at some point.
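
For anyone curious what those objects of data look like, here is roughly the shape of one complete event in Chrome's Trace Event Format; the field values below are invented for illustration, and real traces mix many phases and categories.

```js
// Roughly the shape of one "complete" (ph: "X") trace event; the values are made up.
const exampleTraceEvent = {
  name: 'FunctionCall',          // what happened
  cat: 'devtools.timeline',      // category the event belongs to
  ph: 'X',                       // phase: "X" marks a complete event with a duration
  ts: 170884123456,              // start timestamp, in microseconds
  dur: 1250,                     // duration, in microseconds
  pid: 4242,                     // process id
  tid: 775,                      // thread id (for example, the renderer's main thread)
  args: { data: { functionName: 'renderList', url: 'https://example.com/app.js' } },
};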

Dan Shappir [00:19:47]:
So, going back to the details of the performance panel, I think one of the things that confuses people the most about it is the fact that it simultaneously shows 2 timelines. At the very top, you have a timeline that encompasses the entirety of the recorded period, while slightly below it, you have, like, a partial timeline.

Jack Franklin  [00:20:14]:
Yeah. So we have the mini map, as I would call it, at the top, which shows sort of the activity over the whole trace period. But then what we let people do is, there are 2 kind of handles which you can drag to select a subset, and then the rest of the panel is scoped to that subset. The idea being that you can quickly zoom in on a period of time that is particularly interesting to you. One of the things we shipped, I lose track of time, earlier this year, is the ability, once you've zoomed in, to actually click a button and kind of zoom in again and save that zoomed in state. But one of the things we're thinking about, which was something we wanted to do with the performance insights panel and didn't ever get around to, is, could we, for example, automatically detect areas of interest and provide you with shortcuts to jump to particular time spans that we think contain the most relevant information. So if you've got a page that takes 5 or 6 seconds to load, clearly that's not good and you'd like to get that down.

Jack Franklin  [00:21:10]:
But, normally, there'll be 3 or 4 culprits within that 6 seconds where where your attention is best focused. Well, you

Dan Shappir [00:21:16]:
you do kind of do it already, don't you? I mean, already when you record the page load, you often, like, zoom in to a particular period of interest initially.

Jack Franklin  [00:21:26]:
We will, but that's not because we've identified that period as interesting. It's because we've identified the rest of the trace as effectively dead space. So the example here is, say you record for 10 seconds and all the activity happens in the first 3 seconds, we'll try and zoom into those 3 seconds. So really, we're just trying to trim: if there's dead space at the start or end of your timeline, we're trying to get rid of that. We're not really identifying, like, this is where there was a slow network request, and so we should draw the user's attention to that. That's something, I think, on our radar to explore, that idea generally of, can we highlight areas of interest. And it's something that in Lighthouse, if you do a Lighthouse report on a page, we'll also show you estimated savings for a bunch of the audits.

Jack Franklin  [00:22:12]:
So Lighthouse will say, this image that you loaded was, you know, x megabytes big. If you convert it to whatever and optimize it, you might save one second on this network request. And so what we could do is begin to rank, you know, things we find based on how much potential savings we think there might be, to help the developer prioritize as well. Because I think, you know, in a perfect world, developers have endless amounts of time to spend optimizing every millisecond of their page load, but, you know, we all know that's not true. And sometimes this work is prioritized against other work, and you don't get to spend as much time as you'd like on it. So, like, can we direct you to the most impactful things sooner? So that if you only have a few hours or a day to work on this thing for your company, you can have the most impact.

Dan Shappir [00:23:01]:
By the way, you kind of mentioned here how you can maybe cross pollinate between Lighthouse and the dev tools by bringing Lighthouse insights into the the the performance panel. I would like to mention that one cool feature of the Lighthouse that is built into dev tools is that you can jump from Lighthouse into the, performance panel for that Lighthouse session.

Charles Max Wood [00:23:32]:
Oh, wow.

Jack Franklin  [00:23:33]:
Yeah. Yeah. So you can click a link, and it will take you into the performance panel with the recording of the page load that Lighthouse used, so you can dive in. And it's actually something we're working on, and this is in our blog post from earlier this year: we are literally working on integrating Lighthouse's kind of analysis tools deeply into the performance panel. So the long term vision is that there won't be a distinct Lighthouse panel within dev tools. Not because we're removing that functionality, but it will kinda be combined into the performance panel. The exact details obviously have to be ironed out, and there'll be a lot of experimentation on how we do this, but that is something that we're working on. And just to be very explicit, that isn't Lighthouse going away.

Jack Franklin  [00:24:15]:
It means that if you want to work on performance in dev tools, you go to the performance panel, and you don't have to decide between the performance panel or the performance insights panel or the Lighthouse panel. We're trying to kinda collate the best of everything into one panel, like, that is the place to go.

Dan Shappir [00:24:32]:
So, when we record a session within the performance panel, the obvious way is to simply click the record button, which starts a recording, and then you can do whatever you want, including reloading the page, and then you explicitly stop the recording when you want. The other option is to click the reload button, which automatically instigates a reload of the page with the recording enabled and then also automatically stops the recording when loading is complete. How do you determine when to stop the recording, by the way, in that scenario?

Jack Franklin  [00:25:11]:
Oh, you're pushing my memory of the code, but, I'm pretty sure it's after a particular event, like, a particular page load event, I think we wait a few more seconds. I think that you you I'd have to, I'd have to look up the exact implementation. But I think it's it's something fairly straightforward, like, wait for this particular event and then allow a few seconds of grace, for anything else to flow in. But, yeah, the idea with that one is is we expect people to use that when they're explicitly debugging their pages load time. So we wait for when the event that basically tells us that the page is loaded, give a few extra seconds just in case, and then and then stop if I remember rightly. But don't quote me on that.

Dan Shappir [00:25:53]:
So usually, when I'm loading a page, you know, in some cases, let's put it differently. When it works, I like to use the reload button simply because it's the faster option. I have run into some cases with some websites where it stopped too soon, and then my only recourse was to actually click record explicitly, reload explicitly, and then stop it explicitly when it's done. But, yeah, when the reload works automatically, it's the cleaner, it's the nicer option. Now, as long as we're talking about that, though, another thing, or another suggestion that I have, or a gotcha to watch out for, is that a lot of people just analyze the loading performance in a regular tab in the regular browser, and that means that they're also kind of profiling all their extensions, which may not be what they want to do. So very often, what you really want to do is to try to analyze it, at least to begin with, in the cleanest environment possible. And in that context, I wanted to ask you, because there are really two ways to achieve it. One way is to simply open up an incognito window and, you know, do the recording in that.

Dan Shappir [00:27:21]:
Another option is to create a profile just for debugging purposes. So, basically, just create a local profile, has no extensions in it, and use that. Which would you recommend?

Jack Franklin  [00:27:34]:
I don't think it makes too much difference. I'm trying to remember if people would have to think about caching if they had a distinct profile, because also, often you want to make sure in dev tools that, in the network panel, you've deselected, or ticked rather, disable cache, in order to get a fresh page load. So I'm trying to remember if incognito effectively does that, but it wouldn't matter if you tested the same website a few times in the same incognito tab. So I think it's really a matter of personal preference. But, yeah, that is a common kinda gotcha: when you're reloading the page, you are, you know, reloading the page within your active Chrome profile, and we can't do anything to stop all those various extensions and whatnot running. So, yeah, we do recommend one of those approaches for traces, ideally, if you're trying to be the most accurate. I think another thing, kind of on that same theme, is we see people testing a lot on their very fast Ethernet connection in their office, in the center of some big city, which has amazing connection speeds. Generally, I think people need to consider if they should be throttling the network, which is something you can do from within the panel before you start a recording, to try and emulate, say, a 3G connection or something slower, if your users maybe are typically on mobile a bit more.

Jack Franklin  [00:28:53]:
Or if generally your users just aren't on a big Ethernet, you know, connection.
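
One way to guarantee that clean, throttled environment, if you want a repeatable recording rather than a manual one, is to script it. The sketch below uses Puppeteer, which is my choice for illustration rather than something recommended in the episode; the URL and throttling numbers are placeholders.

```js
// Sketch: record a page-load trace in a clean, throttled environment.
const puppeteer = require('puppeteer');

(async () => {
  // A freshly launched browser: temporary profile, no extensions, cold cache.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Roughly emulate a slow connection (arbitrary values, not a DevTools preset).
  const cdp = await page.createCDPSession();
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                          // added round-trip latency in ms
    downloadThroughput: (750 * 1024) / 8,  // ~750 kbit/s, in bytes per second
    uploadThroughput: (250 * 1024) / 8,
  });

  // Record the load; the resulting trace.json can be opened in the
  // DevTools performance panel via its "Load profile" button.
  await page.tracing.start({ path: 'trace.json', screenshots: true });
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });
  await page.tracing.stop();

  await browser.close();
})();
```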

Charles Max Wood [00:28:58]:
So one other thing with mobile is, and I think I've seen people do this, but I'm not sure exactly how. And I guess this is more a general dev tools question than a performance question. But if I load the app on my phone, sometimes it does things a little differently anyway. Yeah. So is there a way to hook up my dev tools on my computer so that I can watch the performance on my phone?

Jack Franklin  [00:29:23]:
Yeah. There is. So you would plug in your phone via a USB lead, and then there's dev tools remote debugging. I don't quite remember the steps to get it working, but there'll be documentation. But, yeah, that is an explicitly supported use case. So you can do it, I think.

Dan Shappir [00:29:39]:
Yes. You definitely can. There's just one really important caveat, which goes to the episode we recently recorded with Bruce Lawson. And that's the fact that Chrome on iPhone is not Chrome. So so if you want to do if you want to debug Chrome on a mobile device, you can debug Chrome on an Android device. But if you want to debug Chrome on an iPhone, that's kind of a problem because it's not really Chrome.

Charles Max Wood [00:30:09]:
Yeah. Well, that's just another reason since it's a different engine and everything. Right? Because it's all using the WebKit engine.

Dan Shappir [00:30:16]:
Exactly.

Charles Max Wood [00:30:17]:
Then then I definitely want to be checking it out. Right? Because it may it may do something entirely completely different or do it in a different way.

Dan Shappir [00:30:27]:
It will. But the problem is that I don't think you can use Chrome DevTools to debug Safari. Right. And Chrome on iPhone is effectively Safari. Yeah. So, you know, until we finally get real Chrome on the iPhone, you know, fingers crossed, for now, if you really want to do the mobile debugging rather than simulating a mobile device, you really have to use an Android device. Yeah.

Jack Franklin  [00:31:04]:
Yeah. Correct.

Dan Shappir [00:31:05]:
Now, we talked about the tracks, the structure of the panel. At the top, you have what you called the mini view. How did you call it? The mini map. The mini map. And, you know, it's really a useful starting point, and you've added a lot of useful information into it. So, you know, you've got the screenshots. You've got the graph showing periods of CPU activity, even color coded to show what the CPU is busy doing. Not so long ago, I learned that when it's hatched, it means that the CPU is busy, but it's off of the main thread. Is that correct?

Jack Franklin  [00:31:53]:
I honestly couldn't tell you off the top of my head without checking. So is this, like, the horizontal bars that appear, or is this kind of hatched in the sort

Dan Shappir [00:32:03]:
of Yeah. So I'm talking about, you know, the graph at the very top of the mini map, you know, where you see the yellow graph for the JavaScript, for example. And, occasionally, you would see, like, a hatched pattern. Yeah. And you also have those small bars that are either dark red or light red to indicate long tasks, as I recall.

Jack Franklin  [00:32:27]:
Yeah. Yeah. So they do indicate long tasks. I don't remember the exact heuristic for why areas are hatched, but, yeah, it will be something problematic. I don't remember the exact criteria for what it is off the top of my head, but I will check it later

Dan Shappir [00:32:43]:
on. By the way, long tasks, or long frames as they're sometimes called, are basically, again, correct me if I'm wrong, when the main thread is busy and consequently is unable to produce the desired frame rate. In other words, you get jank.

Jack Franklin  [00:33:08]:
Yeah. Exactly that. So the browser wants to create a frame, but it is not able to. And here, create a frame basically means, you know, paint an update to the user, update the website. It's unable to because JavaScript is keeping the main thread busy, and therefore the browser doesn't get a chance to do it. So the best way to deal with that is ideally to split up your JavaScript into smaller tasks. There are now APIs, I don't remember what stage they're at, but they're coming, around scheduling.

Jack Franklin  [00:33:36]:
So there will be a scheduler API where you'll be able to enqueue some code, like a callback function, to run, but tell the browser what priority that callback function should have. So there are more powerful scheduling tools coming to the browser. They may already be in there. I don't quite remember. But the way other people do this is you can use the requestAnimationFrame callback. Historically, I think people used setTimeout and gave it a function and a time of 0, which really just gives the browser, you know, a chance, a gap, to do some other work. So that's the main thing that is important with long tasks. It's too much JavaScript running without a break.
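
To make the splitting-up advice concrete, here is a rough sketch of both the old-school setTimeout gap Jack mentions and the newer scheduler.postTask API, which is currently Chromium-only, so it is feature-detected below. The expensiveWork function is a hypothetical stand-in.

```js
// Hypothetical stand-in for whatever per-item work your app actually does.
function expensiveWork(item) {
  for (let i = 0; i < 1e6; i++) {} // simulate some CPU time
}

// Old-school gap: a zero-delay timeout yields the main thread so the browser can paint.
function yieldToBrowser() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processItems(items) {
  for (const item of items) {
    expensiveWork(item);
    await yieldToBrowser(); // one long task becomes many short ones
  }
}

// Newer scheduling API, where supported: tell the browser how urgent the work is.
if (globalThis.scheduler && scheduler.postTask) {
  scheduler.postTask(() => processItems([1, 2, 3]), { priority: 'background' });
} else {
  processItems([1, 2, 3]);
}
```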

Dan Shappir [00:34:17]:
Yeah. I'm I'm kind of amused by that because, you know, we talk about ways to do splitting on idle callbacks, that time out like you mentioned, and other APIs that are coming. But realistically, I think, unfortunately, a lot of web devs are not really using these APIs directly anymore because we're working primarily inside frameworks, like, React or Vue or Angular. And, you know, they control where when our code runs, and and how execution is partitioned. So for example, in, in if you're using React, it's probably about using suspense and and use transition and stuff like that rather than, you know, manually scheduling code in most cases. I mean, you know, obviously, there are other different scenarios, but but that's for for better or for worse, what I'm seeing, that's the reality. And and more often than not, like a long period, for example, when using React, a long task would be the hydration. And, you know, a lot of a lot of developers aren't really, you know, sure what, you know, what they can do about it anyway.

Dan Shappir [00:35:34]:
So it's really up to the framework to resolve such issues.

Charles Max Wood [00:35:38]:
Yeah. It's kind of like trying to get my sister off the phone when I was a teenager.

Jack Franklin  [00:35:45]:
Yeah. I think with some of those APIs, some of the motivation is that they will be able to be used by frameworks. So with some of the APIs around scheduling in particular, and I wasn't involved in the design process of them at all, it is thinking about how developers can use this to break up their JavaScript, but it's also thinking about how framework authors can use this to very easily introduce kind of batched or scheduled updates, using what's built into the web platform and the browser, which has a couple of benefits really. Firstly, it should be more accurate and, bluntly, better implemented, unless you spend a lot of time working on scheduled updates; they can be very fiddly. And it also means that those framework authors don't have to write code themselves to manage the scheduling and batching, or at least they have to write less, because they can also lean on the kind of built in browser API. So some of the work in these APIs is not necessarily aimed at developers building websites.

Jack Franklin  [00:36:44]:
It's aimed at those building the abstractions that that a lot of people are building on on top of.

Dan Shappir [00:36:49]:
So you mentioned what is probably the most colorful colorful part of the performance panel, which is the flame chart. Mhmm. You know, obviously, it's kind of difficult to explain without showing, but can you briefly describe what it is and how it's how you read it as it were?

Jack Franklin  [00:37:11]:
Yeah. I'll give it a go. This really is a case where a picture paints a thousand words, but I'll try and say fewer than a thousand words. So the flame chart represents activity that was happening in your page. We're talking here about JavaScript that ran. And the way to look at it is, you know, a rectangle on the flame chart was a bit of code that ran for a certain period of time, represented by how long, or short, or wide, I should say, the rectangle is. What people will also see is there's nesting within the flame chart. So when you look at it, it almost looks like a tree structure.

Jack Franklin  [00:37:43]:
So you can see that there are rectangles, then below a rectangle, there will be a smaller rectangle. Then below that, there could be many levels of these sort of nested rectangles. But if you look at the rectangle at the top and then go down the flame chart, all the rectangles below the first one will be smaller than, or contained within, their parent. So you think of this as a tree, or a parent with a child, and that child might have more children. And so what this lets you do is it lets you look at the top row, which might say long task, and it might say it was a second long. Then as you take that and you look vertically down the flame chart, you're gonna see all the functions and code that ran within that long task that added up to make its total duration. The reason this is so helpful is you might have a long task that's a second long. If you look one level deeper at its children, you might see that one of its children took 0.01 of a second, so that's really not the issue.

Jack Franklin  [00:38:32]:
The other child took, you know, 0.999 of the second. And so what that lets you do is it lets you see, okay, there was a long task. You can look down to see the culprits or the the causes of that long task and really dive in, to the weeds of it.

Dan Shappir [00:38:47]:
And just to add, and potentially it helps clarify: think about a function a that calls function b twice. So you'll have a wider rectangle for a, and contained within it, one level down, would be 2 rectangles, you know, consecutive rectangles for the separate calls to b. And in this context, it's useful to remember that JavaScript is essentially a single threaded language. You know, we have workers, but they effectively run independently of each other. So it's always contained within, as it were.
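
In code, Dan's example is just this; in a recorded trace, a() shows up as one wide rectangle with two narrower b() rectangles sitting directly beneath it.

```js
// One wide rectangle for a(), and nested one level below it,
// two consecutive narrower rectangles, one per call to b().
function b() {
  for (let i = 0; i < 1e7; i++) {} // enough busy work to be visible in a trace
}

function a() {
  b();
  b();
}

a();
```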

Jack Franklin  [00:39:26]:
Yeah. Correct. Yeah. And, you know, a long task can be caused by 1 of 2 things, broadly: either one function that took an absolute age to run, or one function that doesn't take very long to run but got called, you know, a million times. And so the flame chart lets you figure out which one of those 2 it was. So, obviously, the resolution, the way you fix that, is different. Yeah. Generally, it all boils down to, can we do less work, can we run fewer lines of code, but sometimes it can also highlight where you might have a function that you think you don't need to optimize because it isn't very complicated.

Jack Franklin  [00:40:02]:
But if it's being called, loads and loads of times, then it may well need to be, have a bit more attention given to it.

Dan Shappir [00:40:09]:
Now, it's called a flame chart, I guess, because of the shape, but it's also called a flame chart, I think, because of the colors, which are often yellow, red, purplish, etcetera. That color coding, how does it work? What's kind of the logic behind which color each rectangle gets?

Jack Franklin  [00:40:28]:
Dan, you're really pushing my memory of all these details.

Dan Shappir [00:40:32]:
That's what I'm here for.

Jack Franklin  [00:40:35]:
So, yeah, yellow, if I'm right, is scripting most of the time. The one that is particularly interesting is purple, which is, like, browser layout type events. This might be a recalculate styles event, which is where something happened which made the browser have to do a bunch of work to check that its latest layout of the page is accurate. The most common gotcha with those is if you call a method like getBoundingClientRect on an element, or if you read an element's geometry. Paul Irish has got a great GitHub page with all the different ways you can trigger this. But if you read an element's width or height or scrollTop and the various things, the browser, before it gives you that value back, has to make sure that what it thinks the layout is, is what the layout actually is. So it will do some extra work to kind of verify that. And so what you want to do is minimize how often that happens, because that can be quite expensive on a complicated page.

Jack Franklin  [00:41:29]:
So that's purple.
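
As a tiny sketch of that gotcha (the element and class names are hypothetical): the geometry read cannot be answered until the style write has been applied, so the browser is forced to lay out synchronously inside your JavaScript.

```js
// Forced synchronous layout: write, then immediately read geometry.
const card = document.querySelector('.card');     // hypothetical element
card.classList.add('expanded');                    // write: invalidates the current layout
const width = card.getBoundingClientRect().width;  // read: forces layout right now
console.log('card width after expanding:', width);
```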

Dan Shappir [00:41:32]:
I'd like to tell a short story about that. So, a while back, a friend of mine contacted me about a problem that they were having with an application, which happened to be written in Angular, but that's neither here nor there in this regard. It was taking a very long time to render its primary view. And when I talked to them and we looked at the flame chart, it became pretty obvious that it was laying out a whole lot of the time. Effectively, it had layout thrashing. Because of the way that this application's UI was structured, it was structured as a lot of rectangles. And they built those rectangles in a way that a certain amount of text and images needed to fit nicely within them. And instead of using CSS to properly factor those sizes, they initially used JavaScript for it.

Dan Shappir [00:42:35]:
So they would, you know, put the content in a rectangle, get the dimensions for the rectangle, fix the content, and then do it for the next rectangle, and then the next rectangle. So, effectively, each rectangle required a relayout. They were changing some position and width aspects and then querying the width and height aspects. So for every rectangle, a relayout, and there were hundreds of such rectangles on the page. And that's what's known as layout thrashing, when you're forcing the browser to relayout multiple times just to render that initial view. And, basically, what I told them is one of 2 things. The less ideal solution would be, instead of one loop putting in the content, then getting the layout information, putting in more content, getting layout information, do it as 2 separate loops, or even 3 loops. Do all the positioning, get all the layouts, and then do all the fixes.

Dan Shappir [00:43:48]:
And that actually turns out to be faster, even though it's potentially counterintuitive, because the browser can do just the one layout instead of the layout thrashing I just described. Or even better, just do it in CSS. Let CSS do it for you, and then you don't need to do all that layouting at all. And that's what they ended up doing, and it went from, like, a 30 second load to being, like, literally like a second. So it was a huge win in terms of performance, and that's definitely something that you can see in dev tools. And I think now you can even see it better, because I think you even provide some attribution to the relayouts that occur.
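
A rough before-and-after sketch of the fix Dan describes; the boxes selector and the sizing logic are hypothetical stand-ins for the real app's code.

```js
// Hypothetical setup: lots of rectangles whose sizes get adjusted from JavaScript.
const boxes = Array.from(document.querySelectorAll('.box'));
const computeWidth = (box) => box.textContent.length * 8; // stand-in sizing logic

// Before: read-after-write inside the loop forces a relayout for every box.
for (const box of boxes) {
  box.style.width = `${computeWidth(box)}px`;         // write (invalidates layout)
  const height = box.getBoundingClientRect().height;  // read (forces layout now)
  box.style.height = `${height}px`;                    // write
}

// After: batch all the reads, then all the writes, so the browser can get away
// with far fewer layout passes. (Better still, as Dan says: express it in CSS.)
const heights = boxes.map((box) => box.getBoundingClientRect().height); // reads
boxes.forEach((box, i) => {                                             // writes
  box.style.width = `${computeWidth(box)}px`;
  box.style.height = `${heights[i]}px`;
});
```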

Jack Franklin  [00:44:38]:
Yeah. We can't detect every single instance, but if we can, we will draw arrows in the flame chart, which will take you from one event that caused another event. So we can often attribute why a layout happened. Or similarly, say you write code that has a setTimeout in it: when the function within the setTimeout gets called and you find that on the flame chart, we'll draw an arrow back to the code that ran the setTimeout, so you can see kind of why things happened. I think there's a lot more we could do there, but, yeah, we do try and draw the arrows to give you some idea if we can. Yeah. I mean, a big theme when you look at the main flame chart and you see lots of stuff going on is to ask, firstly, you know, I've talked about, can you spot functions that you could optimize or reduce how long they take to run? But it's also, do you even need to run that code at all? I think, Dan, there are plenty of cases like the one you just described where actually, especially now if you're targeting modern browsers, CSS has got a lot more powerful, particularly around layout. I think there are a lot of cases where you can actually get rid of JavaScript and lean on CSS a bunch more.

Jack Franklin  [00:45:42]:
Like, if you can optimize a function, great. If you don't ever need to run it, that's even better.

Charles Max Wood [00:45:46]:
I'm I'm kinda curious as people come into this and they start, you know, because we've kind of talked about some of the, specific things that are in the performance tools, but I'm kind of thinking through my head. Okay. What's the scenario, right, where somebody's gonna, you know, maybe people are regularly checking this as they build their apps. But I'm also thinking, you know, maybe somebody is looking at things and realizes that they have poor scoring on some of their core web vitals. Right? So what what scenarios are you looking at? Is it one of those? Is it something else where you're finding that people are typically reaching for this particular panel in the Chrome to dev tools? And then what how do you how do you pull the information out and know what to do with it? Because it's it's one thing to say, okay, all this information is in there. And I think we've covered a lot of that, but, you know, okay. I've got this information now. How do I know that it, yeah, it's layout thrashing, or how do I know that it's, you know, something else or, you know, maybe it's something really simple and it's just, you know, I I load in an image and my whole page shifts, and it takes a long time for it to grab it and pull it out.

Charles Max Wood [00:46:53]:
So, you know, it's it's, you know, it's impacting 2 or 3 scores.

Jack Franklin  [00:46:59]:
Yeah. So I think the most common use case is people who run their website through a Lighthouse report and see that something is scoring badly, or it will be people who are getting reports from their users that their website is loading slowly. That was often the motivation at my previous job: it would normally be a member of staff who was on a really rubbish connection and would moan that the site loaded very poorly. Okay. So I think that's mostly how people come into the tool, it's some issue, and, obviously, Core Web Vitals can impact search rankings. That tends to be the motivating factor for the majority of businesses who want to invest in this area. Yeah. In terms of understanding the information, that's really what the experimental performance insights panel was for; that was one of the goals of it.

Jack Franklin  [00:47:45]:
So when you when you use that panel, you get a right hand sidebar that shows, I think we call them insights, and it will highlight things that were particularly, problematic. Now we we didn't do a good job of prioritizing those and helping people understand which ones are most important, and that's something we need to improve as we kinda bring that functionality into the performance panel, but it would highlight problems. So, for example, a network request that is render blocking, as in until the browser is finished with that request, it can't continue laying out the page. Mhmm.

Jack Franklin  [00:48:14]:
We would highlight that as an issue, because if you could resolve it, the browser can get rendering earlier, and therefore your page will appear loaded to the user much more quickly. I think really a theme for us this year is trying to have actionable kind of insights we can provide people, to give them a helping hand to figure out what was going on.
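
If you want to spot those render-blocking requests from the page itself rather than from a trace, Chromium exposes a renderBlockingStatus field on resource timing entries; here is a quick console sketch, a web platform illustration rather than a DevTools feature from the episode.

```js
// Sketch: list resources the browser treated as render blocking.
// renderBlockingStatus is currently a Chromium-specific resource timing field.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.renderBlockingStatus === 'blocking') {
      console.log('Render-blocking:', entry.name, `${Math.round(entry.duration)}ms`);
    }
  }
}).observe({ type: 'resource', buffered: true });
```

The usual fixes are the familiar ones: mark scripts defer or async where execution order allows, and move non-critical stylesheets off the critical path.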

Dan Shappir [00:48:35]:
I think a key aspect, and probably the biggest challenge, around improving performance is attribution: understanding the root cause for why something behaves the way that it does. Like, okay, this page is loading slowly, or this blocks, you know, the page is unresponsive. Why is it unresponsive? What's actually running that's causing it to be unresponsive? What is the browser

Charles Max Wood [00:49:07]:
actually doing? That's why.

Dan Shappir [00:49:08]:
Yeah. Exactly. And I think, and it's a challenge. It's not easy to figure out the attribution. You know, I'm on the W3C web performance working group, and a lot of the discussions are about how to try to figure out, like, the reasons the browser does what it does, and then externalize this information, and also how to do it in a way that is itself performant

Charles Max Wood [00:49:42]:
Mhmm.

Dan Shappir [00:49:42]:
And also doesn't, you know, become a security or privacy issue. So there's a lot of problems around that. But part of it is that the browser is a sophisticated system, and, consequently, understanding attribution is not a trivial problem. You need to have a certain amount of understanding of how this thing works. So you can make it, you know, as simple as possible, but probably not simpler. Now, as you might expect, I also have the performance panel open while we're talking. And I just noticed something that I've never seen before, so I'm going to take the opportunity and ask Jack about it now. I noticed that when I'm hovering over the various strips, or the tracks as you call them, they now show, like, a pencil at the left side that I can click on, and then I get this weird view with eyes and check marks.

Dan Shappir [00:50:47]:
What is that?

Jack Franklin  [00:50:48]:
Yeah. So this UI will change in the next release of Chrome, which must be out very soon. But this was, or is, a UI to enable people to reorder and hide and show particular tracks. So when you record a performance trace, you get a bunch of these tracks, and some of them are really important, like, network is obviously an important one, the main thread activity is important, but you will also get others, say GPU activity, rasterization, any workers that were on the page, and so on and so forth. Depending on your use case, those may or may not be useful to you. And one of the challenges of the performance panel is it's embedded within dev tools, which most people don't have open full screen, because it's within Chrome. So vertical space tends to be at a bit of a premium for us.

Jack Franklin  [00:51:34]:
So this is, kind of, us playing around with how we can enable people to hide and show information. Yeah. But, yeah, I'm not thrilled with the UI that landed; it will change. It was kind of a first pass on it, and we've tweaked a bunch of stuff for Chrome 126, which should be out soon.

Dan Shappir [00:51:52]:
So, for example, there's the animations track. So you're saying, if I don't actually have animations on the page, or I'm not currently testing the performance of animations, maybe I'd like to hide it just to save, you know, some vertical space.

Jack Franklin  [00:52:10]:
Yeah. Yeah. I I think what we need to do is is potentially could we do some of that automatically for you? Because also right now, I think if you wanted to hide all the non important tracks, you might have to click 10 or 15 of them to to get rid of. So I wonder if there might be a world where we can invert that and you can tell us the ones you want to keep around. I I don't quite know, but, yeah, what you're seeing is our sort of first steps into the water of allowing, I think, users to customize the the timeline they see better to suit that particular use case. Because depending on the context that you're what what the problem is you're trying to debug, there are certain things you may or may not care about, and we can't always know that, for sure. But we're we're trying to get better at at figuring out what you're trying to debug and and show you the right level of of information. But kinda going back to what you're talking about around attribution, yeah, that is the challenge, and what we're trying to figure out is there are sometimes where we can very confidently attribute a particular issue to a particular root cause.

Jack Franklin  [00:53:07]:
So for example, with layout shifts, a very common root cause is that we load an image into the page, but the developer didn't set the width and height explicitly on the image. The browser kind of guesses or reserves a certain amount of space. Then when the image loads in, it knows the correct size of the image and resizes it, which can cause the page to shift. So we can quite often very easily point to that, but for other ones we're sometimes sort of guessing. We're trying to figure out how we can say to users, this might be a problem that you should look at, but it also might not be, because we can't be 100% sure, and that's a tougher kind of story to tell and guide people on.
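
As a rough illustration of the fix Jack is describing (the file name and dimensions below are made up), giving an image explicit dimensions lets the browser reserve the right amount of space before the bytes arrive, instead of resizing the box once the image loads and shifting the page:

```js
// Hypothetical example: reserving space for an image up front to avoid a layout shift.
const img = document.createElement('img');
img.src = '/images/hero.jpg'; // assumed path, for illustration only
img.width = 1200;             // intrinsic width  (equivalent to width="1200" in markup)
img.height = 600;             // intrinsic height (equivalent to height="600" in markup)
img.alt = 'Hero image';
document.body.appendChild(img);
```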

Dan Shappir [00:53:43]:
Now while we still have a bit of time left, so we were talking about the top part, which is the mini map. We were talking about the central part, which actually contains a flame chart. There's also a bottom part, which has the summary, bottom up, call tree, and event log. Can you briefly describe what that section is?

Jack Franklin  [00:54:04]:
Yeah. I'll try. So the summary will show you one of two things depending on what you've selected. If you've selected a time range in the panel, it will show you some stats about how much of that time range is spent doing various activities: how much was spent on things like browser painting or rendering versus scripting, effectively your JavaScript. If you select an actual individual event, say a network request, it will show you some extra information about the particular event you've selected. Then the next three, Bottom-Up, Call Tree, and Event Log, are all about trying to get a better understanding of the JavaScript in the flame chart. Bottom-Up, the idea there is that you get a view of all the events in the flame chart. You can filter them by name, and you can also sort them by how long they took.

Jack Franklin  [00:54:51]:
The idea there being that it can sometimes be an easier way to dig into where your page spent most of its time executing. And Call Tree and Event Log are similar. Event Log I never use, and I don't remember it well. Call Tree is very similar to Bottom-Up, but whereas Bottom-Up starts literally from the bottom and goes upwards, starting with the sort of leaf nodes of the tree if you think of the flame chart, the things at the bottom that took the most time, I think Call Tree might go the other way. But, again, I'd have to double check. Yes.

Dan Shappir [00:55:27]:
It does. Because, you know, you might say, why do I need this information? It's kind of a duplication of the information in the flame chart itself. The main benefit, aside from filtering, which you mentioned, is that it also aggregates. Take the example I gave before, where function a calls function b twice. In the flame chart, you see each instance of the execution of function b separately. But the bottom-up view starts at b and aggregates all the time spent inside of b. So, like you mentioned, a certain function having a significant impact on the execution or loading time of a page might be because that function ran once but ran for a very long time, or because it ran really quickly but was invoked a million times. In the aggregated view, in the case of that function being invoked a million times, it'll show the total amount of time that was spent inside that function across all those individual invocations.
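
To make that aggregation point concrete, here's a made-up sketch (not from the episode) of the two call patterns Dan describes; in the flame chart each call appears separately, while the Bottom-Up view rolls the time up per function:

```js
// Hypothetical illustration: two very different call patterns with comparable total cost.
function slowOnce() {
  // Runs once, but for a long time: one wide bar in the flame chart.
  let sum = 0;
  for (let i = 0; i < 1e8; i++) sum += i;
  return sum;
}

function fastButHot(x) {
  // Cheap per call, but invoked a million times: a million tiny bars in the flame chart.
  return Math.sqrt(x) * 2;
}

slowOnce();
for (let i = 0; i < 1_000_000; i++) fastButHot(i);
// Bottom-Up aggregates all those tiny invocations, so fastButHot's total cost
// becomes visible even though each individual call is negligible.
```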

Jack Franklin  [00:56:45]:
Yes. Thank you. Yeah. That's a good additional point. I didn't

Charles Max Wood [00:56:47]:
I was looking at this too. I opened it up and just ran it on StreamYard, which is what we're using to record this. And it also showed, like, the garbage collection and some of the other system calls, as well as things like paint and animation frame fired. It really does give you a drill-down. Right? If you're doing something that is heavily memory intensive and is gonna trigger garbage collection on a regular basis, you know, maybe that's gonna affect your performance one way or another. Right? Because I would assume that most apps maybe don't have that problem, but I've seen people do some weird stuff

Jack Franklin  [00:57:28]:
on the web. Yeah. We actually hit that problem.

Charles Max Wood [00:57:31]:
Give you all kinds of information.

Jack Franklin  [00:57:33]:
Yeah. So because DevTools is a web app, we can debug DevTools using DevTools, which means we can profile the performance panel's performance by using the performance panel in another DevTools. Profiler Inception. It gets a little bit confusing. But in the performance panel, the main sort of UI, all these tracks, is drawn on an HTML canvas. And that means every time the user scrolls or pans or zooms in or out, we have to redraw the whole canvas to represent what they've just done. And so that is a lot of function calls happening in a very short space of time to lay that all out.

Jack Franklin  [00:58:08]:
And we had a situation where we were passing to

Charles Max Wood [00:58:08]:
a function that got called, I

Jack Franklin  [00:58:09]:
think, for every single event on the timeline. So potentially hundreds of thousands of times, if not more. We called the function with an object, you know, so we could destructure the arguments from that object.

Dan Shappir [00:58:26]:
And you could

Jack Franklin  [00:58:27]:
Now every time you do that, that object that you pass in has to be garbage collected at the end of that function's life cycle.

Charles Max Wood [00:58:33]:
Right.

Jack Franklin  [00:58:33]:
So 99% of the time, for 99% of developers probably, that's never a problem at all. It's not a concern.

Dan Shappir [00:58:39]:
Yeah.

Jack Franklin  [00:58:39]:
But when you call that function a million times in sort of a split second, that's suddenly a million objects that have to be garbage collected. And so what we had to do was change that function. Rather than taking an object with, say, five different keys and values, we pass each of those individually as an argument to the function. It sacrifices a small amount of readability in that case, but it massively improves the performance.

Dan Shappir [00:59:01]:
And,

Jack Franklin  [00:59:02]:
you know, when you're talking about redrawing a canvas as the user is zooming or scrolling, you notice any slowdown in performance. So we have to be very hot on that sort of thing.
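
A hypothetical sketch of the kind of change Jack describes (the function name and parameters are invented, not DevTools source): in a hot path that runs for every event on the timeline, taking an options object means allocating, and later garbage collecting, one throwaway object per call.

```js
// Before: convenient to read, but allocates an object on every call.
function drawEventBefore({ x, y, width, color }) {
  // ... draw the event rectangle onto the canvas ...
}

// After: the same values passed as individual arguments, so nothing extra is
// allocated on each of the hundreds of thousands of calls per redraw.
function drawEventAfter(x, y, width, color) {
  // ... draw the event rectangle onto the canvas ...
}

// Call sites change from drawEventBefore({ x, y, width, color })
// to drawEventAfter(x, y, width, color).
```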

Dan Shappir [00:59:13]:
There's another lesson here, and that lesson is that the performance panel is an amazing profiling tool, and you should never start to optimize stuff, especially micro-optimizations, before you profile and identify the problem areas. Because, like Jack just said, you sacrifice a little bit of readability. Well, you shouldn't sacrifice readability unless it makes a measurable positive impact. You were able to prove that it does, and that it's worth that small readability sacrifice in that particular case. So that's a really important point to make in that context. By the way, another important point, kind of related to what you said before, Chuck, about whether or not that kind of activity is a problem in most web applications: you can also use all these tools, including the performance panel, to profile Node-based applications.

Dan Shappir [01:00:28]:
You can attach to Node and profile Node. Let's say you've got a Node service that's long running. It's not some sort of Lambda or something; it's an actual long-running Node process that's serving a ton of users. It can definitely have issues around memory allocations and stuff like that. So it's not just extreme client-side situations like DevTools itself. It's, you know, your average Node service that happens to be servicing 10,000 users.
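
For anyone who wants to try what Dan describes, here's a minimal sketch (the file name server.js is assumed): start a long-running Node process with the inspector enabled, then open chrome://inspect in Chrome and attach DevTools to profile it.

```js
// server.js - a tiny long-running Node service you could attach DevTools to.
// Run it with:  node --inspect server.js
// then open chrome://inspect in Chrome and click "inspect" next to the process.
const http = require('http');

const server = http.createServer((req, res) => {
  // Any work done here will show up when you record a profile from DevTools.
  res.end('ok\n');
});

server.listen(3000, () => console.log('listening on :3000'));
```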

Charles Max Wood [01:01:08]:
Yep. Makes sense.

Dan Shappir [01:01:10]:
I actually used it for exactly that. A couple of months ago, I made a contribution to the Prometheus client for Node around just that: identifying a problem, kind of similar to what you described, Jack, of allocating a lot of objects, and seeing that it could be done more efficiently. In that particular case, they were building a huge string by concatenating strings into it, which in most cases is fine. But in that case, it was literally millions of strings. So using a join instead turned out to be significantly more performant in that particular example.
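
A simplified illustration of the kind of change Dan describes (this is not the actual Prometheus client code): when the number of pieces runs into the millions, collecting them and joining once tends to beat repeated concatenation.

```js
// Hypothetical example: building one huge string out of a million small ones.
const parts = [];
for (let i = 0; i < 1_000_000; i++) {
  parts.push(`metric_${i} 1\n`);
}

// Repeated concatenation: fine for a handful of strings, but at this scale the
// intermediate work the engine has to do can get expensive.
let concatenated = '';
for (const part of parts) {
  concatenated += part;
}

// Joining once lets the engine build the final string in a single pass.
const joined = parts.join('');
```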

Jack Franklin  [01:01:58]:
Yeah. We've had loads of instances like this. There's a blog post on the developer tools blog. We recently did a big rewrite of some of the internals of the performance panel,

Dan Shappir [01:02:09]:
then

Jack Franklin  [01:02:09]:
we were profiling it. We have some test traces that we can load into the performance panel that are absolutely massive, like, far bigger than anyone's average website would generate. And for one of those, I think we were able to get the time the performance panel took to process and load that trace down from something like 12 seconds to 2 seconds, just by recording with the performance panel and then going through and looking for these problem areas. But, you know, we couldn't have done that optimization without looking at the performance panel, because obviously we'd written all the code that caused it to be slow, and you wouldn't look at any of those changes and think, oh, that's gonna be a problem here, that is going to be slow for us. Sometimes you can, but most of the time you do need to wait and actually profile it.

Dan Shappir [01:02:52]:
Yeah. Never trust your intuition around performance.

Jack Franklin  [01:02:56]:
It's always wrong. Yeah. And that thing about passing objects into functions and rewriting those as arguments, we don't follow that everywhere in the panel, because it's only a problem in the code that redraws the canvas. That is our real hot path; it needs to be as optimized as we can make it, and we may have to sacrifice readability in order to do that. Whereas, say, the summary bit at the bottom that shows you information doesn't rerender every time the user scrolls. So in that part of the code, we probably would still pass an object in, because we know that's not a path that's gonna cause us problems. So, yeah, the advice to always profile and not prematurely optimize is very accurate, because it will always surprise you.

Jack Franklin  [01:03:38]:
And browsers are smart. Sometimes they can optimize things that you might think will be slow because they recognize what you're doing. Sometimes something that you intuitively think won't be slow will be slow for some very niche reason. So, yeah, it's very important to use the tools available to figure that out before you go diving in.

Charles Max Wood [01:03:56]:
So we're kind of at the end of our time. Are there places where people can go and dive deeper into this stuff?

Jack Franklin  [01:04:06]:
Yeah. So the performance panel is very heavily documented in the DevTools documentation online. In terms of what we're planning, there are some blog posts on the Chrome developers blog that have more information on this, and I can send links to all the things that I think are useful. That's probably the best place to start. In terms of keeping up to date with what's new in DevTools, I think following the Chrome developers YouTube account, or Twitter and all that stuff, is probably best. Jecelyn, who's the DevTools DevRel, is amazing. She does regular videos and will highlight new features across the panel, including in any of the performance tooling.

Jack Franklin  [01:04:48]:
So that's

Dan Shappir [01:04:48]:
By the way, the What's New section, which is inside DevTools itself, it's in the drawer

Jack Franklin  [01:04:54]:
Yeah.

Dan Shappir [01:04:55]:
part, actually contains the video showing what's new for the latest version. And from there, you can get to the What's New for previous versions as well. So it's really worthwhile to review these videos. They're short. They're sweet. They're very informative. Highly recommended.

Jack Franklin  [01:05:17]:
Awesome. Agreed.

Dan Shappir [01:05:19]:
There is one thing before we let you go, before we finish this episode. There's one request, which I think I submitted, like, years ago via your, you know, feature request mechanism. Obviously, you can already record multiple recordings inside the performance panel and, like, flip between them. What I'm really missing is the ability to compare recordings. So for example, if I made a change and it seems to have impacted performance, I would love to be able to, like, subtract one view from the other and only see the differences. That would be awesome.

Jack Franklin  [01:06:08]:
Yeah. You're not the first person to ask for that, and it is certainly floating around the backlog. It's hard, because if you have the same website and you record two traces without changing anything, then due to the variability of the Internet and computers and whatever else your computer was doing at the time, those traces are gonna be slightly different. So I think the challenge there is figuring out how we represent what changed, and how we figure out what changed because of variability and what changed because you did something. We can see the appeal and the reason it would be very useful. The practicalities of implementing it are potentially slightly challenging, along with how we manage the real estate on the screen. If you want to compare these two traces, the panel is already crammed full of stuff as it is.

Jack Franklin  [01:07:01]:
But, yeah, it's floating around as an idea we want to explore for sure.

Dan Shappir [01:07:05]:
If it were easy, then everybody would be doing it. I will mention that the awesome WebPageTest now has an interesting ability to compare two recordings, which is also not perfect, but it is pretty good.

Charles Max Wood [01:07:25]:
I was just gonna say, if you estimate it in your estimation meetings as a medium, and then you come back after you start working on it and go, actually, this is an extra large, I didn't realize, then maybe we'll get it sooner.

Dan Shappir [01:07:39]:
In other words, lie.

Jack Franklin  [01:07:42]:
I'll I'll give it a go.

Charles Max Wood [01:07:44]:
Yep. Alright. Well, let's go ahead and do some picks and wrap this up. Let's get the dad jokes going first. Steve, do you have

Jack Franklin  [01:07:55]:
some picks

Dan Shappir [01:07:56]:
for us?

Steve Edwards [01:07:56]:
Well, I can look at that one of 2 ways. It's either let's hurry up and get them out of the way, or let's get

Charles Max Wood [01:08:01]:
to the best part. I did not say that.

Steve Edwards [01:08:03]:
Or the best part first. So I'll cut you a little slack there on that one, Chuck. Jack, this is always the high point of any of our episodes according to statistics and comment forms. But, sorry. I gotta

Charles Max Wood [01:08:18]:
lies and statistics.

Steve Edwards [01:08:20]:
Lies, damned lies, and statistics. Right? Just a sec. I gotta get my branding thing here. Okay. So the other day, my son, who's, I won't say how old he is, he swallowed a bunch of coins. And so I had to take him to the ER. And after about an hour, I asked the doctor. I said, how's he doing? He said, no change yet.

Steve Edwards [01:08:46]:
Almost 3 feet. There we go. Okay. So Who's

Charles Max Wood [01:08:49]:
just that funny?

Steve Edwards [01:08:50]:
I decided that, I'm gonna make some money off my dad jokes, you know, since I've been doing it for free so long. I have to write them all every day. It's a lot of stress. So I'm going to put out a cologne for men who like dad jokes. I'm gonna call it

Dan Shappir [01:09:09]:
pungent. That's nice. Right.

Steve Edwards [01:09:12]:
Now this one requires a little bit of thinking, but, you know, I do dad jokes for the smart person. So my nerdy friend just got a PhD in the history of palindromes. Now palindromes, for those who might not know off the top of your head, are words that are spelled the same forward and backward. Now we call him Doctor Awkward.

Dan Shappir [01:09:38]:
Yeah. You need to see it to get it, but it's it's great.

Steve Edwards [01:09:41]:
Right. Very good. So those are my dad jokes for the week.

Charles Max Wood [01:09:46]:
Alright. Dan, what are your picks?

Dan Shappir [01:09:49]:
Okay. To be honest, I don't have that much in the way of picks. The real pick I have is that I was interviewed on another podcast. I was a guest on the Contagious Code podcast by Tejas, a friend of our show. He's doing his own podcasting now, and he's had some awesome guests like, you know, Guillermo Rauch and others. If you like tech podcasts, and I mean, you're here, so you probably do, that's another one to look into.

Dan Shappir [01:10:25]:
So I was a guest. We recorded actually a pretty lengthy one, I think it was over an hour and a half, because we really got into my tech history and my tech journey and, you know, things that I've done and how I got into the whole performance thing. And, well, I've had a long career, as you can probably tell by looking at me, for those of you who are actually watching the video.

Charles Max Wood [01:10:50]:
Beautiful, by the way.

Dan Shappir [01:10:51]:
Yes. I am, for sure. So that's one pick. The other pick is my new employer, Sisense. I've been there for about a week and a half. They welcomed me wonderfully. I'm enjoying myself. Funny thing, though.

Dan Shappir [01:11:07]:
So I was getting into the code base after just three days being there, and I thought, you know, what could be a better way to start than to do a small pull request on the code? So I started looking at something to fix, and ended up changing 20 files and what some people defined as core functionality

Charles Max Wood [01:11:34]:
Oh, no.

Dan Shappir [01:11:35]:
Of the product and but, you know, it got it got accepted. It got merged. So, you know You're

Steve Edwards [01:11:42]:
gonna go. Go big.

Dan Shappir [01:11:44]:
Yeah. I did something right.

Charles Max Wood [01:11:46]:
Whenever I try that, it always crashes the signal.

Dan Shappir [01:11:51]:
You know, but at the very least, you get noticed. And my final pick, kind of an anti-pick, is that I watched the movie Atlas on Netflix. And the only good thing I can say about it is that it's more watchable than Rebel Moon.

Steve Edwards [01:12:13]:
What is Atlas about? I haven't seen it. It's

Dan Shappir [01:12:16]:
a sci-fi movie with Jennifer Lopez. No, she's actually fine. It's the story that's kind of stupid, and the action, the final boss fight, is not much of a fight in my opinion. So it's kinda meh, which, like I said, is much, much better than Rebel Moon. I could not finish watching the first one, so obviously I didn't even try watching the second one. Alright.

Dan Shappir [01:12:57]:
Yeah. Anyway, those are my picks.

Charles Max Wood [01:13:00]:
Good deal. I'm gonna throw out a board game pick here real quick. Of course, I should have thought ahead about what I wanna pick. I think I'm gonna do another Legendary expansion. One of the ones that I've really enjoyed playing is the S.H.I.E.L.D. expansion. And I think a lot of it was based around, if you watched it, the Agents of S.H.I.E.L.D. TV show.

Dan Shappir [01:13:45]:
Oh, yeah. There was such a thing.

Charles Max Wood [01:13:47]:
Yeah. It pulled in a lot of stuff there. One of the optional things that you can do is buy a S.H.I.E.L.D. agent as one of your actions, if you have enough to recruit them. And this one gives you special characters, the characters from the TV show. But, anyway, it's a fun expansion to play with. Nick Fury is in the main set of heroes that you can play with from the main game. And so, I'm trying to find it on BoardGameGeek. Give me a second here.

Charles Max Wood [01:14:33]:
Oh, here we go. So, the board game weight on this one is 2.75. I think the base game's like 2.35. So, you know, it makes it a little more complicated, but it's not terribly complicated. And it's a fun expansion to play with. So, yeah, anyway, I'm gonna pick that. And then I have a couple of other picks.

Charles Max Wood [01:14:55]:
One of them is sitting on my desk. My 8 year old made me a card. So it says I love you dad. And, anyway, I walked in this morning. It was sitting on my desk. So

Dan Shappir [01:15:08]:
Don't you just love it when they're still young and do things like that for you?

Charles Max Wood [01:15:13]:
Right. My 17 year old's like, what? You want something? Right? So, anyway, but yeah, that makes me smile for sure. And then, yeah, I've been pretty involved in the political scene here in Utah. And I just wanna encourage folks, if you're out there and you care about what's going on in your society, and I'm getting ready to launch a podcast about this stuff, I really feel like a lot of the things that make the difference in the long term are what you're doing at home and then what you're doing in local politics. A lot of the folks that are running for Congress or Senate here in Utah, or governor, they all kinda came out of local politics and then, you know, kind of moved up a level and then another level.

Charles Max Wood [01:16:01]:
Right? And so if you're not happy with your politicians, a lot of these folks kinda start at a smaller level and move their way up. And so you've gotta support the right people at the local level, because many of them will wind up being your people at the higher, or more broad, levels, I guess. Or I don't want to say higher levels, because in a lot of ways, the further away from you they are, the less impact they have. It depends on the issue, obviously. But, you know, you see the dysfunction at the federal level here in the US. Right? And you're just like, you know, these knuckleheads that wind up in Congress, some of them really good and some of them really not. And, you know, they came up the same way. Right? And so go support your local folks that believe what you believe and value the things you value.

Dan Shappir [01:16:55]:
I do have a comment about this, if I may. First of all, I totally agree with everything you said. And I think, you know, being involved and doing things for causes that are important to you is definitely a good thing. But there's a certain responsibility, I would say, that goes with it. You do need to do a bit of research, do some thinking and some analysis, and verify that the causes that you support are actually, you know, correct and proper and righteous causes, and that you're not just buying into some demagoguery because it seems cool or it's the thing du jour or whatever. That you actually, you know, properly make up your mind and not just go based on what somebody has said.

Dan Shappir [01:17:56]:
You know, make up your mind whichever way you think is best, according to your own principles. But do the thinking, do the research.

Charles Max Wood [01:18:07]:
I absolutely agree. Here in Utah, we have a caucus convention system. So you have the local caucuses, you elect delegates, and the delegates go and do a lot of that kind of thing. But that doesn't mean you can't be involved if you're not a delegate. You should be doing the same kinds of research and having the same kinds of conversations with these folks as the delegates: okay, this is an issue, where are you on it? This is a principle that I hold to, how do you feel about it? And then talk to other people and, you know, inform yourself.

Charles Max Wood [01:18:43]:
Okay, they said this thing. Now does it actually bear out in the way that they voted, or the way they've done their job as a representative in your local government? But, yeah, I entirely agree. And, you know, the longer you do it, the better you get at sniffing out who really is where you want them to be and who isn't. There are also terrific organizations. And it's the same kind of thing where, as Dan said, sometimes you get somebody, you know, the person at the top of the organization, and it turns out that they were on an ego trip. Right? They got a bunch of people who believe like you believe to back them up, and then it turns out that there's some issue. You know? There's something rotten at the core of it.

Charles Max Wood [01:19:28]:
And it's not because your principles were wrong. It was because you backed the wrong person, or because, you know, there was no way of knowing that this person was a problem. And so, yeah, you definitely have to do your homework, because you want to get in and push where it's gonna make a difference and make sure that you're backing the right things. And I don't bat a thousand on this stuff. Right? I don't always get it right. But more often than not these days, I'm able to figure out, oh, you know, there's something here that's making me uncomfortable, sometimes it's just a gut feel, and so I'll put my effort elsewhere. And then it turns out that a lot of times that gut feel is right. Anyway, I'm starting a podcast to this effect.

Charles Max Wood [01:20:12]:
It's probably gonna be focused mostly on the US. That doesn't mean it can't apply to other countries, but it's gonna be called America's Destiny. And it's essentially this idea: okay, as these issues come up, how do you address them at home with your kids? And then how do you address them with your neighbors? And then how do you get involved and understand how these things are gonna affect you at the local level? And then how do you find and reason through, okay, this is a good person to support at the local level, and how do I encourage them, when the right time comes along, to go and make a larger impact in a different arena? So, anyway, I get myself in trouble sometimes for speaking up, but that's the way it goes sometimes. Anyway, those are my picks.

Charles Max Wood [01:21:04]:
Jack, what are your picks?

Jack Franklin  [01:21:06]:
Yeah. I'll throw in a couple quickly. To briefly pull us back to the performance panel, we shipped something in Chrome recently called selector stats, which, for a Recalculate Styles event, lets you dive into where that time was spent and which CSS selectors are causing it. The reason I wanna shout this out is because we didn't build it. It was built by the Microsoft folks who work on the Edge DevTools team, and they upstreamed it into Chromium, so now it's available to Chrome and Edge users, which I think is just really nice: two big companies able to collaborate and share features. So I thought that was really nice. I'll follow the board game theme as well. The latest addition to our board game collection is a co-op game called Sky Team, which is, I think, fairly new.

Jack Franklin  [01:21:51]:
I think I found it via a YouTube channel that does board game reviews. It's a co-op, two-player game where you're piloting a plane and you have to roll dice and make strategic decisions to land the plane successfully. What's really quite cool about it is that once you've rolled these dice, which you then have to use in certain slots on the board to achieve things, you and the person you're collaborating with aren't allowed to talk to each other, and the dice are hidden. So you've rolled four dice with certain numbers, the other player has rolled four dice and got four numbers, and in silence, you have to decide, one at a time, where to put these dice to achieve things. But, you know, it's classic: if you put two dice here that sum up to a number greater than 10, that's bad.

Jack Franklin  [01:22:29]:
But if you put two dice here that sum to less than 3, that's also bad. So it's kind of fun collaborating, but also with a bit of silence. It's enjoyable and normally ends up with you looking at each other aghast with horror as your partner puts the wrong dice in the wrong box, which you never would have done. So that's been fun. So, yeah, it's called Sky Team.

Charles Max Wood [01:22:50]:
Yeah. It reminds me a little bit of some of the other sort of blind collaboration games like Hanabi, which is where you have your cards facing out, so everybody else can see what cards you have, but you're the one that has to play them. And so you're giving hints, you know, you have some means of communication, but not a ton. So I just looked it up on BoardGameGeek. It has a weight of 2.02, and I keep telling people that's kind of the level for your average casual gamer, a game that has enough complexity to make it fun and interesting. It says it runs in about 15 minutes. Does that sound about right?

Jack Franklin  [01:23:26]:
Yeah. I think the first game took a bit longer as we figured it out, but, yeah, it's pretty snappy, which is good, because we have a new child in the house. So 15, 20 minutes is about the max time we get to play a game. So, yeah, that's about right.

Charles Max Wood [01:23:39]:
Yeah. And then it's rated for kids aged 12 plus. The community says that 10 plus can play it. So, you know, this is something that maybe I coach my 8 year old on a little bit, but my other kids could probably play fine. I'm very into board games. I really enjoy them. So thanks for that. I'll check it out.

Jack Franklin  [01:23:59]:
No worries.

Charles Max Wood [01:24:00]:
Alright, Jack. If people wanna find you on the Internet, where do they find you?

Jack Franklin  [01:24:04]:
Yeah. So, Twitter is jack_franklin. Other than that, it's jackfranklin.co.uk for links to various other websites and blog posts and all that kind of stuff.

Charles Max Wood [01:24:15]:
Awesome. Alright. Well, we're wrapped up here. Yeah. Thanks for coming. Till next time, folks. Max out.