Next-Level Web Performance with Patrick Meenan - JSJ 608

Patrick Meenan works on Chrome at Google. They explore the latest techniques in web performance and optimization, dive deep into the world of asset compression and delivery optimization, and discuss the challenges and considerations around bundling, caching, delta updates, and more!

Special Guest: Patrick Meenan

Transcript

 

CHARLES: Hey folks, welcome back to another episode of JavaScript Jabber. This week on our panel we have Dan Shappir.

DAN: Hey, coming to you from Israel at war. 

CHARLES: Charles Max Wood from Top End Devs. I've got some exciting stuff coming through right after Thanksgiving, so stay tuned. We have a special guest this week, and that is Patrick Meenan. Patrick, do you want to introduce yourself, let people know who you are and why you're famous?

PATRICK: Sure. I don't know if I'd go so far as famous, but I've spent a fair amount of time working on Chrome in particular, and on web performance, web loading, that kind of stuff in general. Most likely people know me for creating webpagetest.org, sort of a diagnostics tool that gets used quite a bit in the performance world. But I spend my day job trying to make the web faster.

 

Hey folks, this is Charles Max Wood. I've been talking to a bunch of people that want to update their resume and find a better job. And I figured, well, why not just share my resume? So if you go to topendevs.com slash resume, enter your name and email address, then you'll get a copy of the resume that I use, that I've used through freelancing and through most of my career, as I've refined it and tweaked it to get me the jobs that I want. Like I said, topendevs.com slash resume will get you that, and you can just use the formatting. It comes in Word and Pages formats, and you can fill it in from there.

 

DAN: I have to pause us here, just to say that I really, well, I wouldn't go as far as to say worship, but I very greatly appreciate what you've done for the web platform with Web PageTest. It's an amazing tool, one that if anybody cares even a bit about performance should be familiar with and using. And it's made such a ton of difference. And it's kind of part of the platform now, what with it being integrated with HTTP Archive. So it's really an amazing contribution to the web from my perspective.

PATRICK: Awesome, thanks. I mean, it's a lot of fun to work on too. So that's the main reason I spent so much time with it. 

CHARLES: Cool. 

DAN: Do you wanna tell the people a little bit about what it is? Sorry, Chuck for... 

CHARLES: I was gonna ask what it was, so go ahead.

PATRICK: Oh, sure. Yeah, I mean, WebPageTest, I guess in its most basic form, is a website you go visit. You give it a URL for a page you want to test, you get to pick a location and the device that you'd like to run the test from, and it'll load the page for you, gather all sorts of diagnostic information, and record video of the page loading. And then I guess the sort of superpower is when it reports the results to you. It gives you a filmstrip of the page loading at like 100-millisecond intervals, synchronized with the network waterfall of the resources loading, with JS parse and eval times in the waterfall, and all sorts of details to help you diagnose loading performance and make things faster. And we even use it a fair bit on the browser side of things to validate largest contentful paint and some of the newer metrics that have been coming out. We usually validate them against something like WebPageTest with the filmstrip to make sure that the times we're recording elements painting are actually the times elements painted on the screen.

DAN: So somebody who's interested in web performance, but maybe not an expert in it, might be asking at this point: what's the benefit over just using Chrome DevTools, specifically the network tab inside Chrome DevTools? Why do I need an external tool that seems to be doing something almost identical, as it were?

PATRICK: Yeah, I guess the two big benefits of running it through WebPageTest: one is the physical location. So, running on a real device in a real place, not in your house on your network. If you want to do a test from another country or something like that, or on a phone. But the real performance reason, compared to, say, the traffic shaping in DevTools, where you emulate a mobile connection: one of the things WebPageTest has always done is try to be as accurate as possible, and it uses packet-level traffic shaping that makes it look like an actual 3G or cable connection, including things like TCP slow start; everything will behave properly. So you can actually test CDN configs and things like that, whereas doing it in DevTools is a little bit of a stretch. You can do it locally on a Mac if you use the connection emulation stuff that the Mac provides, but it's sort of an extra level of fidelity beyond what you get in DevTools or Lighthouse.

DAN: Yeah, I have to totally agree. I mean, I found the network waterfalls that I got from WebPageTest to be much more accurate and reflective of what actual users from the various locations, on various devices, actually experience, compared to what I see in Chrome DevTools or even in Lighthouse. So I greatly appreciate the effort that you put in there, like you said, with actually emulating the different types of networks, and obviously running from actual physical locations around the world. I mean, if you've got users coming to your website from, say, Southeast Asia, then you really should be testing what people in Southeast Asia are experiencing, and not just trying to guesstimate what it feels like.

PATRICK: Yeah, I mean, the real fun ones are behind the Great Firewall in China, for example, where you can actually run tests from inside of China and see if you're including, say, a Google third-party tag or something that will completely destroy your performance, that kind of thing. But CDN configs, all of that kind of stuff, is useful to test from real physical locations.

DAN: And as I recall, to be honest, I've not played with it a whole lot, but I recall that in recent years, or even the last year, the tool added capabilities like the ability to modify the JavaScript or the download order or stuff like that, to run all sorts of experiments without actually physically changing the code or configuration of the website, right?

PATRICK: Yep. So it's always had the ability to override the host, and you can point it at your own backend. So, like, if you want to test cnn.com but rewrite it, you can re-host the HTML somewhere else and it'll load it as if it was cnn.com. That was, in the last, I want to say, two years, expanded to give you a pre-baked set of things it will do automatically for you, where it can try optimizing the order of your JS and CSS, it can try removing things for you. I think one of them will even apply fetchpriority automatically to your LCP image. So you can do sort of what-if experiments and see what it would look like if those things were done, without having to do the actual dev work.

DAN: Yeah, this could be really valuable at times. I remember, on one website that I was working on, we were certain that inlining the CSS would be beneficial, and so we did, and it turned out to be actually detrimental. It doesn't really matter why, but you can't argue with the actual numbers. And being able to run an experiment like that could have saved us work.

PATRICK: Yeah, and the other thing it gets used a lot for is the blocking capability, where you can block specific requests from being fired. A lot of the time, the first reaction is to blame third parties for performance issues. So one of the common use cases is to go and block your ads, or block your third parties, or block one specific third party, and run the test with and without. Then you have a side-by-side impact and can say, okay, this is how much faster it would be without that third party, before going and talking to that third party about the issue.

DAN: I think that Chrome DevTools is actually gaining some of these capabilities. I think I've seen features like that in recent versions. 

PATRICK: Yep. 

DAN: Well, like I said, I'm really appreciative of this tool that you've built. Are you still actively working on it? 

PATRICK: A little bit. So I do help run the HTTP Archive as well, and as part of that, we run WebPageTest at scale. It's probably by far the biggest-scale deployment; we run probably upwards of 15,000 VMs running crawls every month. So yeah, I still contribute fixes and things, mostly to the agent that does the testing. Catchpoint took ownership and responsibility for running the main WebPageTest website and building it into their suite of products.

DAN: Maybe...

CHARLES: So it's not a Google thing? It's a...

PATRICK: No, I mean, it was and still is an open source thing. I had originally built it when I was at AOL, pre-Google even. And so I had always run it on the side, until probably about three years ago, when Catchpoint took over, and it's now running it.

DAN: So maybe, you know, we did have Rick Viscomi on the show a while back to talk about CrUX and stuff like that. But I think it might be beneficial to explain again what the HTTP Archive actually is.

PATRICK: Sure. Yeah, so the HTTP Archive started out as part of the Internet Archive, I want to say 2012, give or take, with Steve Souders wanting to keep an archive of how web pages were built, rather than just what they looked like. And so it started out with a fairly small number of pages that we would test, like the top thousand or something like that, and it would store the network waterfalls. It got its name, HTTP Archive, because it would store the HAR files of the HTTP loading of all of the pages that we were testing, as well as the videos and that kind of thing. When Rick, and Google to some extent, took over running it, it expanded a lot. And so now, I'll say probably over the last two or three years, with the Chrome User Experience Report, the Core Web Vitals, all of that kind of thing, we've expanded the HTTP Archive to crawl all of the origins that CrUX has in the data set. So, something on the order of 23 million origins, plus one page deep of a crawl within each of those origins, so we don't just hit landing pages. We effectively load all of those in WebPageTest once a month, and we store the results in BigQuery as JSON: all of the request data, response data, headers, performance data, payloads. So we can run analysis on it, or anyone can run analysis on it. It's an open data set on BigQuery, although it can be fairly expensive to query if you're not careful.

DAN: So maybe it's also worthwhile to mention a bit of info about CrUX. Basically, whenever you use Chrome to visit a website, Chrome sends performance information, unless you opted out, anonymous of course, about the experience that you, the user, had, to the Chrome User Experience Report database, where that information is stored. It can also be queried, and Google even uses it as a ranking signal for the search engine. In addition to that, it kind of synchronizes into the sister database, which is the HTTP Archive, where you take the same websites and then run various synthetic tests on them, one of them being, like you said, WebPageTest. So for each one of these websites, you have both the field data that's gathered by the Chrome browsers and synthetic data that's collected by various tests, such as WebPageTest. Correct so far?

PATRICK: Yep, yeah. And part of what we also collect: we run Wappalyzer detection as part of the HTTP Archive crawl to extract what technologies we think the page uses. That powers a lot of the Core Web Vitals technology reports. So if you want to see, say, React versus Vue LCP pass rates, that's one of the public dashboards that gets shared a fair bit. All of those...

CHARLES: What was that? Wappalyzer?

PATRICK: Wappalyzer. It used to be open source; it has since been forked. But it's very similar to BuiltWith. It basically runs a bunch of checks on the pages to extract information about the technologies that are used on the page. And so it powers a lot of the Chrome User Experience Report technology-based reports. Yeah.

DAN: I've actually contributed to that one in the past. It basically looks at which files the website loads, even the actual names of the files. It looks at stuff like the various meta tags, the generator tag and stuff like that, and tries to figure out which technologies a web page actually uses. And I actually use all this data. I gave a talk at several conferences where I compared the performance... well, not exactly compared the performance, more compared the likelihood of building a fast website using various frameworks, based on existing data. So yeah, it's really very useful information. But I was mostly looking at the CrUX data, not the WebPageTest data, because I was mostly interested in the field data and the actual user experiences. The segmentation, though, was done based on Wappalyzer. And one of the things that I was looking at is whether I could see correlation, for example, with the amount of JavaScript being downloaded, and that kind of information, as I recall, does come from WebPageTest.

PATRICK: Yeah, all of the details about how the pages are built and what's on them come from WebPageTest, and the real-user field performance data is what comes from CrUX. And so both being together in the same data set makes it really easy to do that kind of join.

CHARLES: Right. Now I'm just curious, who performs best? Is it Qwik or Solid or React or somebody else?

DAN: Well, it depends. Based on the last time that I looked... no, look, the thing is this: first of all, it's important to note that correlation does not necessarily mean causation.

CHARLES: Right.

DAN: But there are a couple of points to remember. First of all, the number of websites built with the different technologies varies a whole lot. React has as many origins, or websites, as all the other frameworks put together. And on the other hand, in the top 10 million websites, you've got all of something like 50 websites built with Qwik. So it's kind of difficult and problematic to compare hundreds of thousands of React websites to a handful of Qwik websites. Also, with Qwik, you assume that people who are using Qwik, or on the bleeding edge, are probably more performance-minded than the average company using React. So it's kind of difficult to compare. But the situation is, the last time I checked, Qwik has pretty good results for those handful of websites; they perform really well. React, it's not so good. I think it's somewhere between 30% and 40% of React websites that actually have good Core Web Vitals. And yeah, it's kind of unfortunate, but it's the reality. Look, there are also a lot of React websites that don't really necessarily care so much about performance. I mean, if you're building some sort of a dashboard, you may not actually care about performance the way you would if you were building, let's say, an e-commerce website. So your mileage will vary based on your needs. But that's not what we're here to talk about. Well, it kind of is, because it all ties into performance. But we're here to talk mainly about something that you recently spoke about at the performance.now() conference in Amsterdam, right?

PATRICK: Yeah, I mean, that's the big one, the latest hot mess: the compression dictionary transport stuff.

CHARLES: Okay, my brain just got its eyes crossed.

PATRICK: Yeah, it takes a little bit to wrap your head around, but it's sort of cracking a nut we've been trying to solve for 10, 15 years, give or take. I'll say back in the day, when privacy and security weren't a problem...

DAN: When was that? 

PATRICK: Okay, in the era before the Spectre and CRIME and BREACH attacks, I'll say, when side-channel attacks weren't a problem, we used to have effectively delta compression at the HTTP level for doing HTML with custom dictionaries: SDCH. That got killed because it opened up HTTPS connections to side-channel attacks. And so over the years, we've been trying to find ways to bring it back, unsuccessfully, hopefully until now. And as part of that evolution, HTTP/2 launched, and we did the whole "let's unbundle all of the things" so we can do updates of just one module instead of a whole web bundle, for example, when one import gets updated. That ended up having two critical problems with performance. The first one was compression: compressing a thousand separate files results in much lower compression rates than one large file with everything bundled together. And we think we've solved that part of the problem. The other part was, it turns out, there's an awful lot of overhead in the browser when you request a thousand different things versus when you request one thing. There are a lot of IPC checks; even just having to check the disk cache for a thousand things takes a significant amount of time. And so...

DAN: If I can pause you for just a second, because I think we ran through a lot of stuff really, really quickly, and I think it might be worthwhile, at least from my perspective, to back up a little bit. Because some people might be asking, why did HTTP/2 even change stuff? What was it about HTTP/2 versus HTTP/1.1 that even made the difference? I think even that's worth exploring a little bit before we dive into all the technicalities that you've just been talking about. So what's the big change from HTTP/1.1, which was, by the way, around for the majority of the existence of the web? I think we switched from HTTP/1.0 to HTTP/1.1 within months or something like that of the web, and then we got stuck with it for something like 20 years, and only then did we get HTTP/2. So what was the big change in that transition that really impacted the way that you download files, and in particular JavaScript files?

PATRICK: Yeah. So I guess the big win for HTTP/2 was multiplexing. With HTTP/1, you could effectively only request one resource at a time on a given connection: wait till you get it, then make another request for the next resource, wait for it, etc. You could in theory do pipelining, where requests would still come back in the order they were requested, but the web in general is kind of broken with middleboxes, and so pipelining never ended up working in the wild. So browsers worked around some of the round-trip overhead of waiting for one response before being able to even send the next request by opening a bunch of connections in parallel. Browsers generally would open up six connections to each origin so they'd get some level of parallel activity and not waste round trips. Because otherwise, you request one JavaScript file and wait, and each time you have to do that, you waste a round trip with the connection going idle and everything else before you get the next one.

CHARLES: Right. So it was the same for images and CSS files too, right?

PATRICK: Yeah, JavaScript is sort of the real painful one, because, at least head JavaScript, is all render-blocking, and so you can't do anything until you get each and all of them done. And so with HTTP/2 and multiplexing, you can send all of the requests. HTTP/2 had its own issues with prioritization, but it had a priority scheme where the client could tell the origin the order, or the rough order, that it wanted the responses back, and then the origin would do its best to try and deliver those resources in the order they were requested. But if it doesn't have data for a given one, it would just pick the next-lower-priority one. So it would always make sure that the pipe was full, and it could do out-of-order responses and pipelining and all of that.

CHARLES: Okay.

PATRICK: The theory was, if you have a thousand JS files on your page or whatever, you could request them all and they would just stream in as one big blob, effectively, with no extra overhead.

DAN: And the advantage here being that you theoretically don't need to bundle anymore. Theoretically, I mean.

PATRICK: Yeah. I'm assuming you can see why this looked like a good idea. Yeah.

CHARLES: Let me tell you how much I love those build systems.

DAN: Yeah, that's the thing. Nobody loves the build systems. Everybody loves, or usually prefers, to work with relatively small files, like a file per component or stuff like that. So in the original source code, you've got like a million separate JavaScript files, but you need to go through something like a webpack or an esbuild or a Rollup or a Vite in order to effectively transform all these small files into one big, huge file that gets downloaded all at once, or, let's say in the HTTP/1.1 days, six files or fewer if you wanted any form of parallelization in the download, right? And now with HTTP/2, people were saying, hey, with multiplexing we can download a million files over a single TCP connection, why not do it? Why not just use the files in their original format, especially now that we've also got the JavaScript import statements, so they can import each other, and just work with the original JavaScript structure like Brendan Eich originally intended, or whatever. And it turns out that it didn't quite work.

PATRICK: Well, I guess technically it works. It's just not fast.

DAN: Yeah, it's kind of slow.

Raygun helps thousands of customer-centric software teams detect, diagnose, and resolve performance issues faster. Raygun's powerful error and performance monitoring tools make it easy to get diagnostics you can act on for your web and mobile apps. When there's an error, Raygun shows you exactly what's going on, who's being impacted, and how to fix the root cause, down to the exact line of code. See how your users are experiencing your website and app in real time, and ship better code faster, knowing that Raygun will alert you to any new issues or regressions. Never miss a runaway error. Make sure you are quickly notified of the errors, crashes, and front-end performance issues that matter the most to you and your team. Visit raygun.com to learn more and start your 14-day free trial. That's raygun.com.

DAN: So one problem, obviously, that you already mentioned is compression: the fact that tiny files don't compress as well as larger files. Why is that, by the way?

PATRICK: Mostly just the context and the window. The types of things that you see in one JS file tend to be repeated in another JS file. And so when you have them all together, the compression can back-reference the things it saw in the other files and get much more effective compression.

DAN: Yeah, because lossless compression is basically built on identifying repeated patterns, encoding them, and then just sending the code instead of the entire pattern. So the more patterns you can identify, the more compression you can achieve, and in order to identify more patterns, you actually need bigger files.
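A minimal sketch of that effect using Node's built-in zlib (the module contents below are fabricated placeholders): compressing the concatenation of several similar files typically yields a smaller total than compressing each file separately, because shared patterns only have to be encoded once.

```js
// compare-compression.mjs -- illustrative sketch; the "modules" are fabricated
import { gzipSync } from "node:zlib";

// Three small files with the kind of repetition real JS modules share
const modules = [
  'export function renderButton(p) { return `<button class="btn">${p.label}</button>`; }',
  'export function renderInput(p) { return `<input class="input" value="${p.value}">`; }',
  'export function renderLabel(p) { return `<label class="label">${p.text}</label>`; }',
];

// Total bytes when each file is compressed on its own
const separate = modules.reduce((sum, m) => sum + gzipSync(m).length, 0);
// Bytes when the same content is compressed as one "bundle"
const bundled = gzipSync(modules.join("\n")).length;

console.log({ separate, bundled }); // bundled is typically noticeably smaller
```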

CHARLES: More stuff. Yeah.

DAN: More stuff. Exactly. The other thing I guess would be the waterfalls, right? The fact that if you've got one thing requiring the next thing requiring the next thing, rather than just downloading it all at once. 

PATRICK: Yeah, and I guess that's the import maps thing, or preload. As far as JavaScript modules and imports go, natively, by themselves, yes, it's a problem that you don't know that a.js requires b.js until you've already loaded a.js and seen the import statements. In theory, that problem becomes a non-issue if you either preload b.js or you have an import map in the markup that has all of the imports that you're going to need.
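For listeners who haven't seen them, here is a minimal sketch of the two mechanisms Patrick mentions (file names and paths are made up): an import map that resolves bare specifiers up front, and a modulepreload hint so a dependency can be fetched before the module that imports it has even been parsed.

```html
<!-- app.js imports "b", which would otherwise only be discovered
     after app.js has downloaded and been parsed -->
<script type="importmap">
  { "imports": { "b": "/modules/b.v2.js" } }
</script>
<!-- Hint the browser to start fetching the dependency immediately -->
<link rel="modulepreload" href="/modules/b.v2.js">
<script type="module" src="/app.js"></script>
```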

CHARLES: And you invoked his name, so I'm just gonna say it. This is the method that David Heinemeier Hansson, DHH, prefers, and it's built into Rails.

PATRICK: Yeah, and I mean, so that solves the discoverability problem. It doesn't solve the overhead problem.

CHARLES:  Okay. 

PATRICK: And that's both the compression overhead and the per-request overhead. And it's a balancing game. When you're talking dozens of files, the overhead's maybe not that big of an issue. When you're talking hundreds of files, you're talking the change from one second to load them all to now taking three seconds. So it starts to become significant.

DAN: And one more point. We mentioned one benefit of not bundling, which is the fact that you're staying closer to the original structure, so there's less complexity in the build process. But you kind of mentioned another one before, Patrick, and in case our listeners missed it: the fact that you reduce the amount of cache invalidations and avoid the need to download the same stuff over and over again, right?

PATRICK: Yeah, I mean, that was part of the hope for unbundling: if you update an import or a third-party dependency or something like that, you could deliver just that update instead of having to redeliver the entire bundle for a one-line change. Right? Especially if you're doing releases multiple times a day, having to send the megabytes or whatever of JavaScript for a one-line change is kind of painful.

DAN: By the way, when I was at Wix, we used to do multiple releases of the Wix platform every day. If you look at the Wix platform in its entirety, as I recall, they were talking about a change every two minutes.

CHARLES: Oh, wow.

DAN: A deployed change every two minutes. Now, obviously, that kind of depends on which parts of the platform you're actually using and whatnot. But just the basic Wix software that every Wix website is using... I don't know what's going on there now, I'm not there anymore, but it used to be updated like once or twice a day, every day.

CHARLES: Yeah, Geo in the comments on YouTube also mentions that the bundlers in a lot of cases would do things like tree shaking, and remove parts of the code you don't need, and shrink your overall size that way too. It might speed things up in that way.

DAN: Yeah, but if you don't bundle, then if a file is not actually ever needed, it's also not actually downloaded.

DAN: Right. But yeah, tree shaking can also get you an even finer-grained size reduction. Like, you might be removing even a single unused function from the code. So yeah, bundlers can do a lot of magic.

CHARLES: Yeah, it sounds like that's what Patrick was saying, in a sense: depending on what your problem is, import maps might do you a whole lot of good, or the bundler might do you a whole lot of good, depending on a lot of these variables: how often you're deploying, or how big the change is, or where the change is, or whether or not you want to deal with source maps, or any of the other things that come into play. And so one may work out great for you, and another one may work out great for somebody else. And there may be a combo or some magic formula that you kind of work out that says, okay, I bundle these things together because they tend to not change so much, and so I can get away with having one last round trip or four last round trips, but then I import-map a lot of the other stuff because it's small, and if it changes, then it...

DAN: And it gets to be a maintenance nightmare, because code changes, code moves around, and if you don't watch out, then your maps can get out of date, out of sync, and you'll potentially end up doing more harm than good. But now, going back to the whole compression issue. We spoke about the fact that one of the big problems with unbundling into really small files was that you kind of lost out on the compression of algorithms like gzip or Brotli, and ended up downloading a lot more data over the wire than you otherwise would have, had you bundled. I understand that that is what you were looking to solve with this proposal, right?

PATRICK: Sort of. I think it's probably better to say that this proposal solves being able to do the delta updates while keeping the compression benefits of the bundle. At its core, compression dictionary transport lets you use a previous version of a file, or any file on the user's machine that has advertised itself as being usable as a dictionary, for a future request. The common case for that is, if you're bundling your JS, for example, and we'll call it app.js, and you hash it and have versions in the URLs or whatever, you can say version one of app.js can be used as a dictionary for any future version of app.js matching this path spec. And so when the browser goes to request version two of app.js, it can tell the origin: by the way, I have version one of app.js in my cache already, just send me the delta, or send me the dictionary-compressed version of the file. The origin can then send down just the delta-compressed patch, if you would. So it basically gives us patch loading for resources on the web.
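A hedged sketch of the header exchange being described (the exact header names and content-encoding labels were still being bike-shedded during the origin trial; the ones below follow the IETF draft, and the URLs and hash are made up):

```http
# Response for /app.v1.js offers itself as a dictionary for future versions
Use-As-Dictionary: match="/app.v*.js"

# Later, the browser requests v2 and advertises the cached dictionary's hash
GET /app.v2.js
Accept-Encoding: gzip, br, zstd, dcb, dcz
Available-Dictionary: :pZGm1Av0IEBKARczz7exkNYsZb8LzaMrV7J32a2fFG4=:

# The origin responds with a Brotli delta computed against the v1 dictionary
Content-Encoding: dcb
Vary: accept-encoding, available-dictionary
```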

CHARLES: That's really slick.

DAN: And does the bundler need to be cognizant of that?

PATRICK: I mean, the bundler doesn't necessarily need to be cognizant of it. There are a couple of different ways to do it, but on the bundled side of things, it's probably best to have something like a post-build step that takes your bundled assets for one release and the bundled assets from X number of previous releases, and generates artifacts that are dictionary-compressed versions of the new bundle, compressed using the previous versions as the dictionary. So you get the delta artifacts for each one. And at serve time you can pick: hey, the client said it has version one, the hash of version one, as a dictionary; let me send the artifact that is the delta-compressed version against version one instead of the full resource. You don't have to do anything dynamically; at build time, you can generate the artifacts. Beyond that case, there's also an HTML dynamic use case as well.
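A hedged sketch of such a post-build step (file names are made up; it assumes a brotli CLI built with the -D custom-dictionary option and a recent zstd, and note that the actual dcb/dcz wire formats also prepend a small header with the dictionary hash, which dedicated tooling handles):

```sh
# Produce delta artifacts of the new bundle against the previous release,
# using the old bundle as a raw compression dictionary.
brotli -q 11 -D dist/app.v1.js dist/app.v2.js -o dist/app.v2.js.v1.dcb
zstd -19 -D dist/app.v1.js dist/app.v2.js -o dist/app.v2.js.v1.dcz
```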

DAN: So the web server needs to be kind of cognizant of this whole thing. 

PATRICK: I mean,

DAN: it gets a request, and it needs to serve different things based not on the URL, but on fields in the HTTP request headers.

PATRICK: Yeah. I mean, it's not unlike how you can, in theory, and you do at times, pre-compress the static version of the asset with Brotli and with gzip, and then the web server will just pick the .gz or the .br version of the file and serve that, if you have it and the client has advertised Brotli or gzip capability, rather than compressing on the fly. This is effectively the same thing, but it's looking at an additional header: the advertised dictionary header. But yes, it requires either a CDN edge or app-server-specific logic that knows how to look at the headers, set the content encoding, and pick the right file to send back.
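A minimal Node sketch of that serve-time logic, under the assumptions above (hypothetical file layout; header names follow the draft spec, and real code would also verify that the advertised hash matches the artifact's dictionary):

```js
// serve-delta.mjs -- illustrative sketch only
import { createServer } from "node:http";
import { existsSync, createReadStream } from "node:fs";

createServer((req, res) => {
  const file = `dist${req.url}`; // e.g. dist/app.v2.js; real routing omitted
  const dict = req.headers["available-dictionary"]; // ":<base64 hash>:" or undefined

  // Hypothetical convention: delta artifacts live next to the full file,
  // keyed by a naively sanitized dictionary hash, e.g. dist/app.v2.js.<hash>.dcb
  const delta = dict && `${file}.${dict.replace(/[^A-Za-z0-9]/g, "")}.dcb`;

  if (delta && existsSync(delta)) {
    res.setHeader("Content-Encoding", "dcb"); // dictionary-compressed Brotli
    res.setHeader("Vary", "accept-encoding, available-dictionary");
    createReadStream(delta).pipe(res);
  } else {
    createReadStream(file).pipe(res); // graceful fallback: full resource
  }
}).listen(8080);
```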

DAN: Ah, so this might also be implemented by the CDN rather than by the server?

PATRICK: Yeah, because it can try a version of the URL that has the dictionary in the path. It can append the dictionary value to the end of the file name and try requesting that, and fall back to the main file if it doesn't have it. And then it can just pull it from cache, because it'll obey the Vary headers and everything. Once it's in the CDN cache, it can just serve it directly.

CHARLES: So, basically, I'm gonna back up about two steps. My understanding is that the compression, effectively, like Dan says, finds all the patterns, right? And then it says, this pattern is entry number 2000. And so then, whenever it says, hey, I've got entry 2000 here, when it decompresses, it just puts that pattern back in. And I know I'm oversimplifying a ton of this stuff, but effectively, what you're saying is this allows you to do patch-level stuff, because one of the entries in your dictionary could be that patch as opposed to the pattern. Does it work that way?

PATRICK: Effectively, version one of your JS becomes the patterns, a collection of patterns. And so, say you have a one-line change in version two of the JS file: everything before that change can just be replaced with a token that says, hey, old version one of the JS, from byte X to byte Y, and everything after it can be another range reference. So you've basically compressed it down to a token referencing most of the file, the one-line change, and a token referencing the rest of the file from the previous version.

CHARLES: So can it token across files? Because you've been talking about app version one, but what if that version one is like 12 things I've stuck in my import maps, or gotten in some other way. Can I?

PATRICK: So you can't have multiple dictionaries that you pull from, for example.

CHARLES: Can you have one dictionary that applies to all of your files though? 

PATRICK: Yes. So you can, and we tend to look at that more in the HTML or API use case for dynamic resources, where you can build a dictionary that has the common things. Like, if your HTML generally has a similar head, similar meta tags, similar structure, a similar footer across all of your pages, that can go into a dictionary that is side-loaded. You do link rel equals dictionary to side-load that as a dictionary. And so future requests for HTML pages can say, oh, and by the way, I have this dictionary that already has all of your common template things. In that case, if you're doing compression directly on the origin or in the CDN, it would need to have the dictionary available to do the compression, but you can basically send down deltas of your HTML where all of the common stuff is compressed out. Same thing for API calls: GraphQL and JSON stuff tends to be very verbose in general, with keys and tags that tend to be very common, and they can all be compressed out into an external dictionary. So you end up with just the actual data part, almost a binary transfer of the API, even though it's still JSON as far as what you see.
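A hedged sketch of the side-loading itself (paths are made up; per the draft spec, the dictionary response would carry its own Use-As-Dictionary header declaring which request paths it applies to):

```html
<!-- Side-load a shared dictionary built from the site's common template -->
<link rel="dictionary" href="/dictionaries/site-template.dict">
```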

DAN: This is pretty amazing. I mean, with the exception of media data, for which I think this whole thing is potentially less relevant.

PATRICK: It doesn't care that it's text; it can be binary data, but there's not a lot of repeated media data. The one completely convoluted use case that I came up with is, if you have color profiles in all of your JPEG images, for example, that are big and happen to be the same color profile embedded in all of them, in theory you could have a dictionary that has that color profile and compress it out. The dictionary compression itself doesn't care, text versus binary; it's just byte streams. But the use cases are definitely much more the text files.

DAN: Yeah, so. 

CHARLES: Interesting, too, from the standpoint of... I see this solving the common pieces of the page. So, I work primarily in Ruby on Rails, and I've talked to David about some of these ideas, and one of them's in Turbo. There's one piece of it where, if you hit a link, it sends back all the HTML from the server, but then it does a diff against it, right? And it finds all those common pieces and says, this is the same, I'm not gonna reload it. And so it seems like this is kind of a backward way of solving the same problem, where now it's, hey look, this is the same, I'm telling you this because it's in the dictionary, so don't load it.

PATRICK: Yeah, don't transfer it over the network. And that's probably one key thing to remember: it does still parse, eval, and execute the full file. So you're not magically only evaluating one K worth of JavaScript. If you've got a 10-megabyte JavaScript file, it's still parsing and executing all 10 megabytes of it. But at least we've solved the delivery side of things.

CHARLES: Yeah. We're sending a skinny set of packets instead of a fat set of packets.

DAN: And when do we expect to see this in a server and browser near you?

PATRICK: It's already in Chrome. It's in Chrome 119 as an origin trial. So it's in stable today.

CHARLES: Cool.

PATRICK: So you can go ahead and play with it. There are companies experimenting with it. I will say

CHARLES: It has a graceful fallback, right? So if I'm on Firefox, no problem?

PATRICK: No, it's completely progressively enhanced. I mean, if you don't advertise the dictionaries, the server doesn't send anything back. If you do advertise the dictionaries and the server doesn't have either the dictionary or an asset compressed with that dictionary, it just serves the original response. So it's completely transparent, completely progressive.

CHARLES: That's good. 

PATRICK: And so, yeah, I mean, none of the servers or CDNs, for example, will do it all for you yet. That's still part of what the origin trial is for: to see what the pain points are, if there are any, for the CDNs and servers. Most of it's being done either in application servers or just at serve time, picking the right file to serve that was already pre-built using the command line tools. That said, I do have, I'd say it's probably 50% complete right now, a WASM implementation of Zstandard that does dictionary compression, to run on a Cloudflare Worker. So we could do both the dynamic and static use cases on an edge worker, where you don't have to do anything at the origin or in the app servers. I mean, it feels completely wrong to be doing compression at the edge in WASM, but hey.

CHARLES: It's something I don't think about.

DAN: Well, yeah, I mean what? You know, it's pretty awesome from my perspective. Why do you feel like it's wrong?  

PATRICK: Because you're running compression in, effectively, a JS VM instead of natively on the hardware.

DAN: Yeah, but it's in WASM, so... Well, and it's on somebody else's hardware.

CHARLES: That's what I was gonna say. It moves it out of the realm of, oh, this is right in the mix of everything else that I'm doing, and it puts it out there, so it's like, okay. This is another way of doing it. It's another system. It's another, right? And so it, yeah, it's a different concern now from my writing my code and whether or not it compresses and builds. 

PATRICK: Yep. Although I will say, as far as how it builds and bundles, there will probably be some opportunity for things like webpack to have more, I'd say, consistent or predictable bundles from build to build. Because if you're using a previous version as a dictionary, shaking the tree the same way, keeping function names the same from release to release, or even including the same modules from release to release, will give you better compression. So it works remarkably well with no changes needed, but it can be better as well.

DAN: We live in interesting times for bundlers. First of all, bundlers are not really just bundlers anymore; they're effectively transpilers slash compilers these days. I mean, if you look at stuff that's happening with React Server Components, or in Qwik, which we kind of mentioned before, the bundlers have really sophisticated tasks of deciding what stays on the server, what goes to the client, slicing and dicing the code; tree shaking was mentioned. It's becoming really difficult to build a bundler that does everything that the modern bundler is expected to do, which I guess is why Vite is so popular: you get this kind of universal bundler that does everything for you, and that makes life easier for all the framework makers and whatnot. But still, here's another concern for the bundler makers: build the dictionaries.

PATRICK: Or build the bundles in such a way that dictionaries will work well with them.

CHARLES: Right. 

DAN: Yeah.

CHARLES: So Patrick, what's the downside? Is there a downside? 

PATRICK: I sort of mentioned it already: it's still effectively delivering, and becoming, the whole file. For CDNs, one of the things to worry about is the Vary handling. You'll now have more copies of responses in your cache, so you may end up blowing out caches and things like that if you're not careful. The dictionary response has its own TTL, time to live, in its response headers, and you can tell the browser how old a dictionary is allowed to be when it's used for compression, to help mitigate that. But if you say, keep all dictionaries for a year, and you do releases three times a day or whatever, all of a sudden you've got a thousand possible variations in the wild of headers coming in that are varied on in your cache.
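A hedged illustration of the cache-key concern (header names per the draft spec): every distinct advertised dictionary hash becomes another cached variant of the same URL.

```http
# A delta-compressed response varies on the advertised dictionary, so a CDN
# ends up keeping one copy per (encoding, dictionary hash) combination.
Content-Encoding: dcb
Vary: accept-encoding, available-dictionary
Cache-Control: max-age=31536000
```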

DAN: A thousand is an understatement. I mean, it's like every version compared to every previous version. 

PATRICK: Well, for the current release that you're on at this point in time, it could be delta compressed against a thousand previous versions. 

DAN: Oh, you're saying let's clear all the previous versions except... 

PATRICK: Yeah. 

CHARLES: Yeah, here's the dictionary to get you from where you are to current, not from where you are to any other version. 

PATRICK: And so the risk there is, if you're not careful about your windows compared to your release updates, your caches will effectively become...

DAN: exponentially large. 

CHARLES: Linearly large, I think. 

PATRICK: Yeah. They'll usually have limits on how many variants the CDN, for example, will cache. And so you'll just have stuff that doesn't get cached, and you'll hit origin for those cases. But if you balance the time you allow for the dictionaries against your release schedules, you can plan that out fairly well. Obviously, it requires more work, right? It's not at the point where you can just turn a flag on your CDN or on your edge server and magically everything gets smaller. You need to either generate the dictionaries, if you're doing HTML or API stuff, or build the delta-compression artifacts, and do the logic for picking the right one to serve. So there's a little more work involved, but I think the value that you get out at the end is well worth it.

DAN: And I imagine that you've probably run some tests. So if somebody asked, for an average website, what are the expected savings going to look like? What would you say?

PATRICK: So, average is a tough thing, but I will say, on the HTML side of things: the GitHub repo that has the dictionary transport explainer also has some examples, but I just pulled a bunch of e-commerce and news sites and that kind of thing, and for the HTML side of things, when you use a dictionary created for those, you tend to get results that are 40 to 60% smaller than if you didn't use a dictionary. So it's a very significant saving. And that's compared against the best Brotli or the best Zstandard.

DAN: Let's say... so basically we're talking about around a 50% size reduction compared to what's downloaded now. But that 50% is specifically for the textual files of the website.

PATRICK: That's specifically for the HTML case, where you generate a dictionary based on the HTML. 

DAN: Ah, okay, I understand. So that's for the page itself.

PATRICK: For JavaScript, on a version-to-version upgrade, it depends on how much you change. But some examples: the YouTube desktop player, which is 10 megabytes of JavaScript uncompressed, is 90 to 95% smaller when you do week-over-week delta updates than with the best Brotli. For WASM, we've seen 60 to 80% smaller, and that's doing a delta compression that knows nothing about WASM. Vercel's app bundle, from one week to the next, was 98% smaller, I think, when we took a look at it. So it depends on how much you change. If you rewrite all of your code, or completely change how things are bundled, then you're not going to get nearly those savings. But it can be very significant, especially for those that release frequently with smaller changes.

DAN: Yeah. If you release every day, and every day you rewrite all of your code, then you've got a problem.

PATRICK: Right. But yeah, if you're in something where you have a tempo of releasing every day, you should expect to be in the 90-plus percent smaller range, even compared to Brotli.

CHARLES: Right. And if you have people coming back every few days, they're just going to pick up those tiny pieces that changed. Right. If somebody comes back after a year, then yeah, they may have to pull the majority of the file, but.

PATRICK: But that's what they're doing today. 

CHARLES: Yeah. So is there a tool that does this for me? 

PATRICK: Yeah.

CHARLES: I'm lazy. I don't want to make this myself. 

PATRICK: So, doing it all for you? No, at least not yet. That's probably going to be something like a value-add that the CDNs provide. There are command line tools for doing the delta compression, for example, if you want to create the artifacts after your build; Brotli and Zstandard, the standard command line tools, will do that. If you want to generate a dictionary off of a bunch of HTML, I have a website that will do it for you: use-as-dictionary.com, which just happens to be the header name that you use. Please don't abuse it too badly; it's a machine running in my basement. Or, the Brotli repo has a research tool called dictionary_generator that you can run against a whole bunch of files, and it'll pull out the common bits automatically and generate a side dictionary for you. As far as click-a-box and it does it magically, that's coming, would be my best guess, but not yet.
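For the dictionary-generation step, one hedged example using tooling that exists today (zstd's --train mode builds a shared dictionary from sample files; Patrick's dictionary_generator in the Brotli repo plays a similar role, and all file names here are made up):

```sh
# Build a ~64 KB shared dictionary from a sample of rendered HTML pages,
# then compress a new page against it.
zstd --train samples/*.html --maxdict=65536 -o site-template.dict
zstd -19 -D site-template.dict page.html -o page.html.zst
```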

DAN: So coming means that you expect the next generation of bundlers and leading frameworks, meta-frameworks, to kind of do it out of the box?

PATRICK: So, I don't know that bundlers will ever do it. I expect CDNs will probably be the first ones, and what you'll probably be able to do is tell it the path where your application code is, and it will automatically add the necessary headers and do the delta compression from one version to the next for you. And you'll probably have to give it a dictionary, or at least a collection of URLs to generate a dictionary from, for the dynamic case. And then it can handle all of the header negotiation and everything else for you. The worker that I have that will do everything in WASM will probably be able to do all of that as well, if you don't want to wait for a CDN to build the feature. But that's probably as close to automatic as we'll get.

DAN: So

CHARLES: I really like the idea of running code that came out of your basement. 

PATRICK: Well, I mean, that's where WebPageTest started. It's like...

DAN: It wasn't the same basement.

PATRICK: Uh, we've moved since.

DAN: So essentially you're saying that, at least to begin with, it's going to be work for the DevOps people in organizations rather than for the developers in those organizations.

PATRICK: Probably, because it's all serving-time stuff. So the developers will just go ahead and create their bundles as they did, and then when it gets pushed to production, either the artifacts will get created or someone will take care of configuring something else.

DAN: So the dev ops people will do all the work. The developers will get all the credit and the marketing people will blow everybody out of the water by adding a 12 megabyte GIF into the website.

CHARLES: In other words, things never change.

PATRICK: The developers are doing the DevOps, right? That's the "dev" part of DevOps.

DAN: Yeah. Somehow, that's...

CHARLES: I used to be young and naive too.

DAN: Oh, so, currently, you said it's in an origin trial in Chrome. What's it looking like with the other web browsers?

PATRICK: So, all of the browsers are supportive of the spec. Nobody has found any privacy or security concerns with the latest iteration, which makes me very confident that this is something that will ship in some form. We're currently bike-shedding on what all the headers and the values will be, as spec standards tend to do.

CHARLES: I love that term. 

PATRICK: Yeah. It's going through the HTTP working group at the IETF right now, so we'll have it as an RFC, plus part of the fetch spec, or sorry, the HTML spec, on the WHATWG side of things. But yeah, I mean, we filed positions with all of the browsers, and they've all been very supportive. My guess is we'll need to shake out the actual user experience in Chrome, get things sorted out, and get people using it before there's adoption across all of the browsers. But there are no objections to it, which is a thing in and of itself.

DAN: Yeah, from my experience at the web performance working group, the W3C Web Performance Working Group, getting all the browsers to agree can be a challenge.

CHARLES: All right. Well, anything else that we should let people know about before we start wrapping up? 

PATRICK: I think that's the big one on compression dictionaries. Definitely try them out, though. The whole purpose of the origin trial is to let us know what works and doesn't work in your environment. And if you just wait for it to get out of the origin trial, and then complain that it doesn't work in your environment, it's a little too late at that point. So it is in production. You can use it with stable users and get all of the benefits today and hopefully help steer any changes that need to happen to it. 

CHARLES: Awesome. All right, well, we are getting toward the end of the scheduled time. I know we wanted to talk a little bit more about performance and Core Web Vitals and some of that stuff. If we can do it in less than five minutes, we can go for it. Otherwise, I think we should just go to picks.

PATRICK: Easy wins, sort of cheat codes, for Core Web Vitals. Fetchpriority, a spec from last year or so, is a cheat code for largest contentful paint: if you put fetchpriority equals high on your hero images, they will load sooner, and your LCP will go from poor to passing, for the most part. And if you support HTTP/3: Chrome 118 rolled out support for HTTPS DNS records, which lets you tell Chrome that you support HTTP/3 at DNS time instead of at connection setup time, and it saves you effectively one round trip on your first visit. So that can be another cheat code that gets a quick round trip off of all of your timings.
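The fetchpriority tip is a one-line change (the image path is a placeholder):

```html
<!-- Tell the browser the hero image is the likely LCP element -->
<img src="/hero.jpg" fetchpriority="high" alt="Hero">
```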

CHARLES: How do I do that? How do I tell it that I'm HTTP/3 happy?

PATRICK: So, there's an HTTPS DNS record, a special DNS record type, where you can advertise HTTP/3 support. Cloudflare does it for you automatically if you're using Cloudflare and have them doing your DNS. Otherwise, it's just a record type that you add to your DNS zone.
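A hedged example of what that record can look like in a DNS zone file (the domain is a placeholder; the alpn parameter advertises the protocols the origin speaks, so the browser can start with HTTP/3 on the very first connection):

```txt
; Advertise HTTP/3 (with HTTP/2 fallback) at DNS resolution time
example.com. 3600 IN HTTPS 1 . alpn="h3,h2"
```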

DAN: And it's worth mentioning that all modern browsers support HTTP/3. So there's really no reason not to use a CDN vendor and a server configuration that support HTTP/3. It's a quick win even without that DNS record, and if you can throw that DNS record into the mix, then you get an even better time to first byte, which usually speeds everything up down the line.

PATRICK: Yeah. Safari has supported the record for going on two or three years now. And so there's...

CHARLES: Yeah.

PATRICK: You'll get benefit outside of Chrome.

CHARLES: I found a page on chromestatus.com that talks in brief about the feature, but it has a link to the spec and a few other things on it. So I'll put that in the notes here or the comments on Facebook and

PATRICK: And Cloudflare, you know, the blogging company that also happens to run a CDN, has a really good blog post on the topic.

CHARLES: Yeah, I think I've got that one here too. So I'll share it in the comments as well on Facebook and YouTube, and then we'll get it in the show notes as well if you're on another platform. So, cool. All right. Well, let's go ahead and do some picks.

Hey, this is Charles Max Wood. I just wanted to talk really briefly about the Top End Devs membership and let you know what we've got coming up this month. So, in February, we have a whole bunch of workshops that we're providing to members. You can go sign up at topendevs.com slash sign-up. If you do, you're going to get access to our book club. We're reading Docker Deep Dive, and we're going to be going into Docker and how to use it and things like that. We also have workshops on the following topics, and I'm just going to dive in and talk about what they are real quick. First, it's how to negotiate a raise. I've talked to a lot of people that are not necessarily keen on leaving their job, but at the same time, they also want to make more money. And so we're going to talk about the different ways that you can approach talking to your boss or HR or whoever about getting that raise that you want and having it support the lifestyle you want. That one's going to be on February 7th. February 9th, we're going to have a career freedom mastermind. Basically, you show up, you talk about what's holding you back, what you dream about doing in your career, all of that kind of stuff, and then we're going to actually brainstorm together, you and whoever else is there and I, on how you can get ahead. The next week, on the 14th, we're going to talk about how to grow from junior developer to senior developer: the kinds of things you need to be doing, how to do them, that kind of a thing. On the 16th, we're going to do a Visual Studio, or VS Code, tips and tricks. On the 21st, we're going to talk about how to build a software course. And on the 23rd, we're going to talk about how to go freelance. And then, finally, on February 28th, we're going to talk about how to set up a YouTube channel. So those are the meetups that we're going to have, along with the book club, and I hope to see you there. That's going to be at topendevs.com slash sign-up.

CHARLES: Dan, do you want to start us off?

DAN: Okay, it's not exactly a pick, but as everybody probably knows, there's a war going on in the Middle East yet again, this time between Israel and Hamas in Gaza. I've kind of been trying to be active about it on social networks. Obviously, as an Israeli, I cannot claim to be objective, and I won't even try to. But I will say that I try to be fact-based, which means that everything that I say, as far as I can help it, is based on actual factual data, even when that data isn't necessarily pleasing for my side, as it were. Now, where do I post stuff? Well, obviously on X. I got the badge thing so I can actually post things that are longer than just the, what is it, 280 characters? Whatever. So now I can post longer. I don't write essays within X; I don't think that's the proper platform for it. But I do tweet stuff. So that's one place where you can see my perspective on what's going on. The other place is Quora. I kind of got into Quora several years back; for a while I was one of their top writers. I don't know if you're familiar with Quora. I like to say that on Stack Overflow you ask how you do something, and on Quora you ask why you do something, or something along these lines. And obviously, it's not just about tech. So I got on Quora a while back, I even became a top writer for a while, and then you kind of drift to other things. But now I'm kind of back there, and I write answers about things currently mostly related to the current conflict. So again, if you're interested in my perspective on those things, search for Dan Shappir, either on X, it's Shappir with a double P, or, you know, obviously the listeners can see my name as the panelist on the show, or alternatively, they can find me on Quora. And again, in the past, I wrote a lot of stuff about the history of technology and stuff like that, like why C# happened, why JavaScript happened, and so on. But in the recent weeks, I've been primarily posting about the ongoing conflict, if anybody's interested in that. The other thing, and I have to say that, like I said, I don't claim objectivity: I've actually gone several times to this kind of vigil slash rally for all the Israelis who are kidnapped in Gaza. I don't know how much you know, but there are now over 240 Israelis, most of them civilians, kidnapped inside Gaza. The oldest one is 85; the youngest one is 10 months old. So think about a 10-month-old baby held hostage in a war zone. And yeah, it's pretty bad. I also did want to mention, like I do almost every episode, that the ongoing war in Ukraine is still very much ongoing, and it's a literal meat grinder there. The Russians are doing some pretty horrific things, both to the Ukrainians and to their own soldiers. So the war between Israel and Gaza is not the only war. And yeah, those are my unfortunate picks for today.

CHARLES: All right. Yeah. It's always tough to follow that, because I think we get some of the information anywhere from a couple of days to a week later. Anyway, I'm just hoping that stuff can get figured out. But, you know, I've said it before: if Israel's getting attacked, they have to respond, and respond in a way that makes it not happen again.

DAN: So yeah, it's really problematic. I'll give an example. They recently released footage that soldiers and drones recorded in Gaza that literally shows a Hamas tunnel exit in the courtyard of a hospital in Gaza. Now, that effectively makes that hospital a legitimate military target, which means that theoretically Israel could shell the hospital, but obviously that would kill all the people in that hospital. Is that something you do? On the other hand, if they're actually literally firing at you from there, do you not fire back? It's a really bad situation. I don't even know what to say about that.

CHARLES: Yeah. I mean, you're basically left with two options that both have pretty drastic outcomes. So I don't know, and I mean that literally: I don't know what the answer is supposed to be on that, because either way has consequences I'm not entirely comfortable signing up for. So anyway, I'm going to get a little more lighthearted here.

DAN: Go for it. Please do. 

CHARLES: Last weekend, Saturday in particular, I went down to TempCon. I told you all I was going for the last few weeks. So we've been playing these board games, and one of the games that we taught there is called Living Forest. I think I might have picked it before, when I played it with my friends. What you're doing is you have your own forest and you have a forest spirit, and effectively you're recruiting other forest spirits, which are kind of animals, into your deck. Then you lay out cards from the deck; if you get three solitary animals, you only get one action on your turn. If you stop before you get three, you get two actions. And then you can recruit animals, you can plant trees, you can fight the fire spirit. And if you can't overwhelm the fire spirit at the end of your turn with your water, then you actually have to pick up fire cards, which are effectively solitaries that don't do anything for you, so you're much more likely to have to stop sooner. So you have a real incentive to make sure you put out enough of the fire that you're safe from it at the end of your turn. The way you win is you collect 12 of one of three kinds of tokens. You've got unique trees: when you're buying and planting trees, if you plant enough so that you have 12 unique trees, the game ends. If nobody else gets 12 tokens of any kind, then you win, and if somebody else does get 12 of some kind, then you count up the tokens across all three kinds, and whoever has the most of those wins. And that's effectively it. When you put out the fire, you get fire tokens. The other kind is lotus tokens, and those come from whatever lotus icons you accrue during your turn. But you can also get extra tokens from your forest, and where the fire is there's a track that you move your primary spirit around, and if you pass other players you can steal their tokens. So you could wind up with up to five tokens, lotus tokens or fire tokens or unique trees, that you've pulled from other people. And that's the game. It's relatively simple, but there are a lot of different ways of going about things, because you get bonuses from the trees, and the different animals give you different icons you can use for your actions. So anyway, it's pretty fun. Of the six games that we taught, it's probably my second favorite after First Rat. I really, really enjoyed it, so I'm going to pick Living Forest. Let me look it up on BoardGameGeek. Yeah, so BoardGameGeek has it at a weight of 2.19. I tell people that a 2.0 is kind of an easy casual game that has enough to it to be a fun challenge. A one is super simple, the kind of game I play with my kids. So a two is something an adult who's not deep into board games could pick up without too much trouble. So: a BoardGameGeek weight of 2.19, and the artwork on it is awesome. Just really, really enjoyed that game, so I'm going to pick that. I don't know if I have any other picks this week. I've got some things coming down the pipe. Effectively, I'm doing the full launch of the Top End Devs membership. I'm planning on charging $97 a month for that. We're going to have weekly calls.
We're going to get different people on to talk about various topics. We'll probably have an ask-me-anything: if you have career questions, or questions about technical topics that I, or whoever I bring on to do the ask-me-anything, can answer, we'll answer them. And then we'll just have experts come in and talk about different things every week. It'll also include the book club, and it'll include different video tutorials that are 10 minutes or less every week. I'm planning on doing a series of those on Ruby and Ruby on Rails, and then another one on JavaScript and React to start with. So I'm just getting everything set up. But if you sign up before Black Friday, I'm going to leave the price at $39 a month. So if you sign up within the next few weeks, you can get it for $39 a month; otherwise it's going to go up to $97. And then as we add more value to it, we're probably going to raise the price again after that. So now's a good time to get in. But yeah, that's my pick. Patrick, what are your picks?

PATRICK: Oh, there was homework required?

CHARLES: It can be anything. If there's a TV show you're enjoying, or a movie you liked, or, hey, I was playing with this dev tool and it was awesome. Whatever.

PATRICK: It's probably way too on topic, but I just got back from PerfNow, the performance.now() conference in Amsterdam. It's at this point the main, and really the only, dedicated web performance conference that remains, and they're really good about releasing all of the talks on YouTube. So in the next couple of weeks, that's going to be a really interesting playlist to look at, because they were all really good conversations with some of the top people in the industry. And if you can make it, it's some of the best people to just hang out with and chat with on technical topics.

CHARLES: Nice. Amsterdam's a great city to visit too. All right, well, I'm going to wrap this up then. Thanks for coming, Patrick. 

PATRICK: Awesome. Thanks for having me here.

CHARLES: Yeah. And I see you have your social media handle on there. Are there any other places you want people to find you? 

PATRICK: I mean, I'm on all of the social medias these days. You can find me: just look for Patrick Meenan or Pat Meenan on whichever your favorite one is, and you'll find me. I'm always happy to be on-

DAN: And I highly recommend following Pat if you're at all interested in web performance and what's happening on the bleeding edge of web performance.

CHARLES: All right, well, until next time, folks. Max out!

 

 
