Framework Comparisons, Real User Metrics, and Effective Performance Tools - JSJ 640
In today's episode, the panel dives deep into web performance optimization and the strategies they employ to achieve it. Join Dan, Steve, Charles, and guest Vinicius Dallacqua as they explore robust techniques like code splitting, lazy loading, and server-side solutions to enhance website performance.
Special Guests:
Vinicius Dallacqua
Show Notes
In this episode, you'll hear Vinicius discuss his experiences with different benchmarking frameworks and innovative optimization strategies, and Dan recount how he improved performance for the Prometheus client for Node. They delve into the importance of performance metrics, data analysis, and real user monitoring (RUM) tools. They underscore the need for precise measurements before and after optimizations and share insights on overcoming the challenges posed by third-party integrations.
Hear about practical tools like Partytown and Lighthouse, and how companies like Next Insurance have achieved significant performance gains. The conversation also touches on the critical balance between backend performance, CDNs, and frontend optimizations, alongside recommendations for engaging management to prioritize performance enhancements.
Plus, for a bit of fun, our episode includes some light-hearted "Dad jokes of the week" and book recommendations around TypeScript and AI.
Picks
- Charles - Take 5 | Board Game
- Dan - Total TypeScript
- Steve - Warp: Your terminal, reimagined
- Vinicius - Watch Sweet Tooth | Netflix Official Site
Transcript
Charles Max Wood [00:00:05]:
Hey, folks. Welcome back to another episode of JavaScript Jabber. This week on our panel, we have Dan Shappir.
Dan Shappir [00:00:11]:
Hi. From a very hot Tel Aviv.
Charles Max Wood [00:00:14]:
We also have Steve Edwards.
Steve Edwards [00:00:16]:
Hello. From a very hot Tel Aviv style Portland. Right.
Charles Max Wood [00:00:21]:
It's
Steve Edwards [00:00:22]:
I'm, like, really hot here. Yeah.
Charles Max Wood [00:00:25]:
I'm Charles Max Wood from Top End Devs. Yeah. It's been getting warm here too. I think it's all over the place. Maybe out where our guest is too. We have a special guest this week. It's... I'm not even sure how to say your name. Vin... Vinicius.
Vinicius Dallacqua [00:00:40]:
Vinicius. Yeah.
Charles Max Wood [00:00:41]:
That's good. Vinicius Dallacqua.
Vinicius Dallacqua [00:00:44]:
That's me. Yes. We could call
Steve Edwards [00:00:45]:
him Vinny for short.
Vinicius Dallacqua [00:00:46]:
Yeah, Vinny. I'm pretty used to being called Vinny, to be honest. Oh, big V. Yeah.
Steve Edwards [00:00:51]:
My cousin, my cousin Vinny.
Vinicius Dallacqua [00:00:53]:
Exactly. That's kind of how it started whenever I started speaking to American people. But, yes, Vinicius is here from Stockholm, Sweden, where I wish it was warmer. It's been a very rainy summer, especially rainy and way worse than normal Swedish summers, even though Sweden is not necessarily known for good weather.
Charles Max Wood [00:01:17]:
Yeah. I was gonna say I'm kinda wishing for that here, but I saw a headline a couple of days ago in the newspaper that said that Utah is no longer in a drought. And in fact, all of our reservoirs are over full. So
Steve Edwards [00:01:28]:
That's good.
Charles Max Wood [00:01:30]:
Maybe we don't need the rain. I don't know. Anyway, yeah. We brought you on to talk about performance, and you shared an article with me before these guys got on. I know you've also been chatting with Dan. So I'm just gonna let you guys take the lead as far as where we go, and then I'll chime in with my basic questions, since I am not a performance expert like you guys are.
Vinicius Dallacqua [00:01:54]:
Yeah. Sounds good. You guys,
Steve Edwards [00:01:55]:
you mean those 2? Because I'm not the performance expert either.
Dan Shappir [00:02:01]:
Yes. You know, performance is kind of important. I don't think everybody should or can be an expert, and I guess this is one of the things that we'll be talking about. But it's not something that you can ignore either. Let's put it this way.
Charles Max Wood [00:02:16]:
Right.
Vinicius Dallacqua [00:02:17]:
Yeah. It's one of those topics where the more you know, the more you realize things go deeper, and you can get pretty deep.
Steve Edwards [00:02:29]:
You know what? Yeah. One example I've heard frequently mentioned, and I've thought about it myself: I'll look at Stack Overflow posts where people talk about, okay, what's the most performant way to loop through an array? Or what's the best way to handle large datasets in code, and so on and stuff like that. People will obsess over these little things that will increase performance. Then they've got a 6 meg image file downloading on the same page, and
Vinicius Dallacqua [00:02:56]:
Right. That dwarfs everything anyway. You know? So
Steve Edwards [00:02:58]:
so I think it's safe to say that when you're talking performance, it's gotta be comprehensive and not, you know, just focused on code performance, I think, or bundling of code and that kind of stuff.
Dan Shappir [00:03:11]:
It also needs to serve a purpose at the end of the day, which again is something that I guess we will be talking about. Performance is not an ego trip. It's not about, you know, bragging rights. It's about serving your customers, your users. So anything that you do needs to keep that front and center: does it actually bring value to your users and your customers, or doesn't it? And if it doesn't, then it's kind of pointless.
Vinicius Dallacqua [00:03:47]:
Yeah. Absolutely. And the whole micro benchmarking thing was a very good example. It's very easy to look into, say, things that sound very flamboyant, and you've kinda missed the target where the actual hurt is, which is the user's, you know, experience.
Dan Shappir [00:04:09]:
Also, you know, in the context of JavaScript, it's lies, damn lies, and micro benchmarks. Yes.
Vinicius Dallacqua [00:04:18]:
That's a good one.
Dan Shappir [00:04:19]:
Yeah. Because of the way that modern JavaScript engines and the optimizers they contain work, I've read some, quote unquote, horror stories about people drawing conclusions from micro benchmarks that were effectively not just meaningless, but in fact wholly misleading. Let's put it this way: people wrote loops, but because the loop had no side effects, as it were, the optimizer optimized it into nothing. And then what are you actually even measuring? So, yeah. To put it bluntly, I wish my problems were how fast some loop in JavaScript works. Although occasionally it does actually happen. And I'll finish this part with that: I actually contributed back to the Prometheus client for Node.
Dan Shappir [00:05:29]:
And I don't know if you're familiar with it. It's a system or service for monitoring and alerting and stuff like that. And the optimization was actually about how to loop and build the response string. Because in that particular case, via profiling, I proved that it actually did make a significant difference. So it can matter, but it usually doesn't. Anyway, enough of my chitter chatter, and now over to you, Vinicius.
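For listeners who want to try this at home, here's a minimal micro benchmark sketch in Node (a hypothetical illustration, not Dan's actual prom-client change). Note the checksum side effect: without it, the optimizer Dan mentions could legally delete the loop, and you'd be measuring nothing. As always, profile real code before trusting numbers like these.

```js
// Sketch: comparing two ways to build a large response string in Node,
// with a side effect so the optimizer can't eliminate the work.
const { performance } = require('node:perf_hooks');

const lines = Array.from({ length: 100_000 }, (_, i) => `metric_${i} ${i}`);

function viaConcat() {
  let out = '';
  for (const line of lines) out += line + '\n'; // repeated string append
  return out;
}

function viaJoin() {
  return lines.join('\n') + '\n'; // single join
}

for (const fn of [viaConcat, viaJoin]) {
  const t0 = performance.now();
  let checksum = 0;
  for (let i = 0; i < 50; i++) checksum += fn().length; // use the result
  console.log(fn.name, (performance.now() - t0).toFixed(1), 'ms', checksum);
}
```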
Vinicius Dallacqua [00:06:06]:
Yeah. I mean, the benchmarks have to serve a purpose. Right? If you're micro focusing on things, they might not be painting a good picture of even what you're trying to measure. This actually brings up a very interesting conversation that I had a chance to have at JS Nation in Amsterdam a couple of weeks back with Ryan from SolidJS and Atila. We were talking about how one can build a well structured benchmark to test different frameworks on some aspects. Right? And we started talking about how it would be good to have something that leans on Web Vitals, because it's something we already kinda have standardized. And that's how we can also measure actual impact and, like, break down different thresholds, different kinds of common problems, where you can try to build correlations and build a good delta between different frameworks: what kinds of strategies pay off the most.
Vinicius Dallacqua [00:07:15]:
Right? Because Solid does it one way, React does it another. So you can't necessarily measure them apples to apples, in a way. So you need to try to establish a good baseline of how you can benchmark those kinds of things. They do, in the end, deliver similar results, but they do it differently. So I'm not really trying to benchmark the framework itself, but the different strategies that they use.
Dan Shappir [00:07:38]:
Yeah. It's an interesting conversation. First of all, it's worth mentioning that Ryan is kind of a regular guest here on the podcast. We've had him quite a number of times.
Steve Edwards [00:07:47]:
Mhmm.
Dan Shappir [00:07:48]:
And he's an incredibly smart person. Because like you said, he's the creator of Solid and SolidStart. He's also kind of the CEO of signals; he really popularized that. And he's also one of the people most knowledgeable about frameworks in general. I mean, his hobby is to get his hands on, you know, more or less every framework out there and then do all sorts of comparative testing and analysis and whatnot. So, yes, if anybody is knowledgeable about how to best compare performance and other aspects of frameworks, it's probably Ryan. It's also worth mentioning a tool created by the Qwik people called Mitosis, which you can actually use to compile sort of pseudo code, like React-like pseudo code, into various frameworks. Then you can more easily build the same application using different frameworks and be able to compare them. Because like you were saying, if we want to get away from micro benchmarks and we want to actually compare real applications built in various frameworks, we run right into the problem of the overhead of building and maintaining
Dan Shappir [00:09:10]:
Sophisticated applications in a variety of frameworks, and, you know, who wants to do that. I do want to mention that there's an alternative approach if you're interested in the performance of various frameworks. I actually gave a talk about that at several conferences, including, I think, JS Nation a year before, which is to use RUM data to compare the performance of frameworks. So you're not looking at a specific application. You're looking across all websites built with a particular framework, and then you're not so much asking how fast my application will be with this framework. You're more looking at how likely it is that I'll be able to build a fast website or web app using a particular framework. And you might say
Vinicius Dallacqua [00:10:04]:
Exactly.
Dan Shappir [00:10:05]:
This framework is more likely to produce faster websites or web apps, and this one is less likely. You know, not surprisingly, the one that is least likely is Angular, and the ones that are most likely are Qwik, Solid, Svelte, you know, these. But you really need to be careful about mixing correlation and causation. Like, for example, people who use Qwik are people who are more likely to be concerned about web performance. So are they producing faster websites because they're using Qwik? Or are they producing faster websites because they're people who care about web performance, so they know what to do, and they also happen to be using Qwik for that reason? So you need to be kind of careful with drawing these types of conclusions.
Vinicius Dallacqua [00:11:07]:
That's a very good point, actually, because it's one of the things that you come to realize when you start collecting RUM data. So I helped establish
Charles Max Wood [00:11:18]:
Time out. Time out. Time out. I know that we've defined this in other episodes, but RUM data, what is it?
Vinicius Dallacqua [00:11:25]:
It's real user metrics. So you have 2 different sides of performance monitoring: what is called the lab side and what is called the RUM side. The lab side is just, you know, CI/CD, Lighthouse, and whatever you do to make sure that you don't introduce performance regressions before changes actually go live to users. And on the RUM side, you have different providers, and you can even use, like, GA; Google Analytics even has, like, automatic Web Vitals.
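For reference, a minimal RUM collection sketch using Google's web-vitals library (the /rum endpoint and payload shape are assumptions; wire it to whatever provider or backend you actually use):

```js
// Collect Core Web Vitals in the field and beacon them to a backend.
import { onLCP, onINP, onCLS, onTTFB } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,     // e.g. 'LCP'
    value: metric.value,   // ms (or unitless score for CLS)
    id: metric.id,         // unique per page load
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  (navigator.sendBeacon && navigator.sendBeacon('/rum', body)) ||
    fetch('/rum', { body, method: 'POST', keepalive: true });
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
onTTFB(sendToAnalytics);
```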
Dan Shappir [00:11:54]:
Or, yeah, it's basically the question of: are you testing your performance in a synthetic, lab-style setting, maybe even on your own computer while you're developing, and comparing that way? Or are you collecting performance data from the field, from actual, real user sessions? So, obviously, you can collect data for your own particular website, especially if you have enough traffic coming in. But the interesting thing, and we actually had Rick Viscomi from Google here to talk about it: Google actually collects data from all sessions on Chrome, and they put it into this database called the Chrome User Experience Report, or CrUX for short. And Steve usually makes a joke about the crux of the matter or the crux of the issue or whatever. Yeah. Exactly. And the nice thing about what Google does is that they actually give everybody essentially access to this data.
Dan Shappir [00:13:00]:
And they also attach all sorts of metadata to this data, like, you know, which framework was used to build which website and whatnot. So you can do all sorts of slicing and dicing. You can look at the performance of, you know, your own website, or competing websites, or ecommerce websites in general, in particular geos, or on particular types of devices, or created using particular frameworks, using particular libraries, etcetera. It's interesting data for those of us who like to geek out on performance.
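For those who want to slice CrUX themselves, here's a sketch against the CrUX REST API (assumes you've created a Google Cloud API key; field names follow the public docs, so verify against the current reference):

```js
// Query CrUX field data for an origin.
const API_KEY = 'YOUR_API_KEY'; // assumption: created in Google Cloud console

async function queryCrux(origin) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin, formFactor: 'PHONE' }),
    }
  );
  const { record } = await res.json();
  // p75 per metric, e.g. largest_contentful_paint
  for (const [name, data] of Object.entries(record.metrics)) {
    console.log(name, 'p75:', data.percentiles?.p75);
  }
}

queryCrux('https://example.com');
```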
Vinicius Dallacqua [00:13:36]:
Yeah. That's a really good point to start from. When building websites, things can go wrong in many different ways, even when people use the same framework. And to your point exactly, Dan: when you pick React, well, React is probably by now the most used framework out there. I don't keep up with the trends. But
Dan Shappir [00:14:02]:
Oh, yeah. For sure.
Vinicius Dallacqua [00:14:04]:
Yeah. So it's just because of the sheer volume you have, right, of people writing. Like, there will be all kinds of quality out there.
Dan Shappir [00:14:11]:
It's king of the hill. Case in point, effectively, just to throw the numbers out: there are as many websites and web apps being built in React as all the other frameworks put together.
Vinicius Dallacqua [00:14:25]:
Yeah. I would imagine so. And so in that mix, if you're building some sort of collection tool based on CrUX and you're trying to divide percentiles for each framework, React has so many more samples, right? It would kind of push the data towards the right side, where you're gonna have more, worse metrics overall, just because of the sheer volume of it.
Dan Shappir [00:14:54]:
Yeah. That's one aspect. Another aspect is the fact that there's a long tail of React websites: websites either built, like, long ago or built with a particular focus in mind. So, for example, if you're building a website in React and you're not using server-side rendering or static site generation, if you're only using client-side rendering, which means that you're building the DOM representation of the website on the client side, then by definition, effectively, you're not going to have good Core Web Vitals, which is the way that Google measures loading performance. You might have other aspects of performance that are good, like, I don't know, responsiveness or stuff like that.
Dan Shappir [00:15:46]:
But in terms of loading performance, if you're just using client-side rendering, or CSR, it's not going to be good. And the reality is that a lot of React websites use CSR. Some of them don't care; if they don't need to be indexed, then maybe they don't care. But if you're looking at the metrics that Google is collecting, that's what you see.
Vinicius Dallacqua [00:16:14]:
Yeah. And it's interesting, the whole pivot that we are now having towards the server side as well. Because, like, I've worked for Spotify, and I worked for Klarna before Spotify. And in both places, I helped set up the monitoring tools and start collecting RUM data and all of this kind of stuff. And for CSR, one of the strategies that people have is normally code splitting. That's pretty much one of the few things you can do to try to improve that first paint and the LCP. And even that: I ran an experiment once trying to understand, especially for Klarna, when I was working at Klarna, I was trying to build benchmarks on how we, as a third party, can make sure to affect the loading of the site we're integrated with, you know, the least. How can you make sure that you are not the one who is causing the harm? And I ran an experiment on doing code splitting.
Vinicius Dallacqua [00:17:14]:
And back then, mind you, it was, gosh, it was, like, 2016... 2018, I think, or 2017. So I did, like, full code splitting, kind of, like, manual code splitting, splitting chunks everywhere and, like, lazy loading some stuff. And back then, it was kind of a revelation. Nowadays, I think more people understand why, but when you do that many chunks, things can get very congested when you're loading websites and doing a lot of JavaScript in small chunks at a time. You lose a lot in compression size as well, and you're also gonna be hogging, you know, CPU time. And back then, we weren't even using HTTP/2, if I remember correctly, even though it was already available. So, you know, the whole critical render path was absolutely destroyed and congested.
Vinicius Dallacqua [00:18:06]:
And I think back then, we didn't even have the preload scanner tricks to, like, hack at priority and all this kind of stuff. Yeah. So back then, it was a revelation: yeah, you can have too much of a good thing, and then you start working with prioritization of chunks and this kind of stuff.
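A minimal sketch of the route-level code splitting Vinicius describes, using dynamic import() (module paths and exports are hypothetical). The lesson from his story is to split at meaningful boundaries rather than into hundreds of tiny chunks, since small chunks compress worse and congest the network:

```js
// Route-level code splitting: each page's code is fetched only when needed.
const routes = {
  '/':         () => import('./pages/home.js'),     // hypothetical modules
  '/checkout': () => import('./pages/checkout.js'),
};

async function navigate(path) {
  const load = routes[path] ?? routes['/'];
  const page = await load(); // chunk downloaded on first use, cached after
  page.render(document.getElementById('app')); // assumes a render() export
}

// Optionally warm the cache for a likely next route without blocking:
document.querySelector('a[href="/checkout"]')
  ?.addEventListener('mouseover', () => routes['/checkout'](), { once: true });

navigate(location.pathname);
```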
Dan Shappir [00:18:24]:
Yeah. We actually had Robin Marx on recently to talk about this whole network thing and how it often behaves in a way that's not expected. And to bring it to a more general aspect, and kind of related to the main topic that we've yet to focus on: one of the main things regarding performance that I always say is that you've got to measure. You've got to measure in order to decide what to focus on. And after you make a change, you've got to measure to verify that the change you made actually improves things and doesn't potentially even degrade them. And I can give a case in point: if your CSS is small, a lot of times you will hear that you should inline that CSS into your HTML to avoid the extra round trip to bring the CSS, because CSS is render blocking. So you want to get it down as quickly as possible.
Dan Shappir [00:19:28]:
And I've seen cases where inlining the CSS actually degraded performance. And, you know, without going into the details of why, the fact is that once you see that it's actually degraded, you roll it back. Yes.
Vinicius Dallacqua [00:19:49]:
And the whole measuring thing: I always mention, whenever you start working with performance, you always start from the data. If you don't have a good collection story, a good data story, you have to start collecting data, from real users that is. Right? Because lab data doesn't really tell you the whole story, as you've been breaking down earlier. So you need to understand what the actual user experience out there is, and you need to understand even what kind of data you wanna collect, which kind of brings us into the topic we've been discussing as well in the article. The whole thing of understanding your product, where you wanna collect data and what you are collecting, and, you know, building the story around performance from that perspective first, instead of just trying to micro optimize first, is how you're gonna make sure to deliver impactful results to your users.
Charles Max Wood [00:20:43]:
So yeah. I mean, I read part of the article, and what I'm wondering, you know, just jumping in then: you talked in your article about lab versus RUM and things like that, and how to measure it. But if I'm really new to this, right, and, you know, Dan mentioned having measurements and knowing what your numbers are so that you can verify that you had the effect that you wanted: how do you start gathering that? I'm assuming that's kinda your first step, right, gathering that information, whether it's in a lab setting or a RUM setting.
Dan Shappir [00:21:20]:
Yeah. I mean, very often, if you're engaged in a performance related project, you'll be kind of under the gun to show results. And it's kind of tempting to start optimizing things. But whenever I was engaged in projects like that, I quickly pushed back and said, no, we've got to get the data first. We've got to have the graphs. We've got to have the measurements. Because otherwise, you're just working blind. And if you want an extra incentive: when it comes time for your annual review or semi-annual review and you want to prove your worth to the company, having a graph that shows, like, this is what I did.
Dan Shappir [00:22:16]:
It's a line that goes up, yes, or down, depending on what you're actually graphing. That is a great thing to have when you're trying to get a raise or something like that. So I literally, on various occasions, pushed back against management who were trying to get me to start optimizing before we actually had good measurements in place. And to your question, Chuck, we've had some guests on the show to talk about this. You know, these days, there are 2 really good ways that I have to mention to get good metrics. One is basically to just use the Google Search Console. In the Google Search Console, they have a Core Web Vitals panel, and you can get, you know, information about pages that have performance issues. It's not an ideal, optimal RUM solution. You know, for example, they average the results over a 28-day period, effectively a month.
Dan Shappir [00:23:23]:
So improvements you make will take time to show, and likewise, degradations will take time until they actually manifest themselves. But, you know, it's a good starting point. Another good starting point, like Vinicius mentioned before: there are a lot of third-party tools and services out there that are fairly straightforward to integrate into your website. Some of them are even, like, partially free. And, you know, we've had people from Sentry; they have a RUM solution.
Dan Shappir [00:24:00]:
We've had people from Raygun. We've had people from Akamai; they have mPulse. There are a lot of great tools out there that are fairly straightforward to integrate and that you can start getting data from. You know? To your point.
Vinicius Dallacqua [00:24:21]:
Yeah. You do have a lot of options, like DebugBear and RUMvision. You also have SpeedCurve. But there is, there
Steve Edwards [00:24:28]:
so real quick, since we're talking about tools: a lot of these tools are basically dumping JavaScript into your page, right? I know Google Analytics is a classic. So how many times do you see the tool that you're putting in to measure performance hurting your performance, because of everything it's putting in your site to measure performance?
Vinicius Dallacqua [00:24:49]:
Now that you ask, I have heard horror stories with OTel integrations
Dan Shappir [00:24:56]:
name names.
Vinicius Dallacqua [00:24:58]:
Yeah, I'm not gonna name any names, but I have heard interesting stories when trying to implement OpenTelemetry. And that's one of the examples where it can go pretty bad in a way, when you're trying to measure.
Steve Edwards [00:25:15]:
Yeah. I know. I remember Netlify had a service that you could pay for; I haven't seen it in a while, where they would run stuff like that on the server. So, you know, you're not dumping JavaScript into your page. I never used it. I just remember reading about it, seeing it as an alternative.
Charles Max Wood [00:25:29]:
Most of the back ends have services like that too. And, sure, they don't give you, like, the Core Web Vitals. Right? Because they're not
Dan Shappir [00:25:36]:
Yeah. They're front end things, not back end.
Charles Max Wood [00:25:38]:
Yeah. Because they're front-end measurements, not back-end measurements. But
Vinicius Dallacqua [00:25:41]:
This is one of the
Charles Max Wood [00:25:42]:
They give you some information about how long it takes to
Dan Shappir [00:25:45]:
get the data out. Anyway, I will say this, you know: like Vinicius mentioned, you kinda need to be careful. But most of the RUM providers that actually measure Core Web Vitals, you know, they know about Core Web Vitals. They've done work to ensure that whatever they're providing has little or no impact on your score.
Vinicius Dallacqua [00:26:10]:
Exactly. Yeah.
Dan Shappir [00:26:11]:
If one of them does, it will pretty quickly come out. But, you know, Google is your friend. Search what people say about it and, you know, try it out yourself and see what happens.
Vinicius Dallacqua [00:26:23]:
Yeah. And on the whole topic of back end, actually: I wrote an article last year for the Web Performance Calendar about Server Timing, and I think there's a good amount of things you can do to help understand, like, end-to-end tracing, even when it comes to back-end stuff. I mean, we don't have it as well standardized as Web Vitals. But understanding... because what a performance specialist becomes, it comes in 3 parts, right? A salesman; a data analytics person that has to understand what kind of data you're trying to collect, and understand the data you collected in order to visualize it well; and, you know, the engineer side as well. On the back-end part, there is a whole lot of attribution that you can build within the back-end side. Unfortunately not as well standardized as Web Vitals, but that's actually a good opportunity, right, where one can try to define good metrics within the back end.
Vinicius Dallacqua [00:27:21]:
But, like, when you're trying to understand, especially, for instance, now that you're starting to have more React being run on the server, right, with the newest versions of Next, the newest version of React, things are kind of being moved a bit more to the back end, and more processing time is gonna be spent in, you know, that TTFB window. And it does have a great amount of benefits that you can get out of it, but some things can be a bit unexpected. Like, you can have some performance degradation as well on some types of websites out there. Because not all websites are trying to deliver the same kind of experience, thus not all parts of Web Vitals matter equally to all of our websites. And so if you are working on something that is very sensitive to that TTFB window, you really wanna make sure to understand, from the server timings, you know, from behind the network curtain, what's going on, and you can collect that data as well.
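A small sketch of the Server-Timing idea Vinicius mentions: the server annotates its response with back-end timings, and RUM code can read them off the navigation entry. The helpers below are hypothetical stand-ins, and note that cross-origin responses also need a Timing-Allow-Origin header:

```js
// Server side (Node): attach back-end timings to the response.
const http = require('node:http');

const loadFromDb = async () => ({ items: [1, 2, 3] });              // placeholder
const renderPage = (data) => `<h1>${data.items.length} items</h1>`; // placeholder

http.createServer(async (req, res) => {
  const t0 = performance.now();
  const data = await loadFromDb();
  const dbMs = (performance.now() - t0).toFixed(1);
  // One entry per phase; browsers expose these on the navigation entry.
  res.setHeader('Server-Timing', `db;dur=${dbMs};desc="DB query"`);
  res.setHeader('Content-Type', 'text/html');
  res.end(renderPage(data));
}).listen(3000);

// Browser side (sketch): read the entries back in your RUM code.
// const [nav] = performance.getEntriesByType('navigation');
// for (const { name, duration, description } of nav.serverTiming ?? []) {
//   console.log(name, duration, description);
// }
```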
Dan Shappir [00:28:24]:
Yeah. And by the way, like I said, we had Robin Marx on the show. And one of the things that came up is basically that if you really care about TTFB, and especially if you've got a global audience, then you probably want to be using a CDN. And then the TTFB of your own server is not irrelevant, but it is, you know, less impactful. Let's put it this way. But I would actually like to pull us back a little bit. Looking at your article, which I have read, one of the things that you wrote about that really resonated with me, because it really matches my own experience, is about getting buy-in from management. Can you talk about that a little bit?
Vinicius Dallacqua [00:29:14]:
Yeah. Absolutely. That is the salesman part of the triad. So, in a way, you really have to understand the product you're working with. If you're working in a product team, one of the things that you constantly hear, and I've certainly faced it myself when I was working on product teams, is that you have a certain pressure to ship features constantly. So sometimes, some more than others, you will have that pressure looming, and for trying to solve backlog stuff, which performance in most cases kinda falls under, it's hard to get the buy-in coming from management. So one way you can start doing it is, of course, as we mentioned, collect data. And with that data you collect, you wanna try to understand correlations between, you know, pain points of your users and how your product is performing. Because when you're trying to do work within performance, you really want to understand which part of the application to work on so you will deliver the most optimal results for the product.
Vinicius Dallacqua [00:30:25]:
So you're not only trying to cover the engineering side; when getting buy-in, you wanna make sure to cover the product side. You know? Like, shipping good metrics, not only for APMs and for your performance metrics, but also understanding how those will affect your product metrics. So the business side also matters. You have to wear, kind of, both hats and build that bridge between engineering and product. That's how you can strengthen the buy-in you get from management.
Dan Shappir [00:30:55]:
I totally agree with that, and it kinda reminds me of an interesting experience that I had quite a number of years ago, and I won't be naming names. But I remember talking with this person. I was working at a certain company, and I was speaking with one of the people in product. And I asked that person why their product specifications never include any performance requirements. And they answered, first, he said: I don't know how to specify performance requirements. And I get that, and it's become easier these days, again thanks to Google with Core Web Vitals. They made the industry aware of a relatively small set of well defined metrics, so you can tell that person from product: hey, read this article.
Dan Shappir [00:31:56]:
I actually even gave a talk about it at a previous employer, to product people, to kind of teach them about how performance is measured and the impact of performance and whatnot. So that was his first point. And the second point, which really made me laugh, was when he said: but I expect our developers to write the code in the most performant way possible, because that's what developers do. And I really started laughing, because, you know, like you said, we're always in a rush to deliver features. We're always under the gun. We're always facing short time frames and not enough resources. And certain aspects like performance can, you know, go out the window in these types of scenarios. If you're pressured to deliver a feature, you won't really be worrying about the performance impact if you hardly have time to finish the requirements that are actually in the spec.
Dan Shappir [00:33:00]:
And yeah. So that's, from my perspective, why management buy-in is so important. Because very often we hear people saying: hey, can't we do stuff like that grassroots? Can't we introduce quality, like, from the bottom up and stuff like that? And the answer is no. Because if you're going to be engaged in a large scale, ongoing project, it's going to require a lot of resources, time, and effort, and even money if you're, let's say, buying RUM tools and other things. So management needs to be aligned with that. Otherwise, it just can't happen.
Vinicius Dallacqua [00:33:43]:
Absolutely. Absolutely. Yeah. And then it becomes a fight, right, between what your engineering team's interests are and what your product team is trying to ship, and, you know, all the things that product teams have agreed on for quarters. So you need to get a lot more involved with the product side and understand what kind of values matter the most for the product side. And that's why it becomes a salesman pitch, because it's not just blindly trying to... Web Vitals have done wonders as a conversation starter there. Like, I think most people nowadays understand some aspects of Web Vitals. At the very least, when you mention it, that it matters for SEO, that it matters for the product.
Vinicius Dallacqua [00:34:24]:
So that's great. Because I still remember, from my days at Klarna, having to establish, you know, what metrics matter the most for a product; that was an actual task. So many metrics are available to you, and so little knowledge around how they affect different parts of the product. But when it comes to building those kinds of values and translating them to product, it becomes very important to understand the product that you're working on. Because, as I mentioned a bit earlier, not all products will have the same kind of values, and you can't apply all metrics blindly to all products, because some metrics just don't matter as much to certain types of products. For instance, me working now at Volvo Cars, one of the products that I'm currently working very closely with is the configurator that they have. And the configurator as a product has an actual product team that cares for it. And it has very, very different business metrics, product metrics, than one would think about when you think about conversion and all this kind of stuff.
Vinicius Dallacqua [00:35:38]:
So how the product behaves in real time is a lot more important than how it loads. You can do a lot of stuff to trick its loading, but once it is loaded, that's the thing. The INP metric for that product is, like, king. That's the most important part: INP and CLS. So, you know, you need to understand which metrics are gonna deliver the most impact for the type of product you're trying to ship, so you can build, you know, KPIs and SLAs with downstream teams and try to have this kind of consolidated governance model. Or, at the very least, just understand how things impact your application differently. And from that part, within the engineering, you can understand also how to tie those engineering metrics to the product metrics that matter the most, and you can try to build correlations from that.
Dan Shappir [00:36:27]:
I totally agree with what you just said. An interesting case that I had experience with was when I was working at Wix. There was the Wix editor, which you would use to build websites, WYSIWYG, dragging and dropping things around. And you had the actual websites that you built with the Wix editor. For the websites, the most important aspect was the loading performance, as measured by, let's say, LCP, Largest Contentful Paint. But for the editor itself, I mean, obviously you don't want it to take half an hour to load, but most customers didn't care so much about how long it took the editor to load. What was more important for them was that once the editor was loaded, it was really responsive and snappy.
Dan Shappir [00:37:21]:
Mhmm. And when you drag things around, they would drag smoothly and not, like, you know, with this jitter, or trailing after the cursor, or stuff like that. So you really need to be cognizant about what it is that is important to your users in the context of performance, and what it is that you want to optimize. And again, going back to that aspect of getting buy-in from management that it's actually worthwhile to invest the effort. Because one of the things that I like to say is that performance is a journey, not a destination. It's not like I'm going to invest a little bit of time and effort, get our performance house in order, and then we can forget about it forever because it's done. That's not how it works. You need to put systems in place to identify regressions, because regressions will happen.
Dan Shappir [00:38:13]:
And then when you detect a regression, you're going to have to invest effort in fixing that regression. So it's an ongoing effort. And by the way, in this context, I'm going to post a good link. So, again, thank you to the people at Google: their web.dev website has a section called case studies, where they post a lot of case studies about how improving performance has impacted a variety of businesses and products. So you can actually go in there, find services that are similar to what you're building, and then be able to bring that to management and say: here, you know, this company X does something that's similar to what we're doing. This is how improving performance has impacted their business, their bottom line, and this is why it's worthwhile for us to invest this effort as well. Yeah.
Vinicius Dallacqua [00:39:11]:
Yeah. Absolutely. And, you know, it's funny: back in the day, I actually wrote an article, from that time, that I'm posting here in the chat, about building performance monitoring; that was for lab data. It was built before the time when we had CI/CD integration with Lighthouse, so I was trying to integrate Lighthouse into CI/CD manually and build my own CI/CD server with Lighthouse. One of the things that Lighthouse allowed, and I think it still allows us to do, I haven't dug into the Lighthouse source code for quite some time, was to define different weights for different metrics.
Vinicius Dallacqua [00:39:50]:
So when you get a Lighthouse score from within your CI/CD, you can define, for instance, if INP matters more for your application, you can customize the weights of the score you get from Lighthouse based on that. At least you could back then. I'm not entirely sure if you still can, because I haven't dug into the source code for some time. But it's one of the things that back then
Dan Shappir [00:40:16]:
It's open source, so you can probably do whatever you want if you put in
Vinicius Dallacqua [00:40:19]:
Yeah. True. If you put in the effort. But, like, back then, when I was doing that kind of integration, defining, for instance... back then, gosh, one of the metrics that we used to have. I mean, LCP and FCP are still around. So those, as a third party at Klarna, were the ones that mattered the most. So, I don't know.
Vinicius Dallacqua [00:40:38]:
Like, you can fine-tune the scores based on the weights that, well, you know, matter the most for you.
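For the curious, custom Lighthouse configs can still re-weight the category score along these lines. The audit IDs and weights below are illustrative assumptions; check the current Lighthouse scoring docs before copying:

```js
// lighthouse-config.js: sketch of re-weighting the performance score
// for an app where responsiveness matters more than first paint.
module.exports = {
  extends: 'lighthouse:default',
  categories: {
    performance: {
      title: 'Performance',
      auditRefs: [
        { id: 'interactive', weight: 4 },               // shift weight here
        { id: 'total-blocking-time', weight: 4 },       // and here
        { id: 'largest-contentful-paint', weight: 2 },
        { id: 'first-contentful-paint', weight: 1 },
        { id: 'cumulative-layout-shift', weight: 1 },
        { id: 'speed-index', weight: 1 },
      ],
    },
  },
};
// Run with: lighthouse https://example.com --config-path=./lighthouse-config.js
```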
Dan Shappir [00:40:44]:
So we kind of talked about it already, but still: one of the sections in your article is about metrics and KPIs and SLAs. Can you elaborate on this?
Vinicius Dallacqua [00:40:56]:
Yeah. Absolutely. So, if you're trying to establish governance, or if you're trying to get a better understanding of what you're trying to ship: normally, we don't work in isolation. Unless you're working for a very, very small company, you normally have teams either above you or below you. So you are the downstream, or you are working with teams downstream: a graph service, or an API, or a back end. So after you've understood which kinds of metrics matter the most for you and for your product, and you're starting to build correlations, one of the natural next steps is to build KPIs, or key performance indicators, for your application, with a lens of performance, of course, and to build service level agreements, SLAs, with teams that might affect your performance. Anybody who works with these will know them by heart, but it's one of the things that really helps you strengthen your... because it's pretty much what you said: it's not a race.
Vinicius Dallacqua [00:42:06]:
It's a continuous thing, and you will have regressions. And whenever you have regressions, you first of all want to document those; you always want to document both regressions and any performance improvements that you ship, because you wanna understand how things, you know, worked over time or not. And having those SLAs is a good way to communicate downstream and ensure more resilience in the things that you're trying to ship. Because when you work on a big product, some parts of the performance spectrum are not directly coming from you. One of the things, for instance, that we have, and I think most people out there will know this, is GTM. And GTM can really, you know... if you have a marketing team that's pushing a lot of tags on GTM, that can really mess you up on performance, and it's not even coming from your code base necessarily. And GTM
Charles Max Wood [00:43:00]:
is Google Tag Manager. Right?
Vinicius Dallacqua [00:43:01]:
Google Tag Manager. Yeah. So Google Tag Manager is a brilliant thing. It's a really, really interesting product. But anything can be misused, and such is the nature with GTM in some teams.
Dan Shappir [00:43:13]:
And
Charles Max Wood [00:43:14]:
It's the marketer's junk drawer. It really is. But
Vinicius Dallacqua [00:43:17]:
It is, yeah. And one of the things that I'm currently investing a lot of time in, in the governance model that I'm trying to build, is around GTM as well. But then, the GTM side is a completely different set of skills. So how can you make sure that you build the connection, on things that both of you can align on and understand? That's where the KPIs and SLAs come in. And you can really make sure that you have a proper, well structured process around it.
Dan Shappir [00:43:48]:
Mhmm. By the way, we did have, I think, Adam Bradley from Builder.io, who spoke about a tool that they built called Partytown, which can reduce the impact of third-party scripts and pixels and whatnot by moving them off of the main thread onto a web worker. The big challenge is that it's a pretty manual effort. You know, we spoke about needing management buy-in in order to approve effort; that's one of those things. It's not a drop-in solution that you can just put in your product and watch the improvement. You probably need to do some manual integration work in order to get the real benefits. But when you do, the benefits can be substantial. In the case of Next Insurance, where I previously worked, by putting in Partytown, we were able to dramatically improve our INP.
Dan Shappir [00:44:51]:
The number of pages that have good INP scores went from something like less than half our pages to effectively all our pages.
Charles Max Wood [00:45:04]:
Oh, wow.
Dan Shappir [00:45:04]:
Thanks to using Partytown. Yeah. Because, unfortunately, a lot of these third-party scripts can have significant impact on the responsiveness of the website. They run a lot of JavaScript on the main thread unless, you know, you configure it otherwise. And that impacts how responsive the page is to user interactions, unfortunately.
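As a concrete sketch of the GTM case (this follows Partytown's documented forwarding pattern, but treat the details as assumptions and test the integration carefully):

```js
// Configure Partytown (@builder.io/partytown) before its snippet loads.
// Main-thread calls to dataLayer.push get proxied into the web worker,
// so GTM's JavaScript runs off the main thread.
window.partytown = {
  forward: ['dataLayer.push'],
};
// In the HTML, the third-party script opts in via its type attribute:
//   <script type="text/partytown"
//           src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXX"></script>
```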
Vinicius Dallacqua [00:45:32]:
Yeah. And that's why building a good attribution model is, like, absolutely fundamental. It's one of the things that I'm obsessing over as of late. Because once you have the metrics up on the monitoring tool, that's a great start; you're off to a great start. You're starting to understand, you know, the nature of the product, and you're starting to understand how the things you do impact it. But then again, as previously mentioned, performance degradation can come from different, you know, parts of the web. DebugBear actually released a great, great article just a few weeks ago about the impact of Chrome extensions. So if you have different extensions within Google Chrome, they can also affect your performance, some more than others.
Vinicius Dallacqua [00:46:18]:
And there's a brilliant article there. I'll see if I can find it before this recording ends, but it's a really good example of how sometimes you can have performance problems showing up in your real user metrics coming from different kinds of things, even out of your control. So building good attribution means not only having the metrics; you know, web-vitals has an actual attribution build that you can use, where you can send more information about each one of the metrics you're collecting. So if you have, you know, INP, it will show you details, like the Long Animation Frames data, and you can have better intelligence on where it is coming from. Is it coming from a third party? Is it coming from a Chrome extension? Is it coming from Google Tag Manager or something like that? And that can give you even more power when we're talking about RUM.
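Here's roughly what that attribution build looks like in practice (a sketch; the field names follow recent web-vitals versions, so verify against the docs for the version you ship):

```js
// The attribution build adds a breakdown of where each metric came from.
import { onINP } from 'web-vitals/attribution';

onINP(({ value, attribution }) => {
  console.log('INP', value, 'ms');
  console.log('target:', attribution.interactionTarget); // selector of the element interacted with
  // Long Animation Frame entries point at the scripts that blocked the frame:
  for (const loaf of attribution.longAnimationFrameEntries ?? []) {
    for (const script of loaf.scripts ?? []) {
      console.log('blocking script:', script.sourceURL, script.duration, 'ms');
    }
  }
});
```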
Dan Shappir [00:47:13]:
Yeah. A lot of the tools provided by RUM providers have kind of attribution capabilities built in. So, you know, you don't necessarily need to be an expert. But that's actually another point that I want to mention: it does require some expertise. Like, if you want to do some performance improvements on your website, it does require some know-how. So either, you know, you find it really interesting and you invest the time and effort and learn it, or alternatively, you can bring in an expert. Maybe hire an expert if you're large enough.
Dan Shappir [00:47:56]:
If you're not, you can bring in a consultant, somebody like Harry Roberts, who we mentioned before we started recording; we actually had him as a guest on the show. You can bring on somebody like that to do an audit for you and kind of point you in the right direction.
Charles Max Wood [00:48:15]:
Yeah. I'm curious: you mentioned Chrome extensions as an example, and you have attribution that tells you, when you gather the information: hey, look, people who have this extension are having this impact, right, one way or the other. What do you do about it? I mean, can you just add it to your CI for some of your tests, to check for the performance once you've set those performance standards? Do you put a banner up that detects: hey, you're running this Chrome extension, and it impacts, you know, it may slow down your thing?
Charles Max Wood [00:48:51]:
I mean yeah. Once you have it, what do you do with it?
Dan Shappir [00:48:55]:
Yeah. No. So first of all, yes, you can put it as part of your testing environment, for sure. The question is: is the slowdown happening just because of the fact that the extension is there? Like, is this extension just doing a generally bad thing? Or is it some sort of bad interaction, where, like, you know, your page specifically doesn't work well with this extension for some reason? Now, if it's the latter, and it's a popular extension, then you may want to invest effort in trying to fix it. Like, obviously, you can't fix the extension, but you can try to fix your stuff. If it's the former, if it's just a problem with the extension, basically, it is what it is. You may want to put some notice on your website.
Dan Shappir [00:49:56]:
I wouldn't pop up an alert. That seems overly intrusive. Yep. But, like, you know, having something in your knowledge base, so at least your support people would be cognizant of the problem. And if a customer complains, then they can ask them, you know: are you using this extension? And if so, please don't. Mhmm. But it's, like, a fact of
Vinicius Dallacqua [00:50:25]:
life. Right. Yeah. So on that, we have a great talk from last year's perf.now() by Tim Vereecke. The name of the talk is Noise Cancelling RUM, or something like that. I posted a link here in the chat, and it talks about what you can do to eliminate noise from your metrics. And that's why having this kind of attribution and understanding attribution... and you're very right: this is one of the problems when it comes to performance as a subject, because it is a somewhat steep vertical for people to get into.
Vinicius Dallacqua [00:51:08]:
But one of the things you can do, once you start understanding these kinds of things, is to understand also how to remove noise from your metrics, and understand better how your actual percentiles are split up, understanding the distributions. Because building better attribution for a product comes from, bringing it back to the whole point, tying up with your product values. Right? It comes down to understanding what kinds of markets matter the most for your product. So you don't really wanna measure globally, because the bigger the metrics scope you're trying to have, the harder it is for you to find attributions, or the harder it is for you to understand them, because you're trying to deal with something that is too large of a context. And it's a lot easier when you slice that data up: like, let's say, for a certain product, certain markets are a lot more important, and for that product, certain pages or key functionality is a lot more important. So you definitely wanna slice your analytics and your data and your raw metrics in such a way that you can cut through the noise that you gather globally, into something that builds better attribution models and better correlations out of it.
Vinicius Dallacqua [00:52:23]:
Removing this kind of noise from your metrics is also a really, really good way to avoid
Dan Shappir [00:52:28]:
I'll give a different example. You know, it might be that certain geos have performance issues. So, for example, again talking about Wix, a global company: we saw very different performance profiles, for example, for North America and for South America. On the other hand, you need to be careful with these sorts of things. Because, for example, you might say: hey, something really bad happened in Colombia; for some reason, I'm seeing a significant degradation. And then you realize: hey, I only have, like, 8 sessions a day. So just one bad session can skew my data for that geo.
Dan Shappir [00:53:19]:
So you kinda need to be careful. And again, this is something that you also talked about in the article, about how you need to be careful with blind spots. Like, how, you know, some data can hide or obscure other data and cause you to miss important aspects.
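A tiny sketch of the guard Dan describes: compute p75 per segment, but skip segments with too few samples to trust. The sample shape and threshold here are assumptions about your RUM pipeline:

```js
// samples: e.g. [{ geo: 'CO', lcp: 2400 }, ...] from your RUM collection.
function p75ByGeo(samples, minSamples = 100) {
  const byGeo = new Map();
  for (const s of samples) {
    if (!byGeo.has(s.geo)) byGeo.set(s.geo, []);
    byGeo.get(s.geo).push(s.lcp);
  }
  const out = {};
  for (const [geo, values] of byGeo) {
    if (values.length < minSamples) continue; // too noisy to trust
    values.sort((a, b) => a - b);
    out[geo] = values[Math.floor(values.length * 0.75)];
  }
  return out;
}
```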
Vinicius Dallacqua [00:53:39]:
Yep. Yeah. It's like, within the article I bring up that same talk by Tim. And it is one of the things that comes, again, from the triad. Right? That's the data analytics part, where you need to understand how to analyze the data you have, because otherwise you might be looking at too much noise, and then it's harder to work with that data. Right? Like, then the performance work that you're trying to carve out of the data might even be misaligned.
Vinicius Dallacqua [00:54:14]:
So when you're trying to ship performance improvements, you might not even see the needle moving properly, because you're just being affected by noisy RUM data.
Dan Shappir [00:54:22]:
I will say one thing, and I think we can summarize this particular aspect with it: I'm also a member of the W3C Web Performance Working Group. And there's always tension there with regard to attribution, because you've got the RUM providers, who are pushing to get ever more accurate and detailed attribution data that they can externalize and expose to their customers. And on the other hand, you've got the browser manufacturers, who are concerned about privacy issues. Because, you know, too much attribution opens the door to all sorts of fingerprinting techniques and stuff like that. So you kind of need to strike a good balance there. Or, put another way: yet another reason why we can't have nice things.
Vinicius Dallacqua [00:55:13]:
Yeah. That's a good point.
Charles Max Wood [00:55:14]:
Well, they're both valid concerns. So, yeah, it is a valid concern.
Vinicius Dallacqua [00:55:19]:
It is a very good concern.
Charles Max Wood [00:55:20]:
They're both valid concerns. I really want to know what's causing my headaches, but, yeah, I also want to protect my users.
Vinicius Dallacqua [00:55:29]:
Yeah. There are good ways to anonymize that data, and most collection tools will make sure to do so. But if you're building your own collection tool, that's a very valid concern to have.
Dan Shappir [00:55:40]:
Yeah. And again, it's also a question for the browser manufacturers whether or not to provide certain APIs. Because, you know, obviously, if you could ensure that only good actors actually use those APIs, that would be wonderful. But, unfortunately, the world doesn't work that way.
Vinicius Dallacqua [00:55:59]:
Yeah. Absolutely.
Dan Shappir [00:56:01]:
I think we're nearing the end of our show. Is there anything in particular that you wanted to mention that we haven't mentioned so far?
Vinicius Dallacqua [00:56:11]:
I mean, I can basically summarize it: working with performance in a company, large or small, starts with data. And understanding that data is naturally the second step. Because a lot of times, you're very eager to get down to actual code and try and move that needle. Sometimes it can be a bit too early, and if you try to work on something that is not gonna shift the actual application performance in the right direction, and not affect the right product metrics, you know, it's very hard to get the buy-in again, and it just gets harder to get performance work off the ground. Now, of course, if you're working within a mature engineering organization, that can be a different story, but most people are not at that kind of stage. Right? So working with attribution first, and data, and analyzing it, might seem like the most boring part, but it's the most important one as well. So make sure to just understand your data before trying to work on performance.
Charles Max Wood [00:57:14]:
Good deal. Alright. Well, let's go ahead and do our picks. Before we do that, though, where do people find you on the Internet if they're thinking, oh, I wanna know more?
Vinicius Dallacqua [00:57:26]:
Absolutely. I am mostly nowadays on Twitter.
Dan Shappir [00:57:31]:
X. X. It's X.
Charles Max Wood [00:57:34]:
Everybody knows what you mean when you say Twitter. I'm joking.
Vinicius Dallacqua [00:57:38]:
Yeah. It's just... I'm not yet... most of the time, I forget it's been renamed, to be honest with you. But
Dan Shappir [00:57:45]:
I think everybody would prefer to forget.
Vinicius Dallacqua [00:57:49]:
There I am: webtwitr. And, I mean, I have a pretty easy name to look up on LinkedIn, but I've spent most of my time talking about performance on Twitter. That's where most people can find me: webtwitr, spelled t-w-i-t-r. It's a bit shortened.
Charles Max Wood [00:58:12]:
Cool. Alright. Well, let's go ahead and do our picks. Steve, you wanna start us off?
Steve Edwards [00:58:19]:
Going for the high points first. Okay. That's right. Before I get to the dad jokes of the week, I do have one pick. This has been around for a while, and I've heard other people talk about it extensively, but it's something I finally started digging into, and that's the Warp terminal. For years, I've always just used the built-in terminal app that comes on the Mac. And, you know, I've gotten fairly good at switching between tabs, and I've got tab groups set up and all that. But I heard a lot about it, so I started playing around with the Warp terminal.
Steve Edwards [00:58:54]:
And it's really, really slick. Makes it very easy to find things. There are all kinds of enhancements. You can create custom workflows, which are sort of like custom commands where you pass in parameters. And it's pretty easy to customize your prompt to show, like, what git branch you're on and what the difference is between you and your remote in terms of commits, colors. There are all kinds of customizations you can do. I know a lot of people use iTerm2, and I've used that in the past. But I really like the Warp terminal, just from what I've played around with.
Steve Edwards [00:59:28]:
You can find it at warp.dev, on the Internet.
Charles Max Wood [00:59:32]:
I'm gonna jump in here, because Warp is a recent thing that I've picked up as well, and, yeah, all the things that Steve said. It also has some AI, so you can basically tell it what you want it to do, kind of like an AI prompt. And sometimes it works, and sometimes it doesn't work quite how you want. Sometimes it also interprets my git commands, or my commands within, like, the Rails CLI, and it'll say, oh, what you're saying is you wanna do this. And I'm like, yes, because that is literally the command to do that. So it's not perfect.
Charles Max Wood [01:00:16]:
But, you know, sometimes it's handy when you're trying to remember the exact combination of sed, awk, and grep that you want in order to get the thing.
Steve Edwards [01:00:24]:
Yeah. It also has some team stuff, where you can use it with teams and share workflows and share output.
Charles Max Wood [01:00:31]:
I haven't used any of that.
Steve Edwards [01:00:33]:
I haven't had a chance to play with any of that either. But the downside that some people don't like is that you have to create an account, and they do say why you need to do that. And so some people don't like that. But, anyway, it's really pretty cool. I really like it.
Charles Max Wood [01:00:45]:
Yep.
Steve Edwards [01:00:46]:
Alright. Dad jokes of the week. So I've never understood why people wear black when they wanna be sneaky. They should just wear leather armor because it's made of hide. Didn't get a smile out of Dan. Alright. Gotta keep working here.
Charles Max Wood [01:01:05]:
I was gonna say somebody laughed. Don't laugh. It only encourages him.
Steve Edwards [01:01:11]:
So I took my kids to see Disney on ice, and it really sucked. He was just some old dead guy in a box.
Dan Shappir [01:01:20]:
I had to... I actually responded to that.
Steve Edwards [01:01:23]:
Yes. I love it.
Dan Shappir [01:01:24]:
What I said was that I actually expected a dead mouse rather than a dead...
Vinicius Dallacqua [01:01:29]:
Oh, true. But
Steve Edwards [01:01:29]:
Well, if you noticed, on Twitter somebody responded with an actual cartoon of that same joke, and it shows these people looking... here's Walt Disney in a clear coffin on ice. It's really pretty morbid, but it's very funny too. And then finally: what famous military general was killed by a cannon? Napoleon Blown-apart. Those are my jokes of the week.
Charles Max Wood [01:01:59]:
Alright. Well, thank you for elevating my week. Dan, what are your picks?
Dan Shappir [01:02:06]:
First, I would like to unmute. I would like to remind us all that Napoleon actually started as a gunnery officer, and he got one of his big promotions because, you know, there was this demonstration where they were trying to mob the parliament, and he basically dispersed the crowd by firing the cannons point blank into the crowd. Yeah. That got him to become a general. So, yeah. Anyway, so much for being blown apart by Bonaparte. Anyway, I don't have that much in the way of picks this week.
Dan Shappir [01:02:54]:
So one thing that I have is that Matt Pocock, who we've had as a guest on our show to talk about TypeScript, has written a book about, surprise, surprise, TypeScript. And since I consider Matt to be one of the foremost experts on this topic, this is a really good thing, especially given that the online version is available for free. So I'm actually going to share the link here right now, and we'll also obviously put it in the show notes. And the dog is barking, so hopefully you can hear what I'm actually saying. But here's the link to that book online. Highly recommended. Making effective use of TypeScript is incredibly useful and incredibly powerful, and it's something that's kind of expected from web developers these days. You know, I think it's a fair statement to say that almost all of us have moved off of, you know, untyped JavaScript and onto TypeScript. So that would be my first pick.
Dan Shappir [01:04:11]:
My second pick is the fact that Google has kind of integrated AI into the browser, sort of. They're actually experimenting with window.ai. Last time I checked, it was only in the beta version of Chrome and not in the release version, and you needed to enable a flag to get it to work. But basically, what they've done is bake Gemini Nano, or whatever they call it, into the browser itself, so that you've got a large language model, a small large language model, that's effectively baked into the browser and runs locally. So you don't need to be concerned with paying the bills for running it in the cloud, which, it turns out, is how Nvidia is making a ton of money while all the AI startups are burning through cash by paying all their money to Nvidia. So being able to run models locally is a very interesting technological development, and it will be interesting to see what happens with that and how various websites are able to leverage this technology once it becomes mainstream and open to all. Yeah.
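For the curious, this is roughly the shape the experimental API had in Chrome's pre-release channels around the time of recording, behind a flag. The names have been in flux, so treat this as a sketch of an unstable API rather than something to rely on:

```js
// Assumed shape of the early experimental Prompt API in Chrome Canary;
// it required enabling a flag, and the names may have changed since.
if (window.ai) {
  const availability = await window.ai.canCreateTextSession();
  if (availability === 'readily') {
    const session = await window.ai.createTextSession();
    // Runs Gemini Nano on-device; no network call, no per-token cloud bill.
    const reply = await session.prompt('Explain LCP in one sentence.');
    console.log(reply);
    session.destroy();
  }
}
```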
Steve Edwards [01:05:52]:
I mean, considering how well the original Gemini rollout went, I think this should go very smoothly, without any hitches. No bias or anything.
Dan Shappir [01:06:01]:
Exactly. And, also, there's the fact that it's a model that's not optimized for any particular use case. It's kind of a general model. So there's also that aspect to factor in. I don't know what you want to do with AI, but, you know, it does open up some interesting opportunities for user interactions in websites. Let's put it this way.
Vinicius Dallacqua [01:06:28]:
Yeah. Because it's Gemini Nano or whatever, it has a much smaller context than what it was trained on, but it's still very good as a transformer, as an LLM. If you are curious about that stuff, I definitely suggest following Jason Mayes, the Web AI lead at Google. He has a lot of great demos on what you can do with that stuff, because it's not only the upcoming web AI APIs; you can also load models with WebAssembly and WebGPU and use your own models locally. So there's a lot of good stuff you can do there. Yeah.
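One concrete example of that approach, assuming the tooling that was popular at the time of recording, is Transformers.js (the @xenova/transformers package), which runs ONNX models via WebAssembly and, where supported, WebGPU. A minimal sketch:

```js
// Dynamic import keeps the library (and the model download it triggers)
// off the critical loading path.
const { pipeline } = await import('@xenova/transformers');

// Downloads and caches the default model on first use, then runs fully
// on-device; no inference traffic leaves the browser.
const classify = await pipeline('sentiment-analysis');
console.log(await classify('This episode was great!'));
// e.g. [{ label: 'POSITIVE', score: 0.99 }]
```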
Dan Shappir [01:07:05]:
The only downside there is that running the model locally, like I said, solves a lot of the cost issues, but these models, even the smaller ones, can be fairly large.
Vinicius Dallacqua [01:07:19]:
So downloading
Dan Shappir [01:07:20]:
so downloading them locally, that can, you know... we were talking about performance. That could be a performance issue.
Steve Edwards [01:07:29]:
I was gonna say that could hurt your performance.
Vinicius Dallacqua [01:07:31]:
It can hurt your performance. But then again, it depends on the nature of your product. Right? Because if you're, for instance, working with something like Photoshop on the web, they kinda do that. And it's not about the loading times but the execution times in that case.
Charles Max Wood [01:07:44]:
Yep.
Vinicius Dallacqua [01:07:45]:
In which case you actually gain a significant boost. Right? Because it's local, so the latency is nonexistent. Well, and then I guess it's my picks.
Charles Max Wood [01:07:57]:
Yeah. You can go, and then I'll go. Yes. So go ahead.
Vinicius Dallacqua [01:08:00]:
Okay. So for my picks, I have two things, since we're on the topic of AI and ML. I've started to dive into ML training and machine learning, building models, and this kind of stuff. And I actually chose to do it with Elixir as the language. It's a brilliant language. José is a fellow Brazilian; I'm also from Brazil. And it's a really nice functional language to get started with.
Vinicius Dallacqua [01:08:25]:
And one of the things is that, especially coming from JavaScript, there's not as much tooling fatigue, because within Elixir things are kind of, like, very coherent, and all the options are kinda built by the same people. So everything kinda speaks the same language, and things flow very easily. It's not like, for instance, starting in Python and then choosing an ML library to build things on top of, with compatibility issues and all that kind of stuff. So that's my technical pick. Nontechnical: I'm finishing the last season of Sweet Tooth. If you haven't watched that series, it's very good to watch with the kids, if the kids are old enough.
Vinicius Dallacqua [01:09:06]:
It's a very nice series to watch. It kind of tries to be lighthearted in an apocalyptic setting, which is an interesting thing.
Charles Max Wood [01:09:15]:
Awesome. Well, I'm gonna throw in a couple of picks of my own. First of all, I just wanna point out, if you are interested in Elixir, we have an Elixir podcast. It's called Elixir Mix. I am not on that one, but those guys are awesome, and they cover things very, very well. I've also talked to José on multiple occasions, from when he was in the Ruby community and then when he started doing Elixir stuff. So it's a very, very cool language. And if you're looking
Dan Shappir [01:09:42]:
Whenever I hear about Elixir, it's usually in the context of how great it is. And to my shame, I've yet to learn it. Maybe I should start listening to that podcast then.
Charles Max Wood [01:09:52]:
Yeah. Yeah. It's
Vinicius Dallacqua [01:09:53]:
a very, very not-so-hard language to get into. Even though it's fully functional, it's a very, very easy programming language to learn and get into.
Charles Max Wood [01:10:02]:
Very approachable. I'm gonna start out with a game pick. I always pick a board game or a card game. This week it's a card game called 6 Nimmt. That's N-I-M-M-T. Now, when I looked it up on BoardGameGeek, it says that it is the same game as Take 5. It has a bunch of other names; one looks like it's Italian. Category 5, Take 6, blah blah blah blah.
Charles Max Wood [01:10:30]:
Anyway, BoardGameGeek rates it, or has a weight for it, of 1.19, and says ages 8 and older can play it, which is probably pretty accurate. Really quickly, what it is: everybody plays a card face down. So you put out 4 cards; there are 4 piles. Then you put out your card face down, and then you go from lowest to highest, and you put the card onto whatever pile it would go on. Right? So it's whatever number is below it and nearest to it. Right? So, you know, if there's a 34 out there and you put out a 35, it's gonna go on the 34.
Charles Max Wood [01:11:11]:
If you put out a 40, the 40 will go on it, as long as there's nothing between 35 and 40 out there. Otherwise, it'll go on the other card. Right? And it's always the highest card in the pile that you're playing on. When you play the 6th card on a pile, you get all 5 cards in the pile, and that 6th card that you played becomes the start of the next pile. The different cards are worth different points. And the only other rule, I guess, is that if you play a number that is lower than every pile out there, then you just take whichever pile you want and replace it with your card. So it's like you played the 6th card, but a lot of times there's a 1- or 2-card pile that's only worth 1 or 2 points. And so you take that pile, because you're trying to get the lowest number of points.
Charles Max Wood [01:11:53]:
You play till somebody gets to 66, and the game's over. There, now you know how to play it. It was a lot of fun. The cards are numbered 1 to 104, I think. And so you just deal out 10 cards to every player, and you put the rest off to the side. So some of the numbers aren't gonna be out there. But, anyway, it was a lot of fun.
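For fun, the placement rule as described translates almost directly to code; a sketch of just that rule (pile-taking and scoring are reduced to a return value):

```js
// Sketch of the 6 Nimmt placement rule described above. Each pile is an
// array whose last element is the pile's top (highest) card.
function choosePile(piles, card) {
  let best = -1;
  piles.forEach((pile, i) => {
    const top = pile[pile.length - 1];
    const bestTop = best === -1 ? -Infinity : piles[best][piles[best].length - 1];
    // The card goes on the pile whose top is the highest value still below it.
    if (top < card && top > bestTop) best = i;
  });
  return best; // -1: lower than every pile, so you take a pile of your choice
}

const piles = [[34], [12], [57], [80]];
console.log(choosePile(piles, 35)); // 0: lands on the 34
console.log(choosePile(piles, 40)); // 0: still the 34, nothing between 35 and 40
console.log(choosePile(piles, 5));  // -1: lower than everything, take a pile
```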
Charles Max Wood [01:12:15]:
I think we played it in, what, 10, 20 minutes? Yeah. I mean, it's a real quick play. But if you're looking for a fast, fun game, 6 Nimmt, or Take 5, or whatever it's called where you live, that was a super fun game. I'm also gonna pile on the AI stuff. So I've started getting into writing AI code. I've been playing with it in both Ruby and JavaScript. I have to say that a lot of the tools for AI are actually nicer in Ruby. That might surprise some people.
Charles Max Wood [01:12:45]:
But, anyway, what I'm looking to do, so keep an eye out for this: I bought the domains AI for Ruby, f-o-r, Ruby, and AI for JavaScript. And I'm gonna put together a newsletter for both of those topics. I'm also going to be putting on a summit, probably the week after Labor Day here in the US, which is the first Monday of September. So it'll probably be Friday-Saturday. I'll do the Ruby one first and then the JavaScript one right after that. And then 2 weeks after that, so toward the end of September, beginning of October, I'm gonna do a 3-month boot camp, and I'm gonna be teaching people how to do AI.
Charles Max Wood [01:13:33]:
So we're gonna get into prompt engineering and, you know, building chatbots. Right. We're gonna be using the APIs. And, you know, we might do some model training, but it's gonna be fairly lightweight, you know, so you don't have to know the math. You don't have to have a deep understanding of the models. This is how you add AI to your app, you know, your web app, basically. But it could be a desktop app or something else if you're writing it in Ruby or JavaScript. And if your primary language is a different language, maybe it's Elixir, I'm almost certain they have libraries that attach to the same stuff, and so you can probably figure it out. And I may or may not be able to point you in the right direction.
Charles Max Wood [01:14:16]:
But this is what we're gonna do. And so, yeah: AI for JavaScript and AI for Ruby, dot com. If you go to those websites, you'll be able to sign up for the newsletter for free. You'll be able to sign up for the summit for free. If you want the videos after the summit, that's what you're paying for on those. And then the boot camp will cost money as well. But I wanna talk to people and make sure that they're in a position to actually take advantage of the boot camp, because it's not gonna be a cheap thing.
Charles Max Wood [01:14:47]:
So, anyway, that's what I'm working on these days. And, really, I'm digging it. It's fun, fun, fun stuff. Another pick that I have: this is a Ruby Rogues episode. I'll put a link to it in the show notes. We recently talked to Obie Fernandez, and he has a book about building AI stuff. It's language agnostic. I don't think we've released the Ruby Rogues episode yet.
Charles Max Wood [01:15:15]:
We might have. But... oh yeah, it is up here. I'll put the... here it is. I'll get the link in here. But, anyway, I'm really, really loving the AI stuff. I think it's amazing. The only thing that I guess JavaScript has that Ruby doesn't is TensorFlow.js. Right? There's no TensorFlow Ruby, so you have to interface through JavaScript or Python.
Charles Max Wood [01:15:48]:
But beyond that, yeah, I'm really enjoying just what you can put together with it. And I think a lot of the capabilities that come out of it go well beyond just the prompt for something like ChatGPT. Obie actually explains it very well. What he's done is he's built, essentially, virtual AI assistants that take the prompt, but then they also have API capabilities into systems that you use, like your email and stuff. And so you can actually write a prompt where it'll go find an email for you, or respond to certain kinds of emails for you, or things like that. And so, you know, you start getting into: okay, I'm not gonna just prompt you to write something out or give me an answer. I'm gonna prompt you to actually go do something useful.
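A rough sketch of that assistant-with-tools pattern, here using the OpenAI Node SDK's function calling as one example. The find_email tool is hypothetical, something you would implement against your own mail API, and the model name is just illustrative:

```js
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await openai.chat.completions.create({
  model: 'gpt-4o', // illustrative; use whatever model you actually have access to
  messages: [{ role: 'user', content: 'Find the last email from my accountant.' }],
  tools: [{
    type: 'function',
    function: {
      name: 'find_email', // hypothetical tool backed by your own mail API
      description: 'Search the user mailbox and return matching messages',
      parameters: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query'],
      },
    },
  }],
});

// If the model decided to call the tool, parse its arguments, run the tool,
// and feed the result back in a follow-up message.
const call = completion.choices[0].message.tool_calls?.[0];
if (call) {
  const { query } = JSON.parse(call.function.arguments);
  console.log('Model wants to search for:', query);
}
```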
Charles Max Wood [01:16:38]:
So, anyway, I have pontificated on that longer than I needed to, and I am very much enjoying what I've got there. One other pick that I have, and this is a technology pick: it's something put out by Basecamp, or 37signals. It's called Turbo Native, and it's essentially a way of wrapping your web applications into a native app that you can deploy to the app stores on Android and iOS. And I have been very, very happy with what I've been able to do with that without having to write a full-on native app. And I can see the pros and cons. Right? I mean, if you're not on the web, it won't work. Right? And, you know, maybe you can use some local storage or things like that to make it do what you want... anyway.
Charles Max Wood [01:17:28]:
But the majority of it just requires you to operate, you know, connected to the Internet, and most people are operating on their phones connected to the Internet anyway. And so, you know, for what that's worth, I'm really, really enjoying that. I'm looking at ways to get my web apps onto other systems beyond phones, too, like the Fire TV Stick and things like that. But the Fire TV Stick has a way of wrapping web apps anyway. And so, anyway, that's just another area that I'm diving into, and I might put together a boot camp or a course on that as well. But, anyway, Turbo Native is my last pick.
Vinicius Dallacqua [01:18:09]:
For AI in Elixir, you have Nx and Bumblebee. So if you're trying to get familiar with that subject, training models and running models, Nx and Bumblebee have got you covered.
Charles Max Wood [01:18:21]:
Cool. I
Dan Shappir [01:18:22]:
just tapped a button on
Charles Max Wood [01:18:23]:
my computer. Oh, there we go. It made the window go away, and I couldn't click on it. Alright. Good deal. Well, thanks for coming. I'm always hesitating to say your name.
Vinicius Dallacqua [01:18:37]:
Vinicius.
Charles Max Wood [01:18:38]:
Vinicius. Thank you for coming. This has been really cool, and I love kinda just getting into: hey, these are the steps and kinda levels to performance, and how to read the data, and then how to make your case with the data.
Vinicius Dallacqua [01:18:52]:
Yeah. My pleasure. Always happy to talk about it.
Charles Max Wood [01:18:56]:
Alright. Well, we'll go ahead and wrap it up here. Until next time, folks. Max out.
Steve Edwards [00:01:28]:
So That's good.
Charles Max Wood [00:01:30]:
Maybe we don't need the rain. I don't know. Anyway, Yeah. We brought you on to talk about performance, and, you shared an article with me before these guys got on. I know you've also been chatting with Dan. So I'm just gonna let you guys take the lead as far as where we go, and then I'll chime in with my, basic questions since I am not a performance expert like you guys are.
Vinicius Dallacqua [00:01:54]:
Yeah. Sounds good. You guys,
Steve Edwards [00:01:55]:
you mean those 2? Because I'm not the performance expert either.
Dan Shappir [00:02:01]:
Yes. You know, performance is kind of important. I don't think everybody can, should or can be an expert, but and I guess this is one of the things that we'll be talking about. It's not something that you can ignore either. Let's put it this way.
Charles Max Wood [00:02:16]:
Right.
Vinicius Dallacqua [00:02:17]:
Yeah. Yeah. It's it's it's one of those topics that it's like the more you know, the more you realize things go deeper, and you can't get pretty deep.
Steve Edwards [00:02:29]:
You know what? Yeah. But one example I've heard frequently mentioned and I thought about it myself is, you know, I'll look at Stack Overflow posts and people talk about, okay. What's the best way to loop through an array that's more performing? Or what's the best way to, you know, handle large datasets in code and and so on and stuff like that. People will obsess over these little things that will increase performance. Then they've got a 6 meg image file downloading on the same page, and
Vinicius Dallacqua [00:02:56]:
I'm Right. Everything anyway. You know? So
Steve Edwards [00:02:58]:
so I I think it's safe to say that, when you're talking performance, it's sort of gotta be comprehensive and not, you know, just focused on, you know, code performance, I think, or bundling
Vinicius Dallacqua [00:03:10]:
of
Steve Edwards [00:03:10]:
code and that kind of stuff.
Dan Shappir [00:03:11]:
It it also needs to serve a purpose at the end of the day, which again is something that I guess we will be talking about. But performance is not is it's not an ego trip. It's not about, you know, bragging rights. It's about serving your customers, your users. So anything that you do needs to be with that front and center. If it actually brings value to your users and your customers or if or doesn't. And if it doesn't, then it's kind of pointless.
Vinicius Dallacqua [00:03:47]:
Yeah. Yeah. Absolutely. And and the whole micro benchmarking thing is is was a very good example. Like, it's very easy to to look into, like, say, things that sound very flamboyant, and you've kinda missed the target where the actual hurt is, which is on the the user's, you know, experience.
Dan Shappir [00:04:09]:
Also, you know, in the context of JavaScript, it's lies, damn lies, and micro benchmarks. Yes.
Vinicius Dallacqua [00:04:18]:
That's a good one.
Dan Shappir [00:04:19]:
Yeah. Because the way that the JavaScript engines work, the modern JavaScript engines and the optimizers they contain, I've read some, quote unquote, horror stories about, you know, people drawing conclusions from micro benchmarks that were effectively not just meaningless, but in fact, wholly misleading. Let's put it this way. Like, people wrote loops, but because it had no side effect as it were, The optimizer kind of optimized the loop into nothing. And then what are you actually even measuring? So so yeah. But but put to put it bluntly, I wish my problems were how fast, you know, some loop in JavaScript works. Although occasionally it does actually happen. And I'll finish with that that this part, I actually contributed back to, the Prometheus client for Node.
Dan Shappir [00:05:29]:
And I don't know if you're familiar with it. It's, it's a it's a system or service for monitoring and alerting and stuff like that. And the optimization was actually an optimization about how to loop and build the response string. Because in that particular case, via profiling, I proved that it actually did make significant difference. So it's it can, but it usually doesn't. Anyway, enough for for my chitter chatter, and now over to you, Vinicius.
Vinicius Dallacqua [00:06:06]:
Yeah. The the I mean, the benchmarks and, it has to serve a purpose. Right? And if you're just benchmark like, if you're micro focusing on things, they might not be painting a good picture for even what you're even trying to measure. This actually brings to a very interesting conversation that I've had a chance to have at JS Nation, in Amsterdam couple of couple of weeks back with Ryan, from SolidJS and Atila. So we were talking about how can one build a well structured benchmark to test, different frameworks on some aspects. Right? And we started talking about how it would be good to have something that can bring towards web vitals because it's something we already kinda have standardized. And and that's how we can also measure actual impact and, like, break down different threshold, like, different kind of common problems where you can try to build correlations and build a good delta between different frameworks. Like, what kind of strategies pays off the most.
Vinicius Dallacqua [00:07:15]:
Right? Because Solid does it one way, React does it about that. So it cannot necessarily measure them apples to apples in a way. So let and so you need to try to establish, like, a good baseline of how can you benchmark those kind of things there. Does in the end, just deliver the kind of similar results, but they'll do it differently. So I'm not really trying to benchmark in a way the framework itself, but different strategies that they use.
Dan Shappir [00:07:38]:
Yeah. It's a it's an interesting conversation. First of all, it's worth mentioning that Ryan is kind of a regular guest here on the podcast. We've had him quite a number of times.
Steve Edwards [00:07:47]:
Mhmm.
Dan Shappir [00:07:48]:
And he's an incredible smart person. Because like you said, he's the creator of solid, solid start. He's also kind of the CEO of signals, Really popularized that. And he's also one of the people most knowledgeable about frameworks in general. I mean, he he won't like his his hobby is to get his hands on, you know, more or less every framework out there and then do all sorts of comparative testing and analysis and whatnot. So, yes, if if anybody is knowledgeable about how to best compare performance in other aspects of frameworks, it's probably Ryan. It's also worth mentioning a tool created by the quick people called mitosis, which you can actually use it to compile, like, sort of pseudo code, like react like pseudo code into various frameworks, and then you can kind of more easily build the same application using different frameworks and then be able to compare it. Because like you were saying, if we want to get away from micro benchmarks and we want to actually compare real applications built in various frameworks, we run right into the problem of the overhead of building and maintaining Mhmm.
Dan Shappir [00:09:10]:
Sophisticated applications in a variety of frameworks and, you know, who wants to do that. I just do want to mention that there's an alternative approach if you're interested in the performance of various frameworks. And I actually gave a talk about that at at several conferences, including, I think, JS nation a year before, which is to use RAM data to compare the performance of framework. So you're not looking, like, at a specific application. You're looking across all websites built with a particular framework, and then you're not so much saying, like, how fast will my application be with this framework? You're more looking at how likely am I going to how likely is it that I'll be able to build the fast website or web app using a particular framework? And you might say
Vinicius Dallacqua [00:10:04]:
Exactly.
Dan Shappir [00:10:05]:
This framework is more likely to produce, faster websites or web apps, and this one is less likely. You know, not surprising the one that is least likely is angular, and the ones that are most likely are quick, solid, svelte, you know, these. But it's really you really need to be careful about mixing correlation and causation. Like, for example, people who use QUIC are people who are more likely to be concerned about web performance. So is are they producing faster websites because they're using QUIC? Or are they producing faster websites because there are people that care about web performance and then so they know what to do, and they also happen to be using QUIC for that reason. So you need to be kind of careful with with doing these types of conclusions.
Vinicius Dallacqua [00:11:07]:
That's a very good point, actually, because it's one of the things that I, like, you you come to realize when you start collecting RAM data. So I I established I helped establish
Charles Max Wood [00:11:18]:
Time out. Time out. Time out. I know that we've defined this in other episodes, but RAM data, what is it?
Vinicius Dallacqua [00:11:25]:
It's real user metrics. So you have 2 different sides of performance monitoring. So you have what is called the lab and what is called the run side. So the lab is just, you know, CICD, lighthouse, and whatever you do to make sure that you don't perform regressions before changes actually go live to users. And on the RAM side, you have different providers, and you can even use, like, GA, Google Analytics even have, like, like, automatic web vitals.
Dan Shappir [00:11:54]:
Or Yeah. It's it's basically the question of, are you, testing your performance in a synthetic lab style setting, maybe even on your own computer while you're developing, and and and comparing that way, or are you collecting performance data from the field, from from actual, real user sessions? So, obviously, you can collect data for your own particular website, especially if you have enough traffic coming in. But the interesting thing and we actually had Rick Viscomi from Google here to talk about it. Google actually collects data from all sessions on Chrome, and they put it into this database called Chrome u user experience report or crux for short. And here and Steve usually makes a joke about the crux of the matter or the crux of the issue or whatever. Yeah. Exactly. And the nice thing about what Google does is that they actually give everybody essentially access to this data.
Dan Shappir [00:13:00]:
And also they attach all sorts of metadata to this data, like, you know, which framework was used to build which website and whatnot. So you can do all sorts of slicing and dicing. So you can look at, the performance of, you know, your own website or competing websites or ecommerce websites in general, in particular geos, or particularly particular types of devices, and or created using particular frameworks, using particular libraries, etcetera. It's interesting data for those of us who like to geek out on performance.
Vinicius Dallacqua [00:13:36]:
Yeah. That that's a that's a really good point to start. So for, like, for building for building web websites, like, things can go wrong many different ways. Even when people use the same framework and 2 points exactly, Dan, like, some people when building, like because also, like, when you pick React, most people big React is one of the most like, it's probably now this team most used framework out there already. I don't keep up with the trends. But
Dan Shappir [00:14:02]:
Oh, yeah. For sure.
Vinicius Dallacqua [00:14:04]:
Yeah. So it's it's, it's just because of the sheer volume you have, right, of people writing. Like, there will be all kinds of quality out there.
Dan Shappir [00:14:11]:
It's king of the hill. It's case in point effectively, just throw the numbers out, there are many as many websites and web apps being built in React as all the other frameworks put together.
Vinicius Dallacqua [00:14:25]:
Yeah. Yeah. That I would I would imagine so. And so in that mix, there would be a lot of so when you're trying to divide, like, if you're building some sort of collection tool based on cracks and you're trying to divide percentiles for each framework, the the the React having so many more samples. Right? It would kind of push the data towards the left, or rather towards the right side where you're gonna have more, worse, metrics overall just because of how many the just the sheer volume of it.
Dan Shappir [00:14:54]:
Yeah. That's one aspect. Another aspect is the fact that a lot of React websites like, there's a long tail of React websites. Websites either built, like, long ago or built with particular focus in mind. So, for example, if you're building a website in React and you're not using service side rendering or static side generation. If you're only using client side rendering, which means that, you're building the DOM representation of the website on the client side. Then by definition, effectively, you're not going to have good core vitals, which is the way that Google measures loading performance. You might have other aspects of performance that are good, Like, I don't know, responsiveness or stuff like that.
Dan Shappir [00:15:46]:
But in terms of loading performance, if you're just using, client side rendering or CSR, it's not going to be good. And the reality is that a lot of React's websites use CSR. Some of them don't care. If they're not indexed, then maybe they do. They don't care. But it but if you're looking at the metrics that Google is collecting, that's what you see.
Vinicius Dallacqua [00:16:14]:
Yeah. And the it's it's interesting that the whole pivoting that we are now having towards the service side as well. Because one of the things that, like, I've I've worked I've worked for Spotify and I worked for Klarna before with Spotify. And in both places, I've set up, the, like, monitoring tools and start collecting RAM data and all of this kind of stuff. And for CSR, one of the strategies that that people have is normally, like, code splitting line. That's pretty much your only, one of the few things you can do to try to improve that first paint and and the and the LCP. And even that, like, when I I run an experiment once trying to understand especially for cloud, when I was working at cloud, I I was trying to build benchmarks on how can we make sure that we have as a third party, how can you make sure to affect the loading of the site you're integrated with the, you know, the least. How can you make sure that you are not the one who is causing the the harm? And I ran an experiment on on on doing cold splitting.
Vinicius Dallacqua [00:17:14]:
And back then, mind you, it was, gosh, it was, like, 26 2018, I think, or 2017. So it was I I I've, like, do did, like, a full code splitting, kind of, like, manual code splitting, splitting chunks everywhere, and, like, lazy loading some stuff. And and back then, it was kind of a a revelation. Nowadays, I think more more people understand why, but, like, when you do that many chunks, things can get very congested when you're loading when you're loading websites. And doing a lot of JavaScript, small chunks at a time. It was a lot in compression size as well, and you're also gonna be hogging, you know, CPU time. And back then, not even using HTTP 2, if I remember correctly, with, even though it was already available. So, you know, like, the the whole, critical render path was absolutely destroyed, and congested.
Vinicius Dallacqua [00:18:06]:
And I I think back then, we didn't even have the preload scanner to try to, like, hack into as, like, priority, and all this kind of stuff. Yeah. So it was it was, like back then, it was a revelation on yeah. You can have too much of a good thing and then, like, start working with prioritization of chunks and this kind of stuff.
Dan Shappir [00:18:24]:
Yeah. We we actually had Robin Marks recently to talk about this whole network thing and and how it often behaves in a way that's not expected. And and to bring it to a more general aspect and kind of related to the main topic that we've yet to focus on. I would say that one of my main, things regarding performance that I always say is that you've got to measure. You've you you you've got to measure in order to decide what to focus on. And after you make a change, you got to measure to verify that the change you made actually improves things and doesn't even potentially degrade them. And and I can give a case in point, like, you know, if your CSS is small, a lot of times you will hear that you should inline that CSS into your HTML to avoid the extra round trip to bring the CSS because CSS is render blocking. So you want to get it down as quickly as possible.
Dan Shappir [00:19:28]:
And I've seen cases where inlining the CSS actually degraded, performance. And, you know, and without going to the details of why, the fact is that once you see that it's actually degraded, you roll it back. Yes. And because, you you know, it
Vinicius Dallacqua [00:19:49]:
didn't do measuring the whole measuring, it's it's like, I always mention whenever you start working with performance, you always start from the data. You always start if you don't have a good collection story, you don't have a good data story, you have to start collecting data from real users that is. Right? So because because lab data doesn't really tell you the whole story as as you've been breaking down earlier. So you need to to understand how's the actual user experience out there, and you need to understand even what kind of data do you wanna collect, which which kind of brings us into the the topic we've been discussing as well on the the article. And the whole thing of understanding your product and where you wanna collect data and what they are collecting and, you know, building the the story around performance from that perspective first instead of just trying to micro optimize first is how you're gonna make sure to deliver impactful results to your users.
Charles Max Wood [00:20:43]:
So yeah. So, I mean, I I read part of the article and yeah. And what I'm wondering, you know, just jumping in then, you talked in your article about lab versus rum and things like that and and and how to measure it. But, I mean, if if I'm if I'm really new to this, right, and, you know, Dan mentioned having measurements and knowing what your numbers are so that you can, you know, verify that you had the effect that you wanted, How do you start gathering that? I'm assuming that's kinda your first step, right, is gathering that information, whether it's in a lab setting or room setting.
Dan Shappir [00:21:20]:
Yeah. I mean, very often, if you're, like, engaged in in a performance, project or product related project, you'll be under kind of the gun to to show results. And it's kind of tempting to start optimizing things. And I whenever I was engaged in in projects like that, I I quickly pushed back, and said, no. We got to get, the the data first. We got to have the graphs. We got to have the measurements, because, because otherwise, you're just working blind. And if you want an extra incentive, when it comes to your time for your, annual review or half annual review and you want to, like, prove your worth to the company, having a graph that shows, like, this is what I did.
Dan Shappir [00:22:16]:
It's a it's a line that goes up Yes. Or down depending on what you're actually graphing is is a great thing that, you know, you want to have when you're trying to get a raise or something like that. So I literally, in various occasions, pushed back against management who were trying to get me to start optimizing before we actually had good measurements in place. And and to your question, Chuck, we've had some guests on the show to talk about, you know, these days, there are really 2 good really good ways that I have to mention to get good metrics. 1 is if you're is basically just use the the goo the Google search console. In the Google search console, they have a core web vitals panel, and you can get, you know, information about pages that have performance issues. It's it's not an ideal and optimal run solution. You know, for example, they average the results over a 28 over a period of a month effectively.
Dan Shappir [00:23:23]:
So improvements you will make will take time to show, and and likewise, the gradations will take time until they actually manifest themselves. But, you know, it's a good starting point. Another another good starting point is that they're like, Vinicius mentioned before. There are a lot of third party tools and services out there that are fairly straightforward to integrate into your website. No. Some of them are even, like, partially free. And but, you know, we've had people from Sentry. They have a wrong solution.
Dan Shappir [00:24:00]:
We had people from, Raygun, Raygun, Akama. We've had people from Akamai. They have the Impulse. There there are a lot of there are a lot of great tools out there that are fairly straightforward to integrate and that you can start getting data from. You know? Take your point.
Vinicius Dallacqua [00:24:21]:
Yeah. Yeah. You do. You do have a lot of options, like, debug bear and run vision. Yeah. You also have as a speed curve. But there is, there
Steve Edwards [00:24:28]:
so real quick. We're talking about tools. How often do you see tool because a lot of these tools are basically jumping JavaScript into your page. Right? So I know Google Analytics is a classic. So how many times do you see the tool that you're putting in to measure performance hurting your performance because of everything it's putting in your site to measure performance?
Vinicius Dallacqua [00:24:49]:
When you ask, I have heard horror stories with, OTL and, integration
Dan Shappir [00:24:56]:
name names.
Vinicius Dallacqua [00:24:58]:
Yeah. I'm not gonna, yeah, I'm not gonna name any names, but I haven't I have heard, interesting stories, when trying to implement OpenTelemetry. And that's one of the examples where it can go pretty bad in one way when you're trying to measure.
Steve Edwards [00:25:15]:
Yeah. I know. I remember Netlify had a service that you could pay for. I haven't seen it in a while where would it they would run stuff like that on the server. So you'd you know, you're not dumping JavaScript into your page. I I never used that. I just remember reading about it, seeing it as as an alternative.
Charles Max Wood [00:25:29]:
Most of the back ends have services like that too. And, sure, they don't give you, like, the core web vitals. Right? Because they're not
Dan Shappir [00:25:36]:
Yeah. They're front end things, not back end.
Charles Max Wood [00:25:38]:
Yeah. Because their front end measurements, not back end measurements. But But
Vinicius Dallacqua [00:25:41]:
is this is one of the
Charles Max Wood [00:25:42]:
They they give you some information about how long it takes to
Dan Shappir [00:25:45]:
get the data out of the way. I will say this, you know, like Vinicius used mentioned, you kinda need to be careful. But, most of the rum providers that actually measure core vitals, you know, they know about core vitals. They've done work to ensure that whatever they're providing has has little or no impact on on your score.
Vinicius Dallacqua [00:26:10]:
Exactly. Yeah.
Dan Shappir [00:26:11]:
If one of them does, it will be it will pretty quickly come out. But, you know, just Google with your friends. Search what people say about it and, you know, try it out yourself and and see what happens.
Vinicius Dallacqua [00:26:23]:
Yeah. And, on the whole topic of, actually, on the whole topic of back end, I've I've wrote an article last year for birth calendar about server timings, and I think there's there's a good amount of things you can do to help understand and, like, end to end sort of tracing, when it comes to even to back end stuff. I mean, we don't have it as well standardized as as Web Vitals. But understanding because one one of the things that a performance specialist becomes is, like, it comes in 3 parts. Right? Like, he's a salesman, he's a data analytics person that has to understand what kind of data you're trying to collect and understand the data you collected in order to visualize it well and, you know, a good, an engineer side as well. So the the the back end part, there is a whole lot of attributions that you can build within the back end side. Unfortunately, not as well standardized as web vitals, but but that's actually a good opportunity, right, where one can try to define good metrics within back end. Mhmm.
Vinicius Dallacqua [00:27:21]:
But, like, when you're trying to understand, especially, for instance, when you're now starting to have more React being run on the server, right, with with, Next, the no newest versions of Next, the newest version of React, and things are kind of moved They would be moved a bit more to the back end, and processing part time is gonna be spent more, you know, that TTF b window. And it does have a great amount of benefits that you can get out of it, but some things can be a bit, unexpected. Like, so you can have some performance degradation as well, on some types of websites out there. Because not all websites, have trying to deliver the same kind of experience, thus not all parts of web vitals matter equally to the to all of our websites. And so if there is if you are working on something that is very sensitive to that TTFB window, you really wanna make sure to understand from the server timings, then, you know, from behind the curtain, network curtain, what's going on, and you can collect that data as well.
Dan Shappir [00:28:24]:
Yeah. And and by the way, like I said, we had Robin Marks on the show. And one of the things that came up is basically that if you really care about TTFB, and especially if you've got a global audience, then you probably want to be using a CDN. And then the TTFB of your own server is not irrelevant, but is, you know, less less impactful. Let's call it let's put it this way. But I would actually like to pull us back a little bit. And and looking at your article, which I have read, one of the things that you wrote about that really resonated with me because it really matches my own experience is about getting buy in from management. Can you talk about that a little bit?
Vinicius Dallacqua [00:29:14]:
Yeah. Absolutely. Is is is the salesman part of the of the triad? So it is, it is a it's, like, in a way, you really have to understand the product you're working with. So if you're working in a product team, one of the things that you constantly hear and I've certainly faced it myself as well when I was working on on product teams is you have a certain pressure to ship features constantly. So sometimes in mobile, sometimes and, sometimes more than others, you will have a pressure looming and trying to solve back, backlog stuff, which some, for most cases, performance kinda falls under, you you it's hard to get the buy in coming from management. So one way you can start doing it is, of course, as we mentioned, collect data. And with that data you collect, you wanna try to understand correlations between, you know, pain points of your users and how your product is performing. Because when you're trying to build work within performance, you will really want to understand on if you work on a certain part of application, you will deliver the most, the world, the most optimal results for the perform for the product to perform.
Vinicius Dallacqua [00:30:25]:
So you're not only trying to cover the engineering side, but you will have like, when getting by, you wanna make sure to cover the product side. You know? Like, to do shipping good metrics, not only for APMs and for your performance metrics, but also understand how those will affect your product metrics. So the business side also matters. So you have to wear kind of, like, both hats and and build that bridge between engineering and product. That's how you can strengthen, the the buying you get from management.
Dan Shappir [00:30:55]:
I totally agree with that, and it kinda reminds me of, an interesting experience that I've had quite a number of years ago, and I won't be naming names. But I remember talking with this person. I was working at a certain company, and I was speaking with one of the people in product. And I asked that person why, their product specifications never include any performance requirements. And they answered first, the the, he said, I don't know how to specify performance requirements. And I get that, and it's become easier these days. Again, thanks to Google with co work vitals. We have, they they made the industry aware of a relatively small set of well defined metrics that, you know, you can tell that person from product, hey, read this article.
Dan Shappir [00:31:56]:
I actually even gave a talk about it at the previous employer to product people to kind of teach them about, how performance is measured and the impact of performance and whatnot. So that's that was his first point. And the second point, which really made me laugh, what he said, but I expect our developers to write the code in the most performant way possible because that's what developers do. And and I really started laughing because, you know, we we work, like you said, we're always in a rush to to deliver features. We're always under the gun. We're always facing short time frames and not enough resources. And certain aspects like performance can, you know, go out the window in these types of scenarios. If you're pressured to deliver a feature and, you know, you won't be really worrying about the performance impact if you hardly have time to to finish the the requirements that are actually in the spec.
Dan Shappir [00:33:00]:
And and yeah. So so if if you and and that's from my perspective why management buying is so important because very often we hear about people saying, hey. Can't we do stuff like that grassroots? Can't we introduce quality, like, from the bottom up and stuff like that? And the answer is no. Because if you're going to be engaged in a large scale ongoing project, it's going to require a lot of resources, time, and effort, and even money if you're, let's say, buying run tools and other and other things, then management needs to be aligned with that. Otherwise, it just can't happen.
Vinicius Dallacqua [00:33:43]:
Absolutely. Absolutely. Yeah. And then becomes a fight, right, over what your engineering team interests are and where your product team trying to ship and and, you know, all the things that product teams have agreed on for quarters. So it does become like, you need to get a lot more involved with the product side and understand what kind of values matter the most for the product side. And it that's why it becomes a salesman pitch because it's not just blindly trying to web virus have done wonders for that conversation starter. Like, I think most people nowadays understand some aspects of web browsers. At the very least, when you mention it, that it matters for SEO, it matters for for the product.
Vinicius Dallacqua [00:34:24]:
So that's great. That's really, like because I still remember from my days of Klarna, having to build, you know, like, what what metrics matter the most for a product was was an actual task. You had to so many metrics are available to you and so little knowledge around how they affect different parts of the product. But when it comes to to building those kind of values and translating them to to product, it becomes onto an where it's very important to be to understand the product that you're working on. Because, like, not all products will have like, as as I mentioned a bit earlier, like, not all products will have the same kind of values or you can't apply all metrics blindly to all products because some metrics just don't matter as much to certain type of products. For instance, me working at at now at Vova Cause, one of the products that I'm currently working very closely with is the configurator that they have. And the configurator as a as a as a product is just an actual product team that cares for it. And it does have very, very different, business metrics, product metrics than one would think about when you think about conversion and all this kind of stuff.
Vinicius Dallacqua [00:35:38]:
So how the product behaves in real time is a lot more important than how it loads. Because it you can do a lot of stuff to try to trick it loading, but once it is loaded, that's the thing. Like, you have you the INP metric for that product is is, like, king. That's the most important part, IP and CLS. So, you know, you need to understand which metrics is gonna be delivered the most impact for the types of products you're trying to ship. So you can build, you know, KPIs and SLAs with downstream teams and try to have this kind of consolidated governance model. Or at the very least, just understand how things impact your application differently. And from that part within the engineering, you can understand also how to tie up those engineering metrics with what kind of product metrics matter the most, and you can try to build correlations from that.
Dan Shappir [00:36:27]:
I totally agree with what you just said. An interesting case that I had experience with was when I was working Wix. There was the Wix editor that you would use to you build websites using WYSIWYG dragging and dropping things around. And you had the actual websites that you built with the Wix editor. So for the websites, the most important aspect was the loading performance, like, as measured by, let's say, LCP, largest content for paint. But for the editor itself, I mean, obviously, you don't want it to take half an hour to load. But most customers didn't care so much about how long it took the editor to load. What was more important for them was that once the editor was loaded, it was really responsive and snapping.
Dan Shappir [00:37:21]:
Mhmm. And when you drag things around, they would drag smoothly and not like, you know, with this jitter or trailing after the cursor or stuff like that. So you really need to be cognizant about what it is that is important to your users in the context of performance and what it is that you want to optimize. And, again, going back to that aspect of getting buy in from management that it's actually worthwhile to invest the effort. Because one of the things that I like to say is that performance is a journey, not a destination. It's not like I'm going to invest a little bit of time and effort, get our performance house in order, and then we can forget about it forever because it's done. That's not how it works. You you need to put systems in place to identify regressions because regressions will happen.
Dan Shappir [00:38:13]:
And then when you detect a regression, you're going to have to invest effort in fixing that regression. So it's an ongoing effort. And by the way, in this context, I'm going to post a good link. So, again, thank you to the people at Google. Their web.dev website has a section called case studies, where they post a lot of case studies about how improving performance has impacted a variety of businesses and products. So you can actually go in there, find services that are similar to what you're building, and then be able to bring that to management and say, here, you know, company X does something that's similar to what we're doing. This is how improving performance has impacted their business, their bottom line, and this is why it's worthwhile for us to invest this effort as well. Yeah.
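One common way to put such a regression-catching system in place is Lighthouse CI, which comes up next in the conversation. A minimal lighthouserc.js sketch might look like this; the URL and thresholds are illustrative placeholders, not recommendations from the panel:

```js
// lighthouserc.js: a minimal Lighthouse CI sketch for catching regressions.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'], // page(s) to audit on every build
      numberOfRuns: 3,                 // median out run-to-run variance
    },
    assert: {
      assertions: {
        // Fail the build if the overall performance score drops below 0.9
        'categories:performance': ['error', { minScore: 0.9 }],
        // Warn, rather than fail, on a per-metric budget
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
      },
    },
  },
};
```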
Vinicius Dallacqua [00:39:11]:
Yeah. Absolutely. And it's funny, I actually have an article from back in the day, I'm posting it here on the chat, about building performance monitoring for lab data. It was built before the time when we had CI/CD integrations for Lighthouse. So I was trying to integrate Lighthouse into CI/CD manually and build my own CI/CD server with Lighthouse. One of the things that Lighthouse allowed, and I think it still allows, I haven't dug into the Lighthouse source code for quite some time, was to define different weights for different metrics.
Vinicius Dallacqua [00:39:50]:
So when you get a Lighthouse score from within your CI/CD, you can define, for instance, if INP matters more for your application, you can customize the weights of the score you get from Lighthouse based on that. At least you could back then. I'm not entirely sure if you still can, because I haven't dug into the source code for some time. But it's one of the things that back then
Dan Shappir [00:40:16]:
It's open source, so you can probably do whatever you want if you're willing to put
Vinicius Dallacqua [00:40:19]:
in the effort. Yeah. True. But, like, back then when I was doing that kind of integration, I was defining, for instance... back then, gosh, what were the metrics that we used to have? I mean, LCP and FCP are still around. So those, as a third party at Klarna, were the ones that mattered the most. So I don't know.
Vinicius Dallacqua [00:40:38]:
Like, you can fine tune the score based on the weights that, well, you know, matter the most for you.
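For context, the mechanism Vinicius describes is a custom Lighthouse config that re-weights the audits inside the performance category. Whether this exact shape is still supported may vary by Lighthouse version, so treat it as a sketch:

```js
// custom-config.js: re-weighting Lighthouse's performance score.
// Audit IDs and support for custom categories can vary across Lighthouse
// versions. Lab runs don't measure INP directly, so TBT stands in here
// as the interactivity-weighted metric.
module.exports = {
  extends: 'lighthouse:default',
  categories: {
    performance: {
      title: 'Performance (custom weights)',
      auditRefs: [
        { id: 'total-blocking-time', weight: 50, group: 'metrics' },
        { id: 'cumulative-layout-shift', weight: 25, group: 'metrics' },
        { id: 'largest-contentful-paint', weight: 25, group: 'metrics' },
      ],
    },
  },
};
// Run with: lighthouse https://example.com --config-path=./custom-config.js
```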
Dan Shappir [00:40:44]:
So we kind of talked about it already, but still, one of the sections in your article is about metrics and KPIs and SLAs. Can you elaborate on this?
Vinicius Dallacqua [00:40:56]:
Yeah. Absolutely. So if you're trying to establish governance, or if you're trying to get a better understanding of what you're trying to ship... normally, we don't work in isolation. Unless you're working for a very, very small company, normally you have teams either above you or below you. So you are the downstream, or you are working with teams downstream: a graph service, or an API, or a back end. So after you've understood which kind of metrics matter the most for you and for your product, and you're starting to build correlations, one of the natural next steps is to build KPIs, or key performance indicators, for your application, with a lens of performance, of course, and to build service level agreements, SLAs, with teams that might affect your performance. Anyone who has worked close to operations will know those by heart, but it's one of the things that really helps you strengthen your process. Because it's pretty much what you said: it's not a race.
Vinicius Dallacqua [00:42:06]:
It's a continuous thing, and you will have regressions. And whenever you have regressions, you first of all want to document those. You always want to document both regressions and any performance improvements that you ship, because you want to understand how things worked over time. And having those SLAs is a good way to communicate downstream and to have more resilience on the things that you're trying to ship. Because when you work on a big product, some parts of the performance spectrum are not directly coming from you. One of the things, for instance, that we have, and I think most people out there will know this, is GTM. And GTM can really... you know, if you have a marketing team that's pushing a lot of tags on GTM, that can really mess up your performance, and it's not even coming from your code base necessarily. And that's GTM
Charles Max Wood [00:43:00]:
is Google Tag Manager. Right?
Vinicius Dallacqua [00:43:01]:
Google Tag Manager. Yeah. So Google Tag Manager is a brilliant thing. It's a really, really interesting product. But anything can be misused, and such is the nature with GTM in some teams.
Dan Shappir [00:43:13]:
And
Charles Max Wood [00:43:14]:
It's the marketer's junk drawer. It really is. But
Vinicius Dallacqua [00:43:17]:
it is, yeah. And one of the things that I'm currently investing a lot of time in, in the governance model that I'm trying to build, is around GTM as well. But the people on the GTM side have a completely different set of skills. So how can you make sure that you build a connection on things that both of you can align on and understand? That's where the KPIs and SLAs come in. And then you can really make sure that you have a proper, well structured process around it.
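One concrete way to encode such an SLA with a tag-owning team is a Lighthouse resource budget that caps third-party weight. A sketch, with placeholder numbers to negotiate rather than anything the panel recommends:

```js
// budget.json (shown as JS): encoding a third-party "SLA" as a Lighthouse
// resource budget. All numbers are illustrative placeholders.
module.exports = [
  {
    path: '/*',
    resourceSizes: [
      { resourceType: 'script', budget: 300 },      // total JS, in KB
      { resourceType: 'third-party', budget: 200 }, // third-party bytes, in KB
    ],
    resourceCounts: [
      { resourceType: 'third-party', budget: 10 },  // max third-party requests
    ],
  },
];
```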
Dan Shappir [00:43:48]:
Mhmm. By the way, we did have, I think, Adam Bradley from Builder.io, who spoke about the tool that they built called Partytown, which
Charles Max Wood [00:43:58]:
can,
Dan Shappir [00:43:59]:
which can reduce the impact of third party scripts and pixels and whatnot by moving them off of the main thread onto a web worker. The big challenge is that it's a pretty manual effort. You know, we spoke about needing management buy in in order to approve the effort. That's one of those things. It's not a drop in solution that you can just put in your product and then watch the improvement. You probably need to do some manual integration work in order to get the real benefits. But when you do, the benefits can be substantial. In the case of Next Insurance, where I previously worked, by putting in Partytown, we were able to dramatically improve our INP.
Dan Shappir [00:44:51]:
The number of pages that have good INP scores went from something like less than half of our pages to effectively all of our pages.
Charles Max Wood [00:45:04]:
Oh, wow.
Dan Shappir [00:45:04]:
Thanks to using Partytown. Yeah. Because, unfortunately, a lot of these third party scripts can have a significant impact on the responsiveness of the website. They run a lot of JavaScript on the main thread unless, you know, you configure it otherwise. And that impacts how responsive the page is to user interactions, unfortunately.
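Partytown's documented integration is small but, as Dan says, not drop-in. A sketch of the basic wiring, assuming the @builder.io/partytown package and a placeholder GTM container ID:

```js
// A sketch of Partytown's documented wiring via @builder.io/partytown.
import { partytownSnippet } from '@builder.io/partytown/integration';

// Generate the inline snippet; `forward` relays main-thread calls such as
// dataLayer.push into the web worker where GTM will actually run.
const snippet = partytownSnippet({ forward: ['dataLayer.push'] });

// Inline `snippet` inside a <script> in <head>, then opt scripts in by type:
//   <script type="text/partytown"
//           src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX"></script>
// Scripts typed "text/partytown" execute in a worker, off the main thread.
console.log(typeof snippet); // 'string' of JS to inline
```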
Vinicius Dallacqua [00:45:32]:
Yeah. And that's why building a good attribution model is, like, absolutely fundamental. It's one of the things that I'm obsessing over as of late. Because once you have the metrics up on the monitoring tool, that's a great start. You're off to a great start. You're starting to understand, you know, the nature of the product, and you're starting to understand how the things you do impact it. But then again, as previously mentioned, performance degradation can come from different, you know, parts of the web. DebugBear actually released a great, great article just a few weeks ago about the impact of Chrome plugins. So if you have, like, different plugins within Google Chrome, they can also affect your performance, some more than others.
Vinicius Dallacqua [00:46:18]:
And there's a brilliant article there. I'll see if I can find it before this recording ends, but it's a really good example of how performance issues can show up in your real user metrics coming from different kinds of things, even things out of your control. So building good attribution means not only having the metrics; you know, web-vitals has an actual attribution build that you can use, where you can send more information about each one of the metrics you're collecting. So if you have, you know, INP, it will show you details about the LoAF data, the Long Animation Frames, and you can have better intelligence on where it is coming from. Is it coming from a third party? Is it coming from a Chrome extension? Is it coming from Google Tag Manager, or something like that? And that can give you even more power when we're talking about RUM.
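The attribution build Vinicius refers to is a drop-in replacement import in the web-vitals library. A sketch; the exact attribution field names vary by web-vitals version, and /rum is again a hypothetical endpoint:

```js
// The web-vitals attribution build: same callbacks, richer debug data.
import { onINP } from 'web-vitals/attribution';

onINP(({ value, attribution }) => {
  navigator.sendBeacon('/rum', JSON.stringify({
    metric: 'INP',
    value,
    // What was interacted with, plus Long Animation Frame (LoAF) detail,
    // which helps separate your code from GTM tags or extensions.
    target: attribution.interactionTarget,
    loafEntries: attribution.longAnimationFrameEntries?.length ?? 0,
  }));
});
```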
Dan Shappir [00:47:13]:
Yeah. A lot of the tools provided by RUM providers have attribution capabilities built in. So, you know, you don't necessarily need to be an expert. But that's actually another point that I want to mention: it does require some expertise. Like, if you want to do some performance improvements on your website, it does require some know how. So either, you know, you find it really interesting and you invest the time and effort and learn it. Alternatively, you can bring in an expert. Maybe hire an expert if you're large enough.
Dan Shappir [00:47:56]:
If you're not, you can bring in a consultant. Somebody like Harry Roberts, whom we mentioned before we started recording, and whom we actually had as a guest on the show. You can bring on somebody like that to do an audit for you and point you in the right direction.
Charles Max Wood [00:48:15]:
Yeah. I'm curious. You mentioned Chrome extensions as an example, and you have attribution that tells you, when you gather the information: hey, look, people who have this extension are having this impact, right? One way or the other. What do you do about it? I mean, can you just add it to your CI for some of your tests to check for the performance, once you've set those performance standards? Do you put a banner up that detects: hey, you're running this Chrome extension, and it impacts, you know, it may slow down your thing?
Charles Max Wood [00:48:51]:
I mean yeah. Once you have it, what do you do with it?
Dan Shappir [00:48:55]:
Yeah. No. So first of all, yes, you can put it as part of your testing environment for sure. The question is, is the slowdown happening just because the extension is there? Like, is this extension just doing generally a bad thing? Or is it some sort of bad interaction, you know, your page specifically doesn't work well with this extension for some reason? Now, if it's the latter, and it's a popular extension, then you may want to invest effort in trying to fix it. Obviously, you can't fix the extension, but you can try to fix your stuff. If it's the former, if it's just a problem with the extension, basically, it is what it is. You may want to put some notice on your website.
Dan Shappir [00:49:56]:
I wouldn't pop up an alert. That seems overly intrusive. Yep. But, like, you know, having something in your knowledge base, so at least your support people would be cognizant of the problem. And if a customer complains, then they can ask them, you know, are you using this extension? And if so, please don't. Mhmm. But it's, like, a fact of
Vinicius Dallacqua [00:50:25]:
life. Right. Yeah. So on that, we have a great talk from last year's performance.now() by Tim Vereecke. The name of the talk is Noise Cancelling RUM, or something like that. I posted a link here on the chat, and it talks about what you can do to eliminate noise from your metrics. And that's why having this kind of attribution, and understanding attribution... and you're very right. This is one of the problems when it comes to performance as a subject: it is somewhat of a very steep vertical for people to get into.
Vinicius Dallacqua [00:51:08]:
But one of the things you can do, once you start understanding these kinds of things, is to understand how to remove noise from your metrics, and understand better how your percentiles are split up and what the distributions look like. Because building better attribution for a product also comes from, bringing it back to the whole tying up with your product values, right? It comes down to understanding what kind of markets matter the most for your product. You don't really want to measure globally, because the bigger the scope of metrics you're trying to have, the harder it is for you to find attributions, or the harder it is to understand them, because you're trying to deal with something that is too large of a context. And it's a lot easier when you slice that data up. Like, let's say, for a certain product, certain markets are a lot more important. And for that product, certain pages or certain functionality is a lot more important. So you definitely want to slice your analytics and your data and your raw metrics in such a way that you can cut through the noise that you gather globally, into something that builds better attribution models and better correlations out of it. So that, like... Right.
Vinicius Dallacqua [00:52:23]:
Removing this kind of noise from your metrics is also a really, really good way to avoid
Dan Shappir [00:52:28]:
I'll give a different example, you know: it might be that certain geos have performance issues. So, for example, again talking about Wix, a global company, we saw very different performance profiles, for example, for North America and for South America. On the other hand, you need to be careful with these sorts of things. Because, for example, you might say, hey, something really bad happened in Colombia. For some reason, I'm seeing a significant degradation. And then you realize, hey, I only have, like, 8 sessions a day. So just one bad session can skew my data for that geo.
Dan Shappir [00:53:19]:
So you kinda need to be careful. And again, this is something that you also talked about in the article, about how you need to be careful with blind spots. Like how, you know, some data can hide or obscure other data and cause you to miss important aspects.
Vinicius Dallacqua [00:53:39]:
Yep. Yeah. It's like... I bring up that same talk by Tim within the article. And it is one of the things that comes, again, from the triad, right? It's the data analysis part, where you need to understand how to analyze the data you have, because otherwise you might be looking at too much noise. And then it's harder to work with that data, right? Then the performance work that you're trying to carve out of the data might even be misaligned.
Vinicius Dallacqua [00:54:14]:
So when you're trying to ship performance improvements, you might not even see the needle moving properly, because you're just being affected by noisy RUM data.
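As a toy illustration of the slicing Vinicius describes, here is a sketch that reads p75 INP per market instead of one global number, and skips segments too small to trust, like the eight-sessions-a-day Colombia example above. The rows shape is hypothetical:

```js
// Toy sketch: p75 INP per market, skipping segments too small to trust.
// `rows` is hypothetical beacon data of shape { market, inp }.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil(sorted.length * 0.75) - 1)];
}

function p75ByMarket(rows, minSessions = 100) {
  const byMarket = new Map();
  for (const { market, inp } of rows) {
    if (!byMarket.has(market)) byMarket.set(market, []);
    byMarket.get(market).push(inp);
  }
  return [...byMarket]
    .filter(([, values]) => values.length >= minSessions) // drop tiny segments
    .map(([market, values]) => ({ market, sessions: values.length, p75: p75(values) }));
}
```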
Dan Shappir [00:54:22]:
I will say one thing, and I think we can summarize this particular aspect with it: I'm also a member of the W3C Web Performance Working Group, and there's always tension there with regard to attribution. Because you've got the RUM providers, who are pushing to get ever more accurate and detailed attribution data that they can externalize and expose to their customers. And on the other hand, you've got the browser manufacturers, who are concerned about privacy issues. You know, too much attribution opens the door to all sorts of fingerprinting techniques and stuff like that. So you kind of need to strike a good balance there. Or, put another way, you know, yet another reason why we can't have good things.
Vinicius Dallacqua [00:55:13]:
Yeah. That's a good point.
Charles Max Wood [00:55:14]:
Well, they're both valid concerns. So Yeah. It is a valid concern.
Vinicius Dallacqua [00:55:19]:
It is a very good concern.
Charles Max Wood [00:55:20]:
They're both valid concerns. I really want to know what's causing my headaches, but, yeah, I also want to protect my users.
Vinicius Dallacqua [00:55:29]:
Yeah. There are good ways to anonymize that data, and most... Yeah. Most collection tools will make sure to do so. But if you're building your own collection tool, that's a very valid concern to have.
Dan Shappir [00:55:40]:
Yeah. And, again, it's also a question for the browser manufacturers whether or not to provide certain APIs. Right. Because, you know, obviously, if you could ensure that only good actors actually use those APIs, that would be wonderful. But, unfortunately, the world doesn't work that way.
Vinicius Dallacqua [00:55:59]:
Yeah. Absolutely.
Dan Shappir [00:56:01]:
I think we're nearing the end of our show. Is there anything in particular that you wanted to mention that we haven't mentioned so far?
Vinicius Dallacqua [00:56:11]:
I mean, I can basically summarize it: working with performance in a company, large or small, starts with data. And understanding that data is naturally the second step. Because a lot of times, you're very eager to get down into actual code and try to move that needle. Sometimes that can be a bit too early, and if you try to work on something that is not going to shift the actual application performance in the right direction, or affect the right product metrics, you know, it's very hard to get the buy in again, and it just gets harder to get performance work off the ground. Now, of course, if you're working within a mature engineering organization, that can be a different story, but most people are not at that kind of stage, right? So working with attribution first, and data, and analyzing it, might seem like the most boring part, but it's the most important one as well. So make sure to understand your data before trying to work on performance.
Charles Max Wood [00:57:14]:
Good deal. Alright. Well, let's go ahead and do our picks. Before we do that, though, where do people find you on the Internet if they're thinking, oh, I wanna know more?
Vinicius Dallacqua [00:57:26]:
Absolutely. I am mostly nowadays on Twitter.
Dan Shappir [00:57:31]:
X x. It's x.
Charles Max Wood [00:57:34]:
Everybody knows what you mean when you say Twitter. I'm joking.
Vinicius Dallacqua [00:57:38]:
Yeah. It's just... I am not yet... most of the time, I forget it's been renamed, to be honest with you. But
Dan Shappir [00:57:45]:
I think everybody would prefer to forget.
Vinicius Dallacqua [00:57:49]:
There, I am WebTwitr. And, I mean, I have a pretty easy name to look up on LinkedIn, but I spend most of my time talking about performance on Twitter. That's where most people can find me: WebTwitr, that's t-w-i-t-r. It's a bit shortened.
Charles Max Wood [00:58:12]:
Cool. Alright. Well, let's go ahead and do our picks. Steve, you wanna start us off?
Steve Edwards [00:58:19]:
Going for the high points first. Okay. That's right. Before I get to the dad jokes of the week, I do have one pick. And this has been around for a while, and I've heard other people talk about it extensively, but it's something I finally started digging into, and that's the Warp terminal. For years, I've always just used the built in terminal app that comes on Mac. And, you know, I've gotten fairly good at, you know, switching tabs, and I've got tab groups set up and all that. But I heard a lot about it, so I started playing around with the Warp terminal.
Steve Edwards [00:58:54]:
And it's really, really slick. It makes it very easy to find things. There are all kinds of enhancements. You can create custom workflows, which are sort of like custom commands where you pass in parameters. And, you know, it's pretty easy to customize your prompt to show, like, what git branch you're on and what the difference is between you and your remote in terms of commits, colors. There are all kinds of customizations you can do. I know a lot of people use iTerm2, and I've used that in the past. But I really like the Warp terminal, just from what I've played around with.
Steve Edwards [00:59:28]:
You can find it at warp.dev, on the Internet.
Charles Max Wood [00:59:32]:
I'm gonna jump in here, because Warp is a recent thing that I've picked up as well, and, yeah, all the things that Steve said. It also has some AI, so you can basically tell it what you want it to do, kind of like an AI prompt. And sometimes it works, and sometimes it doesn't work quite how you want. Sometimes it also interprets my git commands, or my commands within, like, you know, the Rails CLI. And it'll say, oh, what you're saying is you wanna do this. And I'm like, yes, because that is literally the command to do that. So it's not perfect.
Charles Max Wood [01:00:16]:
But, you know, sometimes it's handy when you're trying to remember the exact combination of sed, ack, and grep that you want in order to get the thing.
Steve Edwards [01:00:24]:
So, anyway Yeah. It also has some team stuff where you can use it with teams and share workflows and share output. Yeah.
Charles Max Wood [01:00:31]:
I haven't used any of that.
Steve Edwards [01:00:33]:
I haven't had a chance to play with that either. But the downside that some people don't like is that you have to create an account, and they say why you need to do that. And so some people don't like that. But, anyway, it's really pretty cool. I really like it.
Charles Max Wood [01:00:45]:
Yep.
Steve Edwards [01:00:46]:
Alright. Dad jokes of the week. So I've never understood why people wear black when they wanna be sneaky. They should just wear leather armor because it's made of hide. Didn't get a smile out of Dan. Alright. Gotta keep working here.
Charles Max Wood [01:01:05]:
I was gonna say, somebody laughed. Don't laugh. It only encourages him.
Steve Edwards [01:01:11]:
So I took my kids to see Disney on Ice, and it really sucked. It was just some old dead guy in a box.
Dan Shappir [01:01:20]:
I actually responded to that.
Steve Edwards [01:01:23]:
Yes. I love it.
Dan Shappir [01:01:24]:
I said that I actually expected a dead mouse rather than a dead Oh,
Vinicius Dallacqua [01:01:29]:
true. But
Steve Edwards [01:01:29]:
Well, if you noticed on Twitter, somebody responded with an actual cartoon of that same joke, and it shows these people looking at Walt Disney in a clear coffin on ice. It's really pretty morbid, but it's very funny too. And then finally: what famous military general was killed by a cannon? Napoleon Blown-apart. Those are my jokes of the week.
Charles Max Wood [01:01:59]:
Alright. Well, thank you for elevating my week. Dan, what are your picks?
Dan Shappir [01:02:06]:
First, I would like to unmute. I would like to remind us all that Napoleon actually started as a gunnery officer, and got one of his big promotions because, you know, there was this demonstration where they were trying to mob the parliament, and he basically dispersed the crowd by firing cannons point blank into the crowd. Yeah. That got him to become a general. So, yeah. Anyway, talk about being blown apart by Bonaparte. Anyway, I don't have that much in the way of picks this week.
Dan Shappir [01:02:54]:
So one thing that I have is that Matt Pocock, who we've had as a guest on our show to talk about TypeScript, has written a book about, surprise, surprise, TypeScript. And since I consider Matt to be one of the foremost experts on this topic, this is a really good thing, especially given that the online version is available for free. So I'm actually going to share the link here right now, and we'll also obviously put it in the show notes. And the dog is barking, so hopefully you can hear what I'm actually saying. But here's the link to that book online. Highly recommended. Making effective use of TypeScript is incredibly useful and incredibly powerful, and it's something that's kind of expected of web developers these days. You know, I think it's a fair statement to say that almost all of us have moved off of, you know, untyped JavaScript and onto TypeScript. So that would be my first pick.
Dan Shappir [01:04:11]:
My second pick is the fact that Google has kind of integrated AI into the browser, sort of. They're actually experimenting with window.ai. Last time I checked, it was only in the beta version of Chrome and not in the release version, and you needed to enable a flag to get it to work. But basically, what they've done is bake Gemini Nano, or whatever they call it, into the browser itself, so that you've got a large language model, a small large language model, that's effectively baked into the browser and runs locally. So you don't need to be concerned with paying the bills for running it in the cloud, which, it turns out, is how Nvidia is making a ton of money while all the AI startups are burning through cash by paying all their money to Nvidia. So being able to run models locally is a very interesting technological development, and it will be interesting to see what happens with that and how various websites are able to leverage this technology once it becomes mainstream and open to all. Yeah.
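At the time of recording, the experiment Dan describes looked roughly like the sketch below in flag-enabled Chrome builds. The API was explicitly experimental and has been reshaped since, so treat these names as a historical snapshot rather than a stable interface:

```js
// Snapshot of the experimental window.ai API as it appeared in flag-enabled
// Chrome builds around this episode; names have changed in later versions.
async function askLocalModel(prompt) {
  if (!('ai' in window)) {
    throw new Error('Built-in AI is not available in this browser');
  }
  // Create a text session against the on-device Gemini Nano model and
  // prompt it: no network round trip, no per-call cloud bill.
  const session = await window.ai.createTextSession();
  return session.prompt(prompt);
}

askLocalModel('Summarize this page in one sentence.')
  .then(console.log)
  .catch(console.error);
```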
Steve Edwards [01:05:52]:
I mean, considering how well the original Gemini rollout went, I think this should go very smoothly, without any hitches. No bias or anything.
Dan Shappir [01:06:01]:
Exactly. And also, there's the fact that it's a model that's not optimized for any particular use case. It's kind of a general model. So there's also that aspect to factor in. I don't know what you'd want to do with AI, but, you know, it does open up some interesting opportunities for user interactions in websites, let's put it this way.
Vinicius Dallacqua [01:06:28]:
Yeah. Because it's Gemini Nano or whatever, it has a much smaller context than the bigger models, but it's still very good as a transformer, as an LLM. If you are curious about that stuff, I definitely suggest following Jason Mayes, the Web AI lead at Google. He has a lot of great demos of what you can do with that stuff, because beyond the upcoming built-in AI APIs, you can also load models with WebAssembly and WebGPU. You can use your own models locally. So there's a lot of good stuff that you can do there. Yeah.
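The bring-your-own-model route Vinicius mentions is what libraries like TensorFlow.js expose. A sketch using its WebGPU backend, with a placeholder model URL:

```js
// Running your own model locally with TensorFlow.js on the WebGPU backend.
// The model URL is a placeholder; a WASM backend swap works similarly.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-webgpu';

async function runLocally() {
  await tf.setBackend('webgpu');
  await tf.ready();
  const model = await tf.loadGraphModel('https://example.com/model/model.json');
  const input = tf.zeros([1, 224, 224, 3]); // dummy image-shaped input
  const output = model.predict(input);
  output.print(); // inference happened on-device: no server latency or cost
}

runLocally().catch(console.error);
```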
Dan Shappir [01:07:05]:
The only downside there is that running the model locally, like I said, solves a lot of the cost issues, but these models, even the smaller models, can be fairly large.
Vinicius Dallacqua [01:07:19]:
So downloading
Dan Shappir [01:07:20]:
so downloading them locally, that can be, you know... we were talking about performance. That could be a performance issue.
Steve Edwards [01:07:29]:
I was gonna say that could hurt your performance.
Vinicius Dallacqua [01:07:31]:
It can hurt your performance. But then again, it depends on the nature of your product, right? Because if you're, for instance, working with something like Photoshop on the web, they kinda do that. And it's not about the loading times, but the execution times in that case.
Charles Max Wood [01:07:44]:
Yep.
Vinicius Dallacqua [01:07:45]:
In which case you actually gain a significant boost, right? Because it's local, so the latency is nonexistent. Well, and then I guess it's my picks.
Charles Max Wood [01:07:57]:
Yeah. You can go, and then I'll go. Yes. So go ahead.
Vinicius Dallacqua [01:08:00]:
Okay. So for my picks, I have two things, since we're on the topic of AI and ML. I've started to dive into ML training and machine learning, building models and that kind of stuff. And I actually chose to do it with Elixir as the language. It's a brilliant language. José is a fellow Brazilian; I'm also from Brazil. And it's a really nice functional language to get started with.
Vinicius Dallacqua [01:08:25]:
And one of the things is that, especially coming from JavaScript, there's not as much tooling fatigue. Because within the Elixir ecosystem, things are very coherent, and, you know, all the options are kinda built by the same people. So everything kinda speaks the same language, and things flow very easily. It's not like, for instance, starting in Python and then choosing an ML library to do things on top of, and compatibility issues and all that kind of stuff. So that's my technical pick. Nontechnical: I'm finishing the last season of Sweet Tooth. And if you haven't watched that series, it's very good to watch with the kids, if the kids are old enough.
Vinicius Dallacqua [01:09:06]:
It's a very nice series to watch for all ages. It kind of tries to be lighthearted in an apocalyptic setting, which is an interesting thing.
Charles Max Wood [01:09:15]:
Awesome. Well, I'm gonna throw in a couple of picks of my own. First of all, I just wanna point out, if you are interested in Elixir, we have an Elixir podcast. It's called Elixir Mix. I am not on that one, but those guys are awesome, and they cover things very, very well. I've also talked to José on multiple occasions, from when he was in the Ruby community and then when he started doing the Elixir stuff. So it's a very, very cool language. And if you're looking
Dan Shappir [01:09:42]:
Whenever I hear about Elixir, it's usually in the context of how great it is. And to my shame, I've yet to learn it. Maybe I should start listening to that podcast then.
Charles Max Wood [01:09:52]:
Yeah. Yeah. It's
Vinicius Dallacqua [01:09:53]:
It's a very, very not-so-hard language to get into. Even though it's fully functional, it's a very, very easy programming language to learn. Yeah.
Charles Max Wood [01:10:02]:
Very approachable. I'm gonna start out with a game pick. I always pick a board game or a card game. This week it's a card game called 6 Nimmt. That's n-i-m-m-t. Now, when I looked it up on BoardGameGeek, it says that it is the same game as Take 5. It has a bunch of other names, one of which looks like it's Italian. Category 5, Take 6, blah blah blah.
Charles Max Wood [01:10:30]:
Anyway, BoardGameGeek rates it, or gives it a weight, of 1.19, and says ages 8 and older can play it, which is probably pretty accurate. Really quickly, what it is: everybody plays a card face down. So you put out 4 cards; there are 4 piles. Then you put out your card face down, and then you go from lowest to highest, and you put the card onto whatever pile it would go on, right? It goes on whatever number is below it and nearest to it. So, you know, if there's a 34 out there and you put out a 35, it's gonna go on the 34.
Charles Max Wood [01:11:11]:
If you put out a 40, the 40 will go on it, as long as there's nothing between 35 and 40 out there. Otherwise, it'll go on the other card, right? And it's always the highest card in the pile that you're playing on. When you play the 6th card on a pile, you get all 5 cards in the pile, and that 6th card that you played becomes the start of the next pile. The different cards are worth different points. And the only other rule, I guess, is that if you play a number that is lower than all of the piles out there, then you just take whichever pile you want and replace it with your card. So it's like you played the 6th card, but a lot of times there's a 1 or 2 card pile that's only worth 1 or 2 points, and so you take that pile, because you're trying to get the lowest number of points.
Charles Max Wood [01:11:53]:
You play till somebody gets to 66, and the game's over. There, now you know how to play it. It was a lot of fun. The cards are numbered 1 to 104, I think. And you just deal out 10 cards to every player and put the rest off to the side. So some of the numbers aren't gonna be out there. But, anyway, it was a lot of fun.
Charles Max Wood [01:12:15]:
I think we played it in, what, 10, 20 minutes? Yeah. I mean, it's a real quick play. But if you're looking for a fast, fun game, 6 Nimmt, or Take 5, or whatever it's called where you live, that was a super fun game. I'm also gonna pile on the AI stuff. So I've started getting into writing AI code. I've been playing with it in both Ruby and JavaScript. I have to say that a lot of the tools for AI are actually nicer in Ruby. That might surprise some people.
Charles Max Wood [01:12:45]:
But, anyway, what I'm looking to do, so keep an eye out for this: I bought the domains AI for Ruby, f-o-r, Ruby, and AI for JavaScript. And I'm gonna put together a newsletter for both of those topics. I'm also going to be putting on a summit, probably the week after Labor Day here in the US, which is the first Monday of September. So it'll probably be Friday and Saturday. I'll do the Ruby one first and then the JavaScript one right after that. And then 2 weeks after that, so toward the end of September, beginning of October, I'm gonna do a 3 month boot camp, and I'm gonna be teaching people how to do AI. That's the difference between my boot camp and others: it'll be a 3 month boot camp.
Charles Max Wood [01:13:33]:
So we're gonna get into prompt engineering and, you know, building chatbots, and, right, we're gonna be using the APIs. And, you know, we might do some model training, but it's gonna be fairly lightweight, you know, so you don't have to know the math. You don't have to have a deep understanding of the models. This is how you add AI to your app, you know, your web app, basically. But it could be a desktop app or something else if you're writing it in Ruby or JavaScript. And if your primary language is a different language, maybe it's Elixir, I'm almost certain they have libraries that attach to the same stuff, and so you can probably figure it out. And I may or may not be able to point you in the right direction.
Charles Max Wood [01:14:16]:
But this is what we're gonna do. And so, yeah, AI for JavaScript and AI for Ruby dot com. If you go to those websites, you'll be able to sign up for the newsletter for free. You'll be able to sign up for the summit for free. If you want the videos after the summit, that's what you're paying for on those. And then the boot camp will cost money as well. But I wanna talk to people and make sure that they're in a position to actually take advantage of the boot camp, because it's not gonna be a cheap thing.
Charles Max Wood [01:14:47]:
So anyway, that's what I'm working on these days. And, really, I'm digging it. It's fun, fun, fun stuff. Another pick that I have, this is a Ruby Rogues episode. I'll put a link to it in the show notes. We recently talked to Obie Fernandez, and he has a book about building AI stuff. It's language agnostic. I don't think we've released the Ruby Rogues episode yet.
Charles Max Wood [01:15:15]:
We might have. But, oh yeah, it is up. Here it is; I'll get the link in here. But, anyway, I'm really, really loving the AI stuff. I think it's amazing. The only thing that I guess JavaScript has that Ruby doesn't is TensorFlow.js. Right? There's no TensorFlow Ruby, so you have to interface through JavaScript or Python.
Charles Max Wood [01:15:48]:
But beyond that, yeah, I'm really enjoying just what you can put together with it. And I think a lot of the capabilities that come out of it go well beyond just the prompt for something like ChatGPT. Obie actually explains it very well. What he's done is he's built, essentially, virtual AI assistants that take the prompt, but they also have API capabilities into systems that you use, like your email and stuff. And so you can actually write a prompt where it'll go find an email for you, or respond to certain kinds of emails for you, or things like that. And so, you know, you start getting into: okay, I'm not gonna just prompt you to write something out or give me an answer. I'm gonna prompt you to actually go do something useful.
Charles Max Wood [01:16:38]:
So, anyway, I have pontificated on that longer than I needed to, and I am very much enjoying what I've got there. One other pick that I have, and this is a technology pick: it's something put out by Basecamp, or 37signals. It's called Turbo Native, and it's essentially a way of wrapping your web applications into a native app that you can deploy to the app stores on Android and iOS. And I have been very, very happy with what I've been able to do with that, without having to write a full on native app. And I can see the pros and cons. Right? I mean, if you're not on the web, it won't work. Right? And, you know, maybe you can use some local storage or things like that to make it do what you want, maybe load anyway.
Charles Max Wood [01:17:28]:
But the majority of it just requires you to operate, you know, connected to the Internet, and most people are operating on their phones connected to the Internet anyway. So, you know, anyway, for what that's worth, I'm really, really enjoying that. I'm looking at ways to get my web apps onto other systems beyond phones, too, like the Fire TV Stick and things like that. But the Fire TV Stick has a way of wrapping web apps anyway. And so, anyway, that's just another area that I'm diving into, and I might put together a boot camp or a course on that as well. But, anyway, Turbo Native is my last pick.
Vinicius Dallacqua [01:18:09]:
For AI in Elixir, you have Nx and Bumblebee. So if you're trying to get familiar with that subject, training models and running models, Nx and Bumblebee have got you covered.
Charles Max Wood [01:18:21]:
Cool. I
Dan Shappir [01:18:22]:
just tapped a button on
Charles Max Wood [01:18:23]:
my computer. Oh, there we go. It made the window go away, and I couldn't click on it. Alright. Good deal. Well, thanks for coming. I always hesitate to say your name.
Vinicius Dallacqua [01:18:37]:
Vinicius.
Charles Max Wood [01:18:38]:
Vinicius. Thank you for coming. This has been really cool, and I love kinda just getting into: hey, you know, these are the steps and levels to performance, and how to read the data, and then how to make your case with the data.
Vinicius Dallacqua [01:18:52]:
Yeah. My pleasure. Always happy to talk about it.
Charles Max Wood [01:18:56]:
Alright. Well, we'll go ahead and wrap it up here. Until next time, folks. Max out.