Sentry's Impact on Web Vitals Understanding - JSJ 632
Special Guests:
Lazar Nikolov
Show Notes
Lazar Nikolov is a full-stack engineer on the DevRel team at Sentry. He joins the panel for a wide-ranging conversation that moves from historical accuracy and book recommendations to practical insights on web performance monitoring. The panelists reflect on significant global issues, including Holocaust Memorial Day and ongoing conflicts, before digging into the details of improving website performance with tools like Sentry. Stay tuned for a thought-provoking discussion that combines expert analysis with real-world applications in web development.
Sponsors
- "Testim, who makes an end to end testing tool"
- Chuck's Resume Template
- Developer Book Club
- Become a Top 1% Dev with a Top End Devs Membership
Socials
- LinkedIn: Lazar Nikolov
- GitHub: nikolovlazar
Transcript
Charles Max Wood [00:00:04]:
Hey, everybody. Welcome to another episode of JavaScript Jabber. This week on our panel, we have Steve Edwards.
Steve Edwards [00:00:13]:
Oh, AJ's here, so I can't use the yo yo yo. So I'll just say hello from a still cloudy and rainy Portland. But for those of us skiers, we're still getting snow up in the mountains, blowing pretty hard, so it's great for late spring skiing.
Charles Max Wood [00:00:29]:
Yeah. It was wet here. And then when I was driving my daughter to school, we saw a couple of vehicles with snow on them that must have come down out of the canyon or something. We also have AJ O'Neil. Oh, go ahead.
Steve Edwards [00:00:41]:
I was gonna say, yeah, when you get up above start where you climb up Mount Hood towards Timberline, it's immediate change, and you're still getting blowing snow. Nice. It's crazy. It's great for skiing. I was up yesterday. It was awesome.
Charles Max Wood [00:00:55]:
Nice. Alright. We also have AJ O'Neil.
AJ O'Neil [00:00:59]:
Yo yo yo coming at you from the fish room.
Dan Shappir [00:01:03]:
What's the fish room?
Steve Edwards [00:01:05]:
Fish room.
AJ O'Neil [00:01:07]:
Yeah.
Charles Max Wood [00:01:07]:
Is that an aquarium over there? Or
AJ O'Neil [00:01:09]:
Yes. Yeah. You can't really see it, can you?
Charles Max Wood [00:01:12]:
But, no, it's pretty small.
AJ O'Neil [00:01:15]:
Yeah. So I I've got, like, 10 aquariums in here right now. So, during the long hiatus, I I got into Aquaria. That was that was my my winter depression hobby.
Dan Shappir [00:01:28]:
The age of Aquarius?
Steve Edwards [00:01:30]:
AJ, I got a pick for you. I got a great pick for you when we get to picks.
Charles Max Wood [00:01:34]:
Excellent. Right? Are you an Aquarius? Is that one of the signs, or am I just making that up?
Steve Edwards [00:01:40]:
No. It's actually Aquarium. No. I'm kidding. Yes. It's Aquarius.
AJ O'Neil [00:01:44]:
A person who's into Aquaria is an Aquarist.
Charles Max Wood [00:01:50]:
Wow. Things I never needed to... I mean, we also have Dan Shappir.
Dan Shappir [00:01:54]:
Hello. From a warm and sunny Tel Aviv, which somehow didn't prevent me from getting this really bad cold for the entire week.
Steve Edwards [00:02:02]:
You do know that getting a cold has nothing to do with temperature. Right?
AJ O'Neil [00:02:06]:
Yeah. That's not true.
Dan Shappir [00:02:07]:
That's what I think. It has to do with changes in the weather, I think. Like, certain changes make you more susceptible.
AJ O'Neil [00:02:14]:
Yeah. It's it's stress to your body. When the temperature swings or when you're cold for a long period of time, your immune system dips. So bacteria and viruses that are all around you that weren't affecting you then suddenly affect you. Also also, I became allergic to the earth, by the way, since since last time we spoke. I have allergies now. First time.
Dan Shappir [00:02:37]:
The entire planet?
AJ O'Neil [00:02:39]:
Yeah. I think so. Yeah. I wake up in the morning. Well, actually, it hasn't been so bad. It is, like, 3 weeks or so. I wake up in the morning. My eyes were itchy.
AJ O'Neil [00:02:47]:
I'd sneeze, sneeze, sneeze, sneeze. Oh, it was terrible.
Steve Edwards [00:02:50]:
Well, it's getting close to hay fever time, which is what I get. Although here, it's been so wet that it's keeping everything down, but this is the best time of year that I get hay fever.
Charles Max Wood [00:02:58]:
I I get that too, except that, the rain that we get is like the light rain that you get. So it's just enough
Dan Shappir [00:03:05]:
to make
Charles Max Wood [00:03:05]:
everything bloom like crazy. Yeah. So I haven't had a cold, but I've had a lot of that what AJ is complaining about for the last few weeks. So I just I live on my Zyrtec. I have it at my desk because I just I just take it as soon as I said Yeah.
Steve Edwards [00:03:20]:
I know what you mean.
Charles Max Wood [00:03:21]:
Anyway, I'm Charles Max Wood from Top End Devs. This intro is taking forever, so I'm just gonna skip all the pleasantries. Go check out javascriptgeniuses.com. And we have a special guest this week and that is Lazar, Nick Nikolaev. Did I get anywhere close?
Lazar Nikolov [00:03:38]:
That was alright. Yeah. It's Lazar Nikolov. And thanks for... that's what
Charles Max Wood [00:03:42]:
I meant
AJ O'Neil [00:03:42]:
to say.
Steve Edwards [00:03:45]:
You were close. You got the syllables right just in the wrong pronunciation.
Charles Max Wood [00:03:50]:
Yeah. Anyway, we brought you on through Sentry.
Steve Edwards [00:03:54]:
Well, Lazar, after listening to all of that, or was that just so entertaining that you're on the edge of your seat? No. I'm alright. Okay. Good.
Charles Max Wood [00:04:02]:
Yeah. All the back and forth. Matt Henderson from Sentry recommended that we have you on talk about web performance stuff and maybe some of the stuff that Sentry provides. And so, yeah, do you wanna just, fill us in on who what else we need to know about you? We had a long discussion about Macedonia before you.
Lazar Nikolov [00:04:20]:
Yeah. Jumped in.
Charles Max Wood [00:04:21]:
So
Lazar Nikolov [00:04:21]:
yeah. Well, thanks, Matt, for recommending me. Thanks, bud. But, yeah, I'm Lazar Nikolov. I am a part of the DevRel team here at Sentry, and I'm all about web performance. That's my main focus.
Charles Max Wood [00:04:37]:
Very cool.
Dan Shappir [00:04:37]:
Cool. I I'm really I really love web performance. I dig it. Right. Yeah.
Charles Max Wood [00:04:44]:
So so I'm a little curious just as we get into this, you know, and and Dan fills us in periodically. It's like, hey, there's this thing. But but what kinds of things are you focused on at Sentry as far as making people's web apps more performant or more user friendly or, you know, whatever it is that, you know, you're you're working on these days in the web performance arena?
Lazar Nikolov [00:05:08]:
Yeah. So I'm focused on the educational parts, and not necessarily just the Sentry educational part, but just, like, in general, you know, how to develop good development habits so your performance is in order, basically. That's all I do.
Dan Shappir [00:05:28]:
But So you're what? Yeah. Okay. If maybe I I I do want to direct us first at slightly a different direction because we have been speaking about performance a lot on this podcast of late.
Steve Edwards [00:05:40]:
Mhmm.
Dan Shappir [00:05:40]:
But but we've not spoken about Sentry in a while. I mean, we've had some people from Sentry, like, I think, like, year or 2 ago. But maybe some of our listeners somehow are not familiar with who Sentry are or what Sentry is and what it's bringing to the table. So maybe we should actually start a little bit talking about Sentry before we then go talk specifically about Sentry and performance.
Charles Max Wood [00:06:06]:
Yeah. Sounds good.
Lazar Nikolov [00:06:10]:
Should I go for it?
Charles Max Wood [00:06:11]:
Yeah.
Lazar Nikolov [00:06:12]:
Awesome. Yeah. So, for those for those of you who don't know, Sentry is an, error monitoring and APM solution for any of your web apps and mobile apps and, I don't know, desktop apps and anything, basically. So what it does is, it offers, I would say that the like the best in class, error monitoring. Every time something wrong happens in your app, you get an email, you get a Slack message, you get a Teams message if you're there, you get a you get paged. If you're on PagerDuty, you get a ticket in Jira, etcetera, etcetera. You just get annoyed basically from all the sides. And that's just for the error monitoring aside from just telling you, you know, your app's on fire.
Lazar Nikolov [00:06:56]:
You also get a whole bunch of cool and useful information like hardware info, software info, who the user is, if, you know, you have configured user tracking, etcetera. Yeah. All you need in order to fix the bug. So that is the error side, and there's also the performance side, or just in general APM, because there's more than just performance. You get monitoring for web vitals, networks, resources, traces, which are becoming, like, a central part at Sentry right now. And, yeah, it's a tool that you install in your application. Whatever the SDK is, we pretty much have support for all of them. And it lets you know when something bad happens, and it keeps your performance in order.
Lazar Nikolov [00:07:56]:
That's basically what Sentry is.
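For context, getting that error monitoring in place is mostly a matter of initializing the SDK with your project's DSN. Here is a minimal sketch for a browser app based on the public @sentry/browser API; the DSN, release name, and riskyOperation function are placeholders, not anything from the episode:

```typescript
// Minimal sketch of wiring Sentry error monitoring into a browser app.
// Assumes the @sentry/browser package; the DSN below is a placeholder.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN from project settings
  release: "my-app@1.2.3",       // hypothetical release name, used to group issues per deploy
  environment: "production",
});

// Stand-in for real application code that might throw.
function riskyOperation(): void {
  throw new Error("something went wrong");
}

// Uncaught exceptions and unhandled promise rejections are reported automatically;
// handled errors can be forwarded explicitly.
try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err);
}
```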
Dan Shappir [00:07:58]:
So first of all, you use the acronym APM. Can you explain what that means?
Lazar Nikolov [00:08:03]:
Yeah. It's, application performance monitoring. Cool. Now
Dan Shappir [00:08:08]:
if we if yeah. It's continuously what? Sorry.
Lazar Nikolov [00:08:12]:
It's basically continuously measuring, and reporting, metrics such as web vitals and what else? Yeah. Like like I mentioned, resources that your website is pulling in, network request that, you know, the app is triggering, etcetera.
Dan Shappir [00:08:36]:
Now if we start with with the legacy stuff sorry, Chuck. You want you were saying to say?
Charles Max Wood [00:08:41]:
I I was just gonna say, you know, when you asked what it measured, I mean, we keep hearing web vitals. We talked a lot about that, but I've also seen them track, other things. Right? So, Core Web Vitals is usually used for SEO. You can go listen to the handful of episodes we've done, but I'm of a I've also seen it track, how long certain function calls take or, on the back end, how much time it's taking to access the database and execute queries and things like that, or how long this request took to to a back end or an API or things like that. So it's it's more than just web vitals. But a lot of the business folks, that's what they're gonna care about. They're gonna care about, how performant is it and does it match the web vitals so that we can show up higher in the list on Google.
Dan Shappir [00:09:29]:
Although to be fair, and I and I keep saying that. I mean, on the one hand, it's a really good thing that Google made, performance or core vitals a ranking signal because it really pushes the entire industry to to improve that aspect of of, you know, web websites and web applications. But the reality is that the impact on ranking in most cases is not that massive, to be honest. It's it's often considered to be a tiebreaker rather than a decider. Like, the big aspects of ranking are still things like, you know, content. Content is king. So whether it's relevant content and also authority. Like, you know, if you're going to ask something related to health or medicines, then the CDC or something like that will come up first and for good reasons.
Dan Shappir [00:10:26]:
Or if you ask something about the news, then CNN will likely appear high in the search results even though their performance is historically atrocious, because it's more about those aspects. But Right. First of all, it still is important because it can be a tie breaker. But more significantly than that, it actually has significant impact on user experience once they're in the page or in the website. And, lots of research has shown that when performance is poor, bounce rate is high. And conversely, that when performance is good, then conversion is high. So there's definitely a significant motivation to improve that. But, again, I want to pull us back a little bit because, again, Sentry has historically been known for the error tracking and monitoring.
Dan Shappir [00:11:19]:
I think that's still what most people think about
Steve Edwards [00:11:22]:
Mhmm.
Dan Shappir [00:11:23]:
When they think about Sentry. So I do want to talk a little bit about this before we dive into performance if we can. Per my understanding these days, Sentry is kind of full stack. It does both the front end and the back end and provides a holistic view of all this information together. Is that a correct summation?
Lazar Nikolov [00:11:45]:
Yeah. Basically, when you have multiple projects in Sentry and, you use Sentry's SDKs to connect them, you automatically get a distributed trace. You can start looking at the front end and then move on to the back end and database and then go back whatever your operation flows are configured.
Dan Shappir [00:12:04]:
And, again, per my understanding, Sentry is something that I can install effectively for free on premises, or I can use kind of your hosted server, which then I have to pay for. Is that correct?
Lazar Nikolov [00:12:19]:
Yeah. Yeah. It's, free. So, self hosted. Yeah.
Dan Shappir [00:12:26]:
So when I self host it, it's free and it think it's also or at least partially open source?
Lazar Nikolov [00:12:34]:
It's, like, source available. Yeah. So the license is not MIT. We came up with our own, but that's just for, you know, protecting against other businesses, you know, piggybacking on Sentry and repackaging it. Mhmm. But it is, yeah, it's source available.
Dan Shappir [00:12:59]:
So if I have, like, an error in my JavaScript, you know, something that would would get written to the console or an Ajax call that fails for some reason, an uncaught promise, rejection and uncaught exception, so on and so forth. All of these things are collected and then exposed to the site owner through the Sentry management console effectively. If I if I, again, understand correctly.
Lazar Nikolov [00:13:33]:
Yeah. So, like, every SDK has a DSN configured, which is basically a link to where your instance is running. If you're self hosted, you're gonna have your own URL in there. So that basically tells the SDK: whatever you capture and, you know, measure, send it over there. Yeah. So when you're on premise, the data is on your servers.
Dan Shappir [00:13:58]:
Understood. And one of the things that I recall that I really liked about Sentry was the fact that it was smart enough to group errors together. Like, one of the problems with a lot of these monitoring solutions is that you could literally drown in a sea of errors because let's face it. You know, so much stuff is getting collected and reported. So you you don't want to, you know, have to sift through tons and tons of of logs and errors and and stuff like that. You want the system to aggregate the results and then just show you, you know, this is something that's important. This is something that you should look at. So that's one thing that I recall about being really powerful with Sentry.
Dan Shappir [00:14:44]:
And another thing that I recall being really powerful with Sentry is the amount of information that was exposed that could help you. Okay. You know, there's an let's say, an, an uncaught prom a promise rejection. Why? Like, what is this promise? What what's it's what's the value that it's trying to get? So all this information is also surfaced to you as the, website or web application owner.
Lazar Nikolov [00:15:16]:
Yeah. Totally. I mean, you you get all of those information attached to the issue itself to we don't call them crashes or anything. We just call them issues. So, like, it's it's all there. Yeah.
Dan Shappir [00:15:30]:
So how then did you actually pivot into performance? I mean, if you had this focus on errors and collecting errors and reporting errors, where's this sudden focus on performance? Where did it come from?
Lazar Nikolov [00:15:47]:
Not sure to be honest, because I I think there was, like, a few years, before I started in Sentry, but, I think it was natural. We we got the we had errors. We had tracing. And with tracing, you can, identify all the performance bottlenecks. There's also profiling, which taps into the whole, like, connects into the whole tracing and and performance, topic. So I'm not sure, like, what is the exact moment that prompted Sentry to start looking into performance. But, we just we we just know that performance is super important.
Dan Shappir [00:16:31]:
When you say tracing, what exactly do you mean? Yeah.
Lazar Nikolov [00:16:35]:
I mean, we mean, like, tracing the operation flow for a specific operation. That can be, I don't know, a checkout flow or a login flow. So as functions execute and, you know, API calls get executed, tracing is basically putting little bread crumbs along the way and then capturing the whole path of the operation flow. So you can see where the operation flow went, in, I don't know, what microservice, and how long it took for each of the functions or whatever you have defined or instrumented.
Steve Edwards [00:17:18]:
So sort of like a stack trace then, with variation?
Lazar Nikolov [00:17:23]:
Yeah. A stack trace is like the snapshot of the stack at that specific moment, but tracing is more on a timeline basis: where the operation went. So for example, if we start at the front end, we could have some spans, and spans are the things that we use to define the trace, right? So we basically create a trace, and the SDK automatically does it for you. But let's say we just create a trace, we get a trace ID, and then we sprinkle around spans which are connected to the trace, but those define what happens at a specific moment. So instead of just console.log, "I am here" or console.log, "we reached this point", you're basically creating spans, which also have a starting point and ending point. They have tags. You can put tags in them, etcetera. And those don't just stop at the, I don't know, front end.
Lazar Nikolov [00:18:36]:
You could send the trace header to the back end and continue the trace on the back end. So then when when you want to debug how, a certain operation behaved or, like, what happened, you see one timeline that, combines the data from the front end and the back end or your back end. If your back end is a microservice architecture, then you'll just have the data from all of the microservices there on one timeline, let's say.
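As a rough illustration of the spans Lazar describes, here is what manual instrumentation of a front-end operation flow can look like with the JavaScript SDK's startSpan helper. The span names, ops, and endpoint are invented for the example:

```typescript
// Sketch of a custom "checkout" operation flow instrumented with nested spans.
// Assumes @sentry/browser with tracing enabled; names and the /api/checkout
// endpoint are illustrative only.
import * as Sentry from "@sentry/browser";

async function checkout(cartId: string): Promise<void> {
  await Sentry.startSpan({ name: "checkout", op: "app.checkout" }, async () => {
    await Sentry.startSpan({ name: "validate cart", op: "app.validate" }, async () => {
      // ...client-side validation logic...
    });

    await Sentry.startSpan({ name: "POST /api/checkout", op: "http.client" }, async () => {
      // With tracing configured, the SDK can attach trace headers to outgoing
      // requests so a Sentry-instrumented backend continues the same trace.
      await fetch(`/api/checkout?cart=${encodeURIComponent(cartId)}`, { method: "POST" });
    });
  });
}

// Example call site:
void checkout("cart-123");
```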
Dan Shappir [00:19:05]:
So a question about that: is it kind of like OpenTelemetry traces?
Lazar Nikolov [00:19:12]:
Pretty much. Yeah. And we also have an OTel adapter. So you could use the OTel instrumentation to get the data and then send it to Sentry as well.
Charles Max Wood [00:19:25]:
There's... what is OTel? Sorry? OTel. What is that?
Lazar Nikolov [00:19:30]:
Alright. It's OTel, OpenTelemetry.
Charles Max Wood [00:19:32]:
That is Oh, okay.
Lazar Nikolov [00:19:33]:
Yeah. That's o an open source standard, I would say. There's also
Dan Shappir [00:19:39]:
Standard and implementation, I think.
Lazar Nikolov [00:19:41]:
Yeah. It's both. And implementation and tooling, around the instrumenting parts of your application.
Dan Shappir [00:19:47]:
Basically, think about if if you want to, like, have a, like, a mental image of what is meant by traces as Lazar describe them. It's kind of similar kind of similar to what you see in the, performance, tab of the Chrome Dev Tools. Like so it's kind of or, like, what is often known as a flame chart. You kind of see each of those traces are like one of the, levels in the flame chart. So you have, like, a span that starts and ends when a certain operation takes place from the beginning to the end of that logical operation. But we within that, you have span. So you could think of a function. So you have a span for the execution time of that entire function, but it calls sub functions and they have their own spans within that span.
Dan Shappir [00:20:39]:
So you can, like, go through, like, the execution. So it's like, stack trace over time as it
Charles Max Wood [00:20:47]:
were. Okay.
Dan Shappir [00:20:50]:
And and it's now you collect this for, like, every session or, like, is it sampled? Or how how does it work?
Lazar Nikolov [00:20:58]:
Yeah. There is a sampling configuration already put into the SDK, so you can play around with the values. It's basically just from 0 to 1. That's how Sentry configures it. So if you want 10% of your sessions to be sampled, then you just type in 0.1. So you don't...
Dan Shappir [00:21:16]:
the sampling doesn't adversely impact the performance of the session? Like, the user doesn't notice it that it's being sampled in this way?
Lazar Nikolov [00:21:24]:
I wouldn't say so, but depending on on on you. For example, if you're instrumenting everything, all the, you know, important and non important bits and pieces in your application, then you'll probably see some performance overhead. But that's yeah. That that's all in your control, basically.
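Concretely, that 0-to-1 knob is set when the SDK is initialized; something like the sketch below, where the sample rates are made-up values and the commented-out tracesSampler shows the per-trace alternative:

```typescript
// Sketch of performance sampling configuration for the browser SDK.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [Sentry.browserTracingIntegration()],
  // Trace roughly 10% of sessions, as in the 0.1 example from the conversation.
  tracesSampleRate: 0.1,
  // Alternatively, decide per trace (hypothetical rule: always keep checkout flows):
  // tracesSampler: (ctx) => (ctx.name?.includes("checkout") ? 1.0 : 0.05),
});
```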
Dan Shappir [00:21:45]:
Cool. So we were talking about the fact that you have this tracing mechanism. And from that, you kind of moved into also performance monitoring.
Lazar Nikolov [00:21:57]:
Yeah. Yeah. Because the trace contains everything. Right? The the trace knows where, the application went or the operation went and how long it took so we can identify performance bottlenecks. But then what we do at Sentry as well is whenever an error happens, we attach it to the trace. So you can see on the timeline, on the flame chart, you can see where an error happens and and how the operation behaved after, you know, the error got triggered.
Dan Shappir [00:22:30]:
Also, I assume how you actually got to that point of the error because that's often the thing you most want to know.
Lazar Nikolov [00:22:37]:
Exactly. Yeah. The the spans will give you the information about what data, you know, was being handled and basically what happened leading up to the error.
Dan Shappir [00:22:51]:
So do you have, like, something like I I'm like, I can I literally see things like what, you know, like a, sort of like, what the user was actually even maybe seeing on the screen or stuff like that when the error happened? Or
Lazar Nikolov [00:23:08]:
Yeah. We do. It's like a different sub product. It's called Session Replay, and it's just an integration that you need to install in your client facing applications. Right now it's just web, but we're coming up with support for mobile as well. So it's like DOM recording. It's not a screen recording, but it basically records the DOM and sends, yeah, in a sense, the recordings along with the whole, you know, monitoring and capturing.
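Per the Sentry docs, Session Replay is added as one more integration at init time; a sketch with illustrative sample rates:

```typescript
// Sketch of enabling Session Replay in a browser app.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [Sentry.replayIntegration()],
  // Record a small share of ordinary sessions, but every session that hits an error.
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,
});
```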
Dan Shappir [00:23:41]:
Oh, that's enough. I mean, it does represent what the user actually sees. Yeah. And
Lazar Nikolov [00:23:47]:
that's not all. Like, you also get the network requests. You also get the console log, console error, or whatever you're outputting into the browser's console. It's like you literally have access to the person's computer.
Charles Max Wood [00:24:05]:
That one one question I have related to that is a lot of times there's information, be it personally identifiable information or passwords or things like that. I I'm assuming that you can configure Sentry to screen all that out. So does it show up in the
Lazar Nikolov [00:24:20]:
Yeah. Yeah. Sentry scrubs that out automatically, and that happens on both sides. For example, the SDK itself does some scrubbing, so there is no PII going over the wire. But then also, before we start processing the data, there's a thing called Relay, which takes in all the data and does additional scrubbing on the server side as well before we store it. I think the self hosted one has it, but then the SaaS one also supports the scrubbing of the PII. But it's configurable. For example, if you implement tracing and you explicitly append PII through the context or tags, whatever it is, it's going to be sent.
Dan Shappir [00:25:15]:
So you need to think about what it is that you're actually collecting. That's the bottom line.
Lazar Nikolov [00:25:20]:
Yeah. Like, out of the box, if you don't explicitly send data, it's not gonna be sent. But if you're, you know, if you're configuring the context, of the SDK to include the logged in user or whatever data you want to, you know, attach, to the to the context, then it's going to be sent. But out of the box, it's, nothing gets sent, basically.
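On the SDK side, the relevant knobs look roughly like this; sendDefaultPii and beforeSend are documented options, while the header-stripping logic is just an illustration of extra client-side scrubbing:

```typescript
// Sketch of PII-related configuration in the browser SDK.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Keep the default behavior of not sending IP addresses or other default PII.
  sendDefaultPii: false,
  // beforeSend runs on every event before it leaves the browser; server-side
  // scrubbing in Relay happens in addition to whatever you do here.
  beforeSend(event) {
    if (event.request?.headers) {
      delete event.request.headers["Authorization"]; // hypothetical header to drop
    }
    return event; // return null instead to drop the event entirely
  },
});
```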
Dan Shappir [00:25:45]:
So when you added RUM capabilities, and by the way, RUM in the context of performance is real user measurement, which means it's data from the field rather than synthetic data created in simulated environments. You mentioned that you obviously collect the Core Web Vitals, which these days are LCP, Largest Contentful Paint, CLS, Cumulative Layout Shift, and INP, which is in...
Lazar Nikolov [00:26:17]:
Interaction to Next Paint.
Dan Shappir [00:26:19]:
Interaction to Next Paint. Exactly. Like I said, I've had a head cold. I'm still recovering. But what other performance data are you collecting that's relevant to identifying performance bottlenecks in the application?
Lazar Nikolov [00:26:38]:
Yeah. So there's also database monitoring. So whatever you use, Sentry essentially has an integration for your database driver. There's a built-in one for Postgres. There's also one for Prisma if you're using Prisma. But if not, there's a way you can manually attach the query and also the results in the span itself when you're instrumenting. So there's also database monitoring, or, like, you're monitoring the queries as well, yeah, aside from the web vitals. We also measure all of the resources that your page is pulling in: JS files, CSS files, whether they're blocking or not, images as well, how big they are, like, whether there are opportunities for you to optimize them, etcetera. So there's quite some data getting pulled into the Sentry dashboards for you.
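For drivers without a built-in integration, the manual route Lazar mentions amounts to wrapping the query in a span and attaching the statement yourself. A sketch using the Node SDK's startSpan, with a stand-in runQuery helper in place of a real driver:

```typescript
// Sketch of manually instrumenting a database query as a span.
import * as Sentry from "@sentry/node";

// Stand-in for whatever database client/driver the app actually uses.
async function runQuery(sql: string, params: unknown[]): Promise<unknown[]> {
  // ...real driver call goes here...
  return [];
}

async function getUser(id: number): Promise<unknown> {
  return Sentry.startSpan(
    {
      name: "SELECT users by id",
      op: "db.query",
      attributes: { "db.system": "postgresql" }, // illustrative attribute
    },
    async (span) => {
      const rows = await runQuery("SELECT * FROM users WHERE id = $1", [id]);
      span.setAttribute("db.rows_returned", rows.length); // extra detail on the span
      return rows[0];
    }
  );
}
```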
Dan Shappir [00:27:45]:
Cool. One of the biggest challenges with performance monitoring in the field, or RUM, as I mentioned, is the concept of attribution, which basically means: let's say I have a bottleneck. My LCP, Largest Contentful Paint, is high, which means it takes a long time for the primary content to be displayed from the session start, which is fine and good. But I want to understand why it happens and what I should change. Or, even potentially more challenging, the new metric, INP, Interaction to Next Paint, has to do with how often the main thread is blocked by, for example, long running JavaScript code. But there might be a lot of different JavaScript code that is running inside the context of my session. Could be first party code. It could be third party code. For example, various tag managers, pixels, and whatnot.
Dan Shappir [00:28:58]:
So and I want to understand, you know, which one is the one that's causing the most harm, the most damage, and that requires some sort of mechanism of attribution. So, how do you go about that?
Lazar Nikolov [00:29:15]:
Yeah. So, I feel like you have all the tools, to fix all these things. For example, if you're looking into INP, then maybe capturing a profile and looking at a profile, of, you know, of your running application can actually shed some light on what is happening in the background and why your, you know, your website is experiencing INPs, at the time of, I don't know, interacting with the element?
Dan Shappir [00:29:44]:
So it's not profiling on the developer's machine, not like opening the Chrome DevTools and running the profiler tab. It's actually collecting profiles for the actual real user sessions.
Lazar Nikolov [00:29:58]:
Exactly. Yeah. It's configurable. Yeah. You can also pull in profiles and, you know, use them to debug INPs or whatever it is. Yeah.
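Browser profiling is likewise opt-in at init time. The sketch below follows the documented integration and option names as I understand them; note that in-browser profiling also depends on the page sending a Document-Policy: js-profiling response header and on browser support for the JS self-profiling API, so treat this as an assumption to verify against the docs:

```typescript
// Sketch of enabling profiling alongside tracing in the browser SDK.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [
    Sentry.browserTracingIntegration(),
    Sentry.browserProfilingIntegration(),
  ],
  tracesSampleRate: 0.1,
  // Profile a fraction of the sampled traces; the profiles attach to those traces.
  profilesSampleRate: 0.5,
});
```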
Dan Shappir [00:30:10]:
So can you give us, like, a concrete example? I'm sure that you work with various cost, customers on that. So without naming names, can you, like, give us an example of, like, an interesting case that you ran into and were able to debug in this way?
Lazar Nikolov [00:30:25]:
I don't work with customers, so I can't give you like, yeah, I can give you examples from my demos that I create.
Dan Shappir [00:30:32]:
Go for it.
Lazar Nikolov [00:30:33]:
But, yeah, I I usually just use a trace trace view because the trace view basically tells me everything I need to know in terms of how the page loads or in terms of how my operations my custom operation flows are behaving because I define them in in my apps. But it basically, that's that's how I I go about debugging any of the, like, the top level stuff. But then, as I mentioned, like, if I have DB problems, I'll just use the queries product. And the idea is that it's it's all here. It's all connected to the same data. And on the sidebar, you have all of these tools, that we're we're talking about, and it's all connected to the same trace.
Dan Shappir [00:31:30]:
And what about integrations with various development environments? Like, let's say I'm using, I don't know, Next.js or Nuxt or Remix or whatever. You know, there are, like, a million frameworks out there. Astro. Mhmm. What kind of integrations do I get? Are you just looking at it as: it's JavaScript and the web, so it's all the same? Or do you have, like, specific integrations for the different frameworks and meta frameworks?
Lazar Nikolov [00:32:06]:
Yeah. We do have specific integrations with all of the different frameworks, a lot of the ones that are currently out there, that people are using or not using. We do that because we wanna tap into the framework itself, or the library or the tool, whatever it is. We do that because we wanna utilize the functionalities that the tool itself is providing when it comes to either instrumenting or monitoring for errors, etcetera. So let's say we're talking about instrumenting, right? We're talking about tracing. A lot of the operations are already going to be instrumented because we tap into the tool itself. And also, connecting the clients, or the projects I would say, you don't really have to care about that as well, because when one project makes an API request in any way to a different one, the SDKs will automatically create that trace for you. So a lot of the times it's enough to just install Sentry in all of your projects, set up the SDKs, and you'll get a lot of data already configured and ready for you.
Lazar Nikolov [00:33:33]:
But if you wanna get into more details, then you do the tracing, basically. But web vitals, it's automatic. Session Replay, you just need to add the integration. The tracing is covered as much as the framework can cover, but if you want more details, or, like, if you want tracing in greater detail, then you can always supplement the trace with, you know, additional spans.
Dan Shappir [00:34:08]:
By the way, out of curiosity, do you use, like, the browser's built-in performance.mark and performance.measure? Or do you have, like, your own wrappers for the spans?
Lazar Nikolov [00:34:20]:
In some places, we do, but we usually use, for example, for core web vitals, there was this, JavaScript library, and I think it was from the the Google Chrome team.
Dan Shappir [00:34:32]:
Yeah. We
Lazar Nikolov [00:34:32]:
we use that under the hood to capture the data.
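That library appears to be Google's standalone web-vitals package. Using it directly, outside of Sentry, looks roughly like this; the /rum endpoint is a made-up place to send the measurements:

```typescript
// Sketch of collecting Core Web Vitals in the field with the web-vitals package.
import { onCLS, onINP, onLCP } from "web-vitals";

function report(metric: { name: string; value: number }): void {
  // Ship the measurement to wherever RUM data is aggregated; sendBeacon keeps
  // the call off the critical path and survives page unload.
  navigator.sendBeacon("/rum", JSON.stringify({ name: metric.name, value: metric.value }));
}

onCLS(report);
onINP(report);
onLCP(report);
```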
Dan Shappir [00:34:36]:
Okay. What are things that you are I don't know if you you can answer that if you're not primarily working with customers, but what are the things that you're seeing as the most common sources of performance issues with people who are using Sentry for monitoring?
Lazar Nikolov [00:34:58]:
Yeah. Well, talking with people, I've noticed that there's a lot of undefined errors, for example, in JavaScript land, where, you know, you're trying to access a property of an undefined variable or object and you get the undefined error. That is pretty much the most common one. And also, like, failed to fetch: fetch requests not happening because of, I don't know, the URL doesn't exist or something like that. I've seen that way too many times, maybe.
Dan Shappir [00:35:31]:
Yeah. But those are general errors. I'm I'm asking what sort of performance related issues are you primarily seeing?
Lazar Nikolov [00:35:41]:
I don't think I can... there's a lot of N+1, when it comes to, like, API requests and also DB queries. What else? I don't know. No, I haven't been exploring too much of the customer data.
Dan Shappir [00:36:01]:
I'm seeing from what an interesting split between 2 types of issues.
Lazar Nikolov [00:36:09]:
Mhmm.
Dan Shappir [00:36:09]:
Like, there's the issues related to, let's say, Largest Contentful Paint, or LCP, which are about how quickly a website loads. Mhmm. And that's not always interesting for all web applications. I mean, it's interesting if you're building an ecommerce website. But if you're building some sort of a dashboard or something like that, or you're sitting behind authentication, then you don't care about it as much. Basically, LCP is really important when your page is ranked. If your page is not getting scanned and ranked, then you often don't really care as much about that. And in these days of server side rendering, SSR, the LCP may not even be dependent on the performance of database calls.
Dan Shappir [00:37:02]:
So that's, like, one category of performance that I'm seeing, which has to do with: how well have I configured my CDN? How well have I configured caching for my files? Am I using properly optimized images? Am I properly loading my fonts and CSS and stuff like that? So that's, like, one category of performance issues that I see. And the other, which is one that you kind of mentioned a couple of times, has to do with: once that web application is already loaded, how well does it respond to the various operations that users do inside of it? In which case it often has to do with, like, when a certain operation is performed, how many Ajax calls does it get translated into? Are these calls parallelized, or are they sequential? If they hit the database, which they often do, how optimized is that database query, for example? Like, you know, have I properly created the indexes for the SQL query that I'm performing? You know, stuff like that. So these are kind of like the 2 categories of performance issues that I'm primarily seeing. Does that kind of match your understanding?
Lazar Nikolov [00:38:28]:
Pretty much. Yeah. Yeah. And then that's how we're also building the product to, to help developers with these two types of, of issues.
Dan Shappir [00:38:42]:
Okay. So if we're going to talk about the database issues. So you said you've got integrations with the major database providers or database platforms. Will it work, like, with any back end that I might have? Is the integration at the database layer? Or does it need to be some sort of... I don't know, we support Node, but we don't support Go. I don't know.
Lazar Nikolov [00:39:08]:
Yeah. We do that through the backend, I would say. But it depends on what the driver is that is being used to access the database. So we're not monitoring the database server or instance itself, but the client that uses the driver to query the database, from that point of view. So it matters if there is support for the client itself, or the back end framework or technology. That is the first thing. And then the second thing is: what driver does the back end use to query the database for data?
Dan Shappir [00:39:57]:
So which backend technologies do you support?
Lazar Nikolov [00:40:01]:
Oh, a lot. We have Node. We have, Django and anything Python. We have PHP stuff
Charles Max Wood [00:40:12]:
like Laravel, etcetera. I've used it with Rails.
Lazar Nikolov [00:40:15]:
Rails. Yeah. A lot of the the the I would say all the most the the the famous ones, like, the most popular
Dan Shappir [00:40:22]:
JVM, I would assume, probably.
Lazar Nikolov [00:40:25]:
Yeah. There's, like, Java stuff. Yeah. I don't know all of them, but there's, like, too many.
Steve Edwards [00:40:31]:
Yeah. Just for what it's worth, my company uses Sentry on a pretty huge Laravel and Vue site. We use it pretty intensely. We, you know, see both Laravel and Vue errors, JavaScript errors, you know, bundling errors, and any number of different errors from both ends of the application. So it's very handy.
Dan Shappir [00:40:51]:
Do you also use it for performance monitoring? Not
Steve Edwards [00:40:57]:
so much. I think we have some other tools we use. We use it mostly just for the error tracking. We get that like he mentioned earlier, we where he's talking about it being annoying. We get emails and the Slack channel updates and and all that kind of stuff. So
Dan Shappir [00:41:12]:
I think the key thing from my perspective, thinking about, you know, making the web faster, what I like about this is there are a lot of websites that are using Sentry. Like, Sentry has become, from my perspective, almost a de facto standard for error tracking and error monitoring on the web today. And if, you know, some of these organizations that are using Sentry already have some sort of a RUM tool in place, good for them. But if they don't, then why not just use Sentry to get all this performance information and start making your website faster?
Lazar Nikolov [00:41:56]:
Yeah. I mean, I've seen, I've seen clients, that only use us for error monitoring to also, you know, implement our performance, tools, whether they have or or don't have, you know, already other tools in place. So I've I've seen them move towards Sentry as well.
Charles Max Wood [00:42:18]:
Yeah. I've I've seen that on a few other places and with other competitors to Sentry, you know, that either went from, hey, we do the APM or the application, performance monitoring, and then they added the error stuff or vice versa. But, yeah, effectively people start using one part of the tool and then when they run into some issue. Right, so it it'll come up that, hey, our core vital scores are not where we want them. Right? Because that got on the radar and they checked it out. Or, hey, we've got this page that just takes forever to load. Right? And so then they figure out, oh, we've been putting all this data into this system for a long time. And so now we have it.
Charles Max Wood [00:43:07]:
And so now we're gonna go look at it. And so then they start to audit what's going on in their website. And I think that's one thing that's kind of nice. So the tool that I use for most of my websites for error monitoring does not have an APM component to it. And, yeah, there have been a couple of times where I've looked at things and gone, man, I really need something to, you know, pull this this piece out. And so, yeah, it it's really convenient to have them both there.
Dan Shappir [00:43:36]:
Another thing is a lot of site owners and SEOs, especially SEOs, they look at the Google Search Console. And in the Google Search Console, you do have a Core Web Vitals area. So they can see, like, performance issues that they have with various pages in their website. The problem with it is that the Google Search Console has, like, this 28 day smoothing window. So you often see issues, like, 2 to even 3 weeks after they've actually started. So you say, hey, you know, I've got a problem, but it's been affecting your website for 2 to 3 weeks already. So that's problem number 1.
Dan Shappir [00:44:29]:
And then you make a fix. It then takes another 2 to 3 weeks to actually see that the fix had an impact. So, yes, you can tell Google Search Console that, like, I made a change, please review it. But it's still a chore. So that's problem number 1. And problem number 2, as I mentioned before, is this whole issue of attribution. So you can see that in a certain page or group of pages...
Dan Shappir [00:44:56]:
I don't know, CLS has gotten worse. But, like, why? What's caused it? You know, what change have we made over the past 3 weeks? And there can be so many things. It might be somebody from marketing who just made some sort of change in the page layout or content. It might be, I don't know, an improperly configured CDN, or maybe, you know, you misconfigured your webpack or whatever and you're creating much larger bundles. Or maybe somebody in marketing added another pixel. It could be so many things that can have the impact. And this whole concept of getting good attribution is where Sentry, and the other RUM providers, at least the good ones, can provide so much value over what Google gives you out of the box.
Charles Max Wood [00:46:04]:
I was just gonna say, because you were listing examples, and when you mentioned adding a pixel, that's the one that I've seen come up. But the trick is that a lot of times that doesn't show up in your code, because you're using something like Google Tag Manager or something. And so, right, the changes aren't in the code base; the changes are in the tool. And so just be aware of what you're allowing to modify your page that may or may not be in the code.
Dan Shappir [00:46:34]:
Exactly. I mean, you know, you see that, Let's say you've pinpointed the the time of the degradation, and then you start doing, let's say, git bisect, and you find that nothing's changed because it's not it's not in your code. Like you said, it might be that somebody in marketing using Google Tag Manager just added another, you know, pixel that's really killing your performance. But again, totally not reflected in your code base.
Lazar Nikolov [00:47:01]:
I've also seen cases where a parallel happens. So it's not just from this point on, or, like, from this commit on, the performance gets worse. But, like, there are, like, 2 parallel lines, where one group of the users are having an okay experience, but then another is not so much okay. And it didn't have anything to do with pixels or stuff like that. It had to do with the state of the application. So I've seen, like, banners that, you know, change the LCP score because they're pretty big at the top, and some users take the time to hit the x button, but some don't. Right? So for some of the users, your RUM data is going to report an LCP that is based on the banner, but for the others, it's not gonna be that element. It's not gonna be the banner itself or the background or whatever it is, but it's gonna be some different part of the page.
Lazar Nikolov [00:48:02]:
So there's, like, also parallels, of of of data.
Dan Shappir [00:48:07]:
And I've seen situations where a lot of organizations do all sorts of AB tests.
Lazar Nikolov [00:48:13]:
Mhmm.
Dan Shappir [00:48:13]:
So they might be testing different, let's say, head title or header messages. And they had different header messages have a slightly different length. And then, due to wrap around and stuff like that, all of a sudden, a different piece of content is the largest contentful con, piece of content to be painted based on where you are in the AB test. So all these things can really drive you nuts trying to figure out, hey, what's actually going on? You know, why is it that all of a sudden my page shows poor performance even though I don't I didn't think that anything actually changed. And and again, attribution is really key, in in in being able to solve these type of types of scenarios.
Lazar Nikolov [00:49:04]:
Yeah. And I would say that Sentry does handle that in a really good way. For example, in cases where an AB test is happening, or in cases where some of the elements or state of the application can affect the web vitals, you can always, you know, add tags to the context. So all of the data that's being measured is going to be tagged, like banner shown true or false, or the user is logged in, yes or no, etcetera. So that, based on the tag itself, you can filter out specific scenarios of the state of the application, and then you can zoom in on that data and see what the, I don't know, web vitals and all the other metrics look like without these cases in mind.
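Mechanically, that tagging is a single call on the client; a sketch where the tag names, the banner selector, and the experiment helper are all invented for the banner/AB-test example:

```typescript
// Sketch of tagging the current scope so captured data can be sliced later.
import * as Sentry from "@sentry/browser";

// Hypothetical helpers standing in for real application state.
function isBannerVisible(): boolean {
  return document.querySelector(".promo-banner") !== null;
}

function getExperimentVariant(experiment: string): string {
  return localStorage.getItem(`exp:${experiment}`) ?? "control";
}

// Every error, span, and web-vital measurement captured afterwards carries these
// tags, so the dashboard can filter by banner state or AB-test variant.
Sentry.setTag("banner_shown", String(isBannerVisible()));
Sentry.setTag("ab_test.header_copy", getExperimentVariant("header_copy"));
```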
Dan Shappir [00:49:54]:
So it's not just desktop versus mobile or Chrome versus Edge. It's also, do I have this banner or don't I have this banner? Am am I on this? Am I on the a part of the test or the b part of the test and and stuff like that?
Lazar Nikolov [00:50:09]:
Exactly. Yeah. And you already have the tooling in the SDK and in the dashboard in Sentry to do all the slices that you need to do so that the data makes sense. Cool.
Dan Shappir [00:50:27]:
Yeah.
Charles Max Wood [00:50:32]:
So you said you haven't gotten into the customer data a ton. I'm kind of curious if there are case studies though where somebody, basically demonstrated, or, you know, had some kind of major shift in their performance or a major shift in their web vitals or a major shift in, hey, this was our moneymaker page. And it went it got faster, and then we made more money. I I I'm just I don't know which of those you might be aware of, but that would be cool to hear about.
Lazar Nikolov [00:51:02]:
Yeah. Totally. I mean, we internally and, like, you know, we always talk with the customers and there's sometimes we even do, events like workshops where we just gather some, you know, get our people in Zoom and we talk with the, with our clients or people who use Sentry. I remember I also did one, with a person who who used Sentry in a React Native application, and we talked about their experience, etcetera. So we sometimes we do publish these kinds of interactions and conversations either through video formats or or on our blog. So there are some stuff like that. Yeah.
Charles Max Wood [00:51:47]:
How do I find those?
Lazar Nikolov [00:51:51]:
Not sure if there's, like, one page, that lists all of them, but if there isn't, then that's a really good idea. And I'm gonna take it up and see if we can build it, but, I'm not sure. I'm not sure. Maybe, either the blog or or the YouTube. There's probably a playlist in the on the YouTube channel.
Charles Max Wood [00:52:10]:
Okay. Any of the rest, do you have questions or should we go to picks?
Dan Shappir [00:52:28]:
I've covered my bases. Lazar, is there anything else that you specifically want to add?
Lazar Nikolov [00:52:36]:
I don't think so. Yeah. I think we had a really good discussion around tracing and performance and stuff. Cool.
Dan Shappir [00:52:45]:
I have to say, just to conclude, that I'm a huge believer in in real user measurements. You're always surprised if you just go by synthetic measurements. You know, it's highly likely that you're not actually testing what your users are really experiencing. So and and so that's number 1. And number 2 is that production will always surprise you. Mhmm. And, I've seen things. So, just trying to rely on on on synthetic tests and simulated environments, that's that's just not enough.
Lazar Nikolov [00:53:25]:
Yeah. Have you seen the reports? I think it's a few years old, but there was a report that took all of the Lighthouse data, got, like, the top I-don't-know-how-many websites with their scores, and then mapped them out against their data from the CrUX database. And it turns out that, like, 43% of good Lighthouse scores don't even meet the minimum of the web vitals from the CrUX database.
Dan Shappir [00:54:01]:
That's interesting. I've I've often seen the reverse, like, pages that actually have good core vitals but seem not to have good simulated scores, especially for mobile because Google really simulates a low end device that's slower in many cases than what users often actually have.
Lazar Nikolov [00:54:22]:
Yeah.
Dan Shappir [00:54:23]:
But the reverse can also happen. I know that Rick Viscomi from Google, I think, actually even wrote an article about discrepancies between the synthetic tests and the real user measurements. So, yes, I think that the best option is really to use both. During the development cycle, you're using the synthetic tools to make sure that you don't degrade before pushing to production. But then you also have tools monitoring your production environment to catch all the things that slip through, because inevitably, things will slip through. And like we said, you know, changes might be a result of things that have nothing to do with the actual code. They might have to do with pixels, with images. I've seen, for example, cache headers for files misconfigured.
Steve Edwards [00:55:14]:
Mhmm.
Dan Shappir [00:55:15]:
Or I've seen situations where somebody accidentally turned off all the compression for all the files at the CDN. So you suddenly are downloading, you know, 5 times as much data as before. So I've seen so many reasons for poor performance that might not be caught by synthetic tools that only check during build time.
Steve Edwards [00:55:38]:
I guess you could say really really say based on the report that that was the crux of the issue. Right?
Dan Shappir [00:55:44]:
Yeah. I think you made the same joke when we had Rick Viscomi on the show.
Steve Edwards [00:55:50]:
Hey. Would it's if it works, you know, keep using it.
Charles Max Wood [00:55:54]:
Some people hadn't heard it yet.
Steve Edwards [00:55:56]:
That's Sorry.
Charles Max Wood [00:56:00]:
All good. Alright. Well, let's go and move on to the picks. Before we do that though, Lazar, how do people find you on the Internet if they have questions or wanna chat or whatever?
Lazar Nikolov [00:56:10]:
Yeah. I mean, I try to keep a consistent username, but let me just change it real quick so everyone can see it. It's @nikolovlazar, and this is how it looks.
Charles Max Wood [00:56:24]:
Alright. We'll also put that in the in the comments on our various streaming platforms. And I can type it and in the show notes as well. And that way people can look you up. All right. Well, let's go and do the picks. Steve, do you wanna start us off for picks?
Steve Edwards [00:56:47]:
Going for the high point early again. Okay. I can appreciate that. So, before I get to the dad jokes of the week: AJ, I had mentioned earlier that I thought of a pick. You inspired the pick when we were talking about your aquariums. Back in the eighties, there was this comedian, and he was the king of puns. He was, I guess you could say, one of my idols. His name was Kip Addotta, and he has a song called Wet Dream that is all about fish puns.
Steve Edwards [00:57:21]:
And, you know, it starts out how he was driving in downtown Atlantis, and his Barracuda wasn't working, so he was driving a rented Stingray. Anyway, there's a great line in there where he's trying to pick up some hot fish in a bar, and he asks her, what's your sign? And she says, aquarium. And he says, great, let's get tanked. But, anyway, it's an all time classic. If you wanna check it out, you can find it. There's actually a video, very early MTV video style, on YouTube, called Wet Dream. So, the dad jokes of the week.
Steve Edwards [00:58:03]:
Oh, Oh, I just lost them. Sorry. Stand by. Stand by. Did you know? It turns out that if you you can actually hear the blood flowing through your veins, you just have to listen very closely. Varicose veins, very costly. Sorry. I flubbed that one.
Steve Edwards [00:58:24]:
Along the lines of the fish puns: what do you call a shrimp that is always getting injured? He's accident prawn. And then finally, the other day, I went to see my doctor about this issue I've been dealing with, and he said, well, do you wanna hear the good news first or the bad news? I said, good news, please. He says, we're naming a disease after you.
Dan Shappir [00:58:54]:
Yeah. You don't want that.
Charles Max Wood [00:58:56]:
Yeah. Right?
Steve Edwards [00:58:57]:
Yeah. It's funny. There's, in the fire service, they say, if there's a drill named after you, it's because something really bad happened, which is generally true.
Charles Max Wood [00:59:10]:
Alright. AJ, what are your picks? You are muted.
Dan Shappir [00:59:14]:
I think his first his first pick is the unmute button.
Lazar Nikolov [00:59:18]:
Where is it?
AJ O'Neil [00:59:19]:
Okay. I found it.
Steve Edwards [00:59:20]:
Yes. It is. Yeah.
AJ O'Neil [00:59:24]:
Let's see. I was just, looking to find something to pick.
Steve Edwards [00:59:30]:
It's because he had no idea that we were gonna have picks today. Right?
AJ O'Neil [00:59:34]:
I had no idea. Total surprise. No idea. I was completely surprised. I was taken aback. Oh, gosh. Let's see. Well, this is not really a pick as much of a as a thing that happened.
AJ O'Neil [00:59:50]:
Okay. I got a I got a couple. I I saw the movie Being There. Steve, have you seen that movie?
Steve Edwards [00:59:59]:
Yes. That is such a weird movie. When I was in college, one of my Spanish professors had recommended it. She really liked it. I'm a huge Peter Sellers fan from, you know, the Pink Panther movies, the Inspector Clouseau stuff that he'd done. But, yeah, Being There is just... it's interesting.
AJ O'Neil [01:00:17]:
I I don't know that it's interesting. It's weird. So it's it's it's critically acclaimed. It's on it's on some lists of, you know, best movies you gotta watch because my wife and I, we like, we've watched all of the TV shows and movies that have come out in the last, I don't know, 10 years that are worth watching, and there's not very many of them. And and so and and, like, nothing new comes out. It's all yeah. It's, like, very, very meh. And so we, we decided to go backwards to find some older stuff, some stuff that people really felt like had meaning, was done well.
AJ O'Neil [01:00:50]:
And we landed on Being There, and the reviews on it were so high. The trailer looks so weird, but the trailer was a very accurate depiction of the movie. It really made no sense. There was no plot. It was basically depicting someone who is semi autistic slash echolalic, who just says a few things here and there and ends up, like, next to the vice president, essentially. But it was... I don't know. So it... I
Dan Shappir [01:01:28]:
don't know.
AJ O'Neil [01:01:29]:
I that's just something that happened. But something that, something that's good is I finished listening to the first book of The Expanse, and I have to say, overall, I do like the book better than the show. There for the first half of the book, I think I like the show better because the show gives a little bit more it it it like with The Hunger Games, you only get a certain perspective in the book. But then in the movie, they get to tell you things that are going on in the rest of the universe that you can't see from the the main character's perspective. In the book, they do have a couple main characters, but I guess where the second half of the book I I liked it better than the show was that it's just much more focused. The show progressively got worse and worse as they tried to just make the characters more extreme and just introducing more characters and then just having them yell louder and cuss stronger. And and that kinda got old, but the book just stays focused on the few characters that are the important ones that drive the story. It doesn't try to introduce a bunch of others.
AJ O'Neil [01:02:41]:
And and, I mean, like, the book's got plenty of, language and whatnot too, but it's not it's not the same where it's just like, oh, she's on screen now. Cue the f word. Yep. That was her line. Yep. Okay. Now we move on to the next character. Oh, his line is angry drunk.
AJ O'Neil [01:03:00]:
Got it. Okay. And we move on to the next character. In the book, at least for the first book, they develop the characters a lot better. And so I didn't think I was gonna listen to the whole book series. I don't know if I will, but I am going to pick up the second book at some point. I've got a whole backlog of Audible to do. But, and then the last thing is, again, not really a pick per se, but just an experience.
AJ O'Neil [01:03:29]:
So while we were on the show here, I did try to get Sentry self-hosted installed. I ran into some Docker issues because, you know, Docker. But I'm glad to see that it's available for self-install, and I'm glad to see that the install script can kind of resume when it hits a hiccup. I'll play with it a bit more and see if I can find, like, the right version of Docker to host on the right type of VM to actually get it to install, because I would like to see how that works. But the documentation looks pretty decent. I do wish that it had, like, just the scripts to run the installers without having to deal with Docker. Just say, okay, like, you gotta use Debian for this.
AJ O'Neil [01:04:16]:
But if you use... like, just tell me what operating system Docker would have used. Let me install that operating system, and let me just, like, run the scripts without having to deal with Docker. Because Docker, man, it's such a pain in the butt. You know? But I'm glad to see that it's there. And I was trying it out, and the documentation looks good. And like I said, I like that it seems to gracefully restart. As you hit one issue, you can solve that and then restart the install again, and it'll pick up where it left off. And that's always very nice.
AJ O'Neil [01:04:48]:
So, plus 1 to the Sentry self-hosted on that, and, hopefully, I'll get it all the way. And that's the end of my ramblings for today.
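For anyone who wants to follow up on what AJ describes, this is roughly what pointing an app at a self-hosted Sentry instance looks like once the install is running. It's a minimal sketch, and the DSN and hostname below are made-up placeholders rather than real values; a real DSN comes from the project settings of your own instance.

```javascript
// Minimal sketch: reporting to a self-hosted Sentry instance instead of sentry.io.
// The DSN below is a hypothetical placeholder; copy the real one from your own
// instance's project settings.
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@sentry.example.internal/1", // placeholder, not a real DSN
  tracesSampleRate: 0.1, // sample 10% of sessions for performance tracing
});
```

From there, errors and performance data go to whichever host the DSN points at, which is what makes the self-hosted setup interchangeable with the SaaS one from the SDK's point of view.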
Charles Max Wood [01:04:58]:
I just wanna chime in on the Expanse stuff. Also, I'm just gonna put it out there: I very much prefer the Docker setups to the other kinds of setups. But The Expanse in particular: you're going to find through the whole TV series and through the whole book series that your observation mostly holds up for all of the other books. A couple of things that bothered me a little bit: some of the books are spaced out over years. Right? So you have one book, and then a bunch of stuff happens, and then the next book starts. And a lot of times, there's a novella that fills in the gaps. Some of the novellas aren't as good.
Charles Max Wood [01:05:42]:
But the other thing is that those gaps and the things that happen in those gaps are kind of important. And so the way that they tried to shoehorn some of the plot points to keep it more or less continuous didn't really work. And so when they ended the TV series, they actually left off the last book and, you know, a bunch of other stuff that I kinda wish they'd done. So, anyway, but overall, they did an excellent job on the TV series. The other thing to keep in mind with the TV series is I think the Sci-Fi Channel did the first 2
Steve Edwards [01:06:15]:
3, season 3.
AJ O'Neil [01:06:16]:
And then Amazon picked it up.
Charles Max Wood [01:06:18]:
Amazon picked it up. And when Amazon picked it up, it got better. So
AJ O'Neil [01:06:23]:
I know there was 1... like, the first season of the TV series was pretty good, and then either the 3rd or the 4th one was pretty good. But that... yeah. I think it was the second one. It was just, like, out of nowhere. It's like they just... I don't know if the book's that way. I don't know. Is the book like a completely different unrelated story for
Dan Shappir [01:06:42]:
the second book? No. I don't I don't remember for sure, but, no, I don't I
Charles Max Wood [01:06:43]:
don't remember it being that. So
AJ O'Neil [01:06:48]:
Yeah. Because they go into this thing about Mars and then the humanoid aliens. And then, like, that storyline is dropped, and it's never picked back up again. So I don't know if, like, the book has that or if that was just them padding the TV show.
Charles Max Wood [01:07:04]:
The continuity... I remember the second season being mostly based on the second book. But, yeah, the continuity in the books is really, really tight.
AJ O'Neil [01:07:13]:
So Okay. Cool.
Charles Max Wood [01:07:16]:
I'll
AJ O'Neil [01:07:16]:
look forward to the second one
Steve Edwards [01:07:18]:
then.
Charles Max Wood [01:07:18]:
Yeah. Alright. Dan, what are your picks?
Dan Shappir [01:07:24]:
Okay. So I have a couple of picks today. My first: since we've been discussing performance and the impact that performance can have on the success of a website, there's this excellent website for web performance that Google created. Well, web performance and web development, called web.dev. And they have a section there, web.dev/case-studies. We'll put the link in the show notes, obviously. It's got lots of case studies of companies that improved their performance, or certain aspects of their performance, and the benefits they've gained as a result of these improvements: actual numbers and actual testimonials and figures and stuff like that. So if you need to prove to your, let's say, management why it's worthwhile investing time, effort, maybe money into improving the performance of your website or web application.
Dan Shappir [01:08:28]:
You know, you can go there and you'll find a lot of relevant content. So I think this is a useful resource in the context of what we've been talking about today. So that would be my first pick. My second pick: I've mentioned that we've been clearing up our library, and I found various books that I haven't read in a while and was deciding which ones to keep and which ones to let go, basically donate, and also which ones to reread. And I think I've mentioned before that I'm actually reading a series of books called The Saga of Pliocene Exile. It's a series of books from the eighties written by Julian May. She was a sci-fi slash speculative fiction author. It's kind of an interesting work in the sense that it's kind of midway between science fiction and fantasy, in that it's supposed to be science fiction based, but it gives a lot of fantasy vibes.
Dan Shappir [01:09:35]:
But it's, from my perspective, an excellent series of books. There are 4 books in the series, and since it's from the eighties, they're all written. So you don't have to worry about, you know, an incomplete series of books. It goes from start to finish. They're pretty thick. Lots and lots of characters. Lots of character development and character interactions. And it's just a great series of books.
Dan Shappir [01:10:00]:
Lots of action and adventure. But also, she really, like, fleshes out the various characters. One complaint that I've heard about the books, and I can see where it's coming from, although I don't necessarily agree with it, is that the depiction of the LGBTQ, let's call it community, or people that identify as such, especially trans people, is not ideal. It may have to do with when the books were written, but I'm just putting it out there in case it might impact the decision of some people to read it. As I said, I think the books are really good. But, again, this is my own personal opinion. So that would be my 2nd pick. And I mentioned them before.
Dan Shappir [01:10:52]:
It's just that it's, like, a long series of books, so it's taking me a while to read through them. And, by the way, I can't really deal with audiobooks. I have to actually read the book. I don't know. When somebody is reading it out to me, it kind of feels weird to me. I don't know. Maybe it's just me, but that's the way it is.
Dan Shappir [01:11:15]:
So that would be my 2nd pick. My 3rd pick: I'm also very much a history buff, or history fan, especially of ancient history. And given the fact that I live in Israel, also the history of the Middle East, which has a lot of history and a lot of ancient history. And I found this series of lectures called The Rise of Ancient Israel with Professor Israel Finkelstein. He's a professor of archaeology at Tel Aviv University. He's done some of the most significant archaeological digs in Israel, certainly in recent years. And it's a very long series of conversations that he has with one of his students, who actually made this, basically recorded this series. There are 21 discussions, and they are something like 40-something minutes long each.
Dan Shappir [01:12:18]:
And they talk about, you know, the evolution of the ancient kingdoms of Israel, of, you know, King David, and before and after. It's really interesting. And if you're into that... I do have to caveat this with the fact that he takes the Bible as a serious source of historical information. But when there's a conflict between the story in the Bible and the archaeological findings in the field, he will side with the archaeological findings. Or, put another way, he sees the Bible not so much as a historical book as a, you know, religious and ideological book that is based on historical events. So
Steve Edwards [01:13:21]:
Considering the archaeological veracity of the Bible is something I read up on quite a bit too. That's sort of an... Oh,
Dan Shappir [01:13:27]:
there's a lot of historical veracity. He's not denying it. But, again, when there are conflicts, and there are some conflicts... Potentially. But, you know, the thing that he likes to say a lot about archaeology is that you can only go by what you have found. And that maybe tomorrow, you'll find something new that completely changes your point of view. Yep. That's very true.
Dan Shappir [01:13:58]:
But again, you can only go based on what you found. And while a lack of evidence is not evidence of... well, how does it go?
Steve Edwards [01:14:07]:
I know exactly what you're saying. Yes.
Dan Shappir [01:14:09]:
But still, you know, if there's no evidence for particular events where you'd expect evidence to be abundant, it does say something, or at least it raises some significant questions. Anyway, I highly recommend it. It's an excellent conversation. I'll put the link to the entire playlist. If you're into that, very highly recommended. Cool. And my final
Charles Max Wood [01:14:36]:
That sounds really fascinating. Yeah. There goes my week.
Steve Edwards [01:14:40]:
Yeah. Right.
Dan Shappir [01:14:42]:
And, yeah, I would be curious, you know, as to what you get from it. It's really, really informative. And my final... I won't call it a pick; I'll call it a mention. Today is Holocaust Memorial Day in Israel. And this one's especially hard because we still have 132 hostages being held in Gaza by Hamas. So it's kind of... I wouldn't call it the modern-day Holocaust.
Dan Shappir [01:15:15]:
It's not quite up there, but it makes everything harder. And, you know, we don't even know how many of them are still alive and how many of them have been murdered or tortured to death. Mhmm. And, you know, you swing between hope and feelings of hopelessness, and it's really hard. So, anyway, those would be the picks and mentions that I wanted to make for today. And over to you, Chuck.
Charles Max Wood [01:15:50]:
Alright. I'm gonna put out my picks and then we'll let Lazar do his. So I always start with a board game. In this case, I'm doing a card game. This one's called Hanabi. Hanabi is the Japanese word for fireworks. And, the game's pretty simple. You are dealt a hand of cards.
Charles Max Wood [01:16:10]:
It's usually... it's always 4 cards. You hold them facing everybody else. You don't know what cards you have. You can see what everybody else has, but you can't see what you have. And then what you do is you can either play a card. So if you know what you have, or you think you have a good idea of what you have, then you can play the card. And what you're trying to do is get stacks of all the colors to go from 1 to 5. And there are three 1s, two each of the 2s, 3s, and 4s, and then one 5 of each color.
Charles Max Wood [01:16:44]:
And so, you can play a card, or you can discard a card. The way we always play is we always let everybody know I'm discarding off of the right-hand side of my hand. Right? So if people don't want you to get rid of it, because it's a 5 (and if you discard a 5, you lose, because you can't play it then, right), then people will clue you: hey, this is... And so that's the last one: you can give a clue, and a clue is "these cards are white" or "these cards are yellow," or you could do "these cards are twos." Right? And so, anyway, then you have to kind of keep track of where stuff is in your hand without being able to see it. And so, anyway, it's a super fun cooperative game.
Charles Max Wood [01:17:27]:
I like it better than a lot of the other cooperative games because, like, when I play cooperative games, there's one in particular that, when I play with my wife, it's me and her and anyone else who's playing, and she's telling us all what to do. And I just don't love playing a game where I'm watching somebody else play my game. So, anyway, this one's different, because you can't do that, because you are missing information. So yeah. And we usually chitchat while we're doing it. You just have to be careful, because if I'm holding my cards up and I know that I have a particular card, I may have inferred that from the clues I got and the fact that I can see the other players' hands. And so if you know what a card is, you can always say, "I know that this is the 4 of white," because you might've inferred that from the fact that they told you it was a white card and, you know, from the discard pile and the other hands that, right, it can't be anything else.
Charles Max Wood [01:18:30]:
So anyway, super fun game. You can buy it; it's like $10 on Amazon. And then the other picks I have, I have a couple of them. So one of them is a movie that my wife and I saw last week or the week before. It's called Escape from Germany. And it's a true story about Latter-day Saint missionaries that were in Germany when the war started. And so, you know, as you can imagine, they were somewhat hostile toward Americans. And they were also hostile toward missionaries, because they were hostile toward certain kinds of religion.
Charles Max Wood [01:19:20]:
And so, anyway, it's just a series of miracles, how they got all those missionaries out of Germany. And I really, really enjoyed it, so I'm gonna pick that. It's done by T.C. Christensen, who's the guy that did The Other Side of Heaven. And so if you like that movie or that brand of movie, then definitely check it out. And then the last pick I have, besides telling you to go check out javascriptgeniuses.com, is... so Brandon Sanderson, last year, he put up a YouTube video and a Kickstarter and basically said, I was locked in my house during the pandemic, so I did what I do: I wrote all these books, but I didn't tell anybody about them. And so it's a series of books he's called The Secret Projects.
Charles Max Wood [01:20:12]:
And I listened to the first book on Audible. It's called Tress of the Emerald Sea. And I'll put an Amazon link to the Audible books. You can buy it with a credit. But this one's part of the Cosmere. So he has a universe he writes a number of his books in, and you can kind of see some of these worlds converging. Right? Beginning to converge, because you have crossover with some of the characters. Usually it's minor characters, not major characters.
Charles Max Wood [01:20:47]:
But anyway, this one is in that vein. The narrator of the book is Hoid, if you've been following along with Brandon Sanderson's stuff; he wrote the whole thing in Hoid's voice. And Hoid is one of the main characters in this book. But it was a fun book, a really fun book to listen to. So if you're into audiobooks, or if you want to... I guess you could just pick up a copy of it. For the Kickstarter, he mailed out a book every month along with a bunch of other stuff. But now you can go and you can get the books without being part of the Kickstarter.
Charles Max Wood [01:21:23]:
So a year later, right? So last year, if you backed the Kickstarter, you got this book in January, and you got the next one in February, March, April. And now the first four books are out, because we've gone through April. And so I'm assuming that the 5th book in the secret book series is gonna come out pretty quick here, because we're into May. So anyway, I really, really enjoyed that book. So, anyway, those are my picks. Alright, Lazar, what are your book... what are your picks?
Lazar Nikolov [01:21:56]:
Question: do the picks need to be non-technical, or
Charles Max Wood [01:22:01]:
They can be technical or nontechnical.
Lazar Nikolov [01:22:04]:
Okay. Because I only got technical. I'm a I'm a That's
Charles Max Wood [01:22:06]:
it's all good. I'm I'm always into new stuff, and I'm starting to get into AI. So I'm gonna start picking some of that stuff too.
Lazar Nikolov [01:22:13]:
Oh, cool. Yeah. Yeah. So I didn't know that I needed to prepare picks, but something on top of my mind: I'm gonna mention Sentry, of course. Check it out. The free tier is generous enough for you to get started, so you should check it out. But when it comes to picks, I got one interesting one, and that is a project by Joan León.
Lazar Nikolov [01:22:38]:
I'm sorry if I'm butchering your name, but it's basically, and I'll drop the links here so you can check it out, it's basically a collection of web performance snippets that you can install... not install, but move into your web browser, so you can, you know, check out what the LCP element is, or whatever it is that the snippet provides. There's also INP. There's also a whole Loading category. But it's basically for dev time, before you commit what you have: if you wanna check out what the performance looks like on your machine, you can check out all of these beautiful snippets. So these are for dev time. One more new thing that I came across, and Henri Helvetica told me about this, is the RUM Archive. It's basically like CrUX, but it's data taken from Akamai, and it's put together in a database where you can query it.
Lazar Nikolov [01:23:44]:
And I haven't been playing with it too much, but basically, yeah, you can use this database to see RUM data, and you can split it by different frameworks, etcetera. So I can plug that in. And then also, I've been playing with a project called Aceternity UI. It's a collection of UI components for React built with Tailwind and Framer Motion, and they look really good. And I tried using them, but some of them are really making an impact on the performance. So I'm looking into these components right now and figuring out how I can, or if I can, make them a bit more performant. Right? So instead of using Framer Motion, can we do that with plain CSS, so we're not introducing or shipping too much JavaScript to the client? So these are the things that are, you know, top of my mind.
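The snippets Lazar is describing are along these lines; the following is not code from that project, just a minimal, hand-written illustration of the idea, using the standard PerformanceObserver API to log the current LCP element from the browser console.

```javascript
// Minimal illustration (not the project's code): paste into the browser console
// to see which element the Largest Contentful Paint currently points at.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const latest = entries[entries.length - 1]; // most recent LCP candidate
  console.log("LCP (ms):", latest.startTime, "element:", latest.element);
}).observe({ type: "largest-contentful-paint", buffered: true });
```

The collection he mentions bundles snippets like this for other metrics and loading checks as well, so you don't have to write them by hand.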
Lazar Nikolov [01:24:52]:
And I would have been more prepared, but I'm sorry.
Charles Max Wood [01:24:56]:
No. It's all good. Thanks for coming. This was a lot of fun. It's good to kinda dive into some of these tools that a lot of people use. I also found a lot of applicability for people who are using, like I said, things that are like Sentry but are not Sentry. Some of those features are there. Some of them are not.
Charles Max Wood [01:25:16]:
But, yeah, I think I have a much better idea in some of these areas, especially on the APM, the performance side, of what I can grab. So thanks for jumping in, and until next time, folks, max out.
Charles Max Wood [00:01:50]:
Wow. Things I never needed to I mean, we also have Dan Shapiro.
Dan Shappir [00:01:54]:
Hello. From a warm and sunny Tel Aviv, which somehow didn't prevent me from getting this really bad cold for the entire week.
Steve Edwards [00:02:02]:
You do know that getting a cold has nothing to do with temperature. Right?
AJ O'Neil [00:02:06]:
Yeah. That's not true.
Dan Shappir [00:02:07]:
That's what I think. It has to do with changes in the weather, I think. Like, certain changes make you more susceptible.
AJ O'Neil [00:02:14]:
Yeah. It's it's stress to your body. When the temperature swings or when you're cold for a long period of time, your immune system dips. So bacteria and viruses that are all around you that weren't affecting you then suddenly affect you. Also also, I became allergic to the earth, by the way, since since last time we spoke. I have allergies now. First time.
Dan Shappir [00:02:37]:
The entire planet?
AJ O'Neil [00:02:39]:
Yeah. I think so. Yeah. I wake up in the morning. Well, actually, it hasn't been so bad. It is, like, 3 weeks or so. I wake up in the morning. My eyes were itchy.
AJ O'Neil [00:02:47]:
I'd sneeze, sneeze, sneeze, sneeze. Oh, it was terrible.
Steve Edwards [00:02:50]:
Well, it's getting close to hay fever time, which is what I get. Although here, it's been so wet that it's keeping everything down, but this is the best time of year that I get hay fever.
Charles Max Wood [00:02:58]:
I I get that too, except that, the rain that we get is like the light rain that you get. So it's just enough
Dan Shappir [00:03:05]:
to make
Charles Max Wood [00:03:05]:
everything bloom like crazy. Yeah. So I haven't had a cold, but I've had a lot of that what AJ is complaining about for the last few weeks. So I just I live on my Zyrtec. I have it at my desk because I just I just take it as soon as I said Yeah.
Steve Edwards [00:03:20]:
I know what you mean.
Charles Max Wood [00:03:21]:
Anyway, I'm Charles Max Wood from Top End Devs. This intro is taking forever, so I'm just gonna skip all the pleasantries. Go check out javascriptgeniuses.com. And we have a special guest this week and that is Lazar, Nick Nikolaev. Did I get anywhere close?
Lazar Nikolov [00:03:38]:
That was alright. Yeah. It's Lazar and Nikola. And thanks for that's what
Charles Max Wood [00:03:42]:
I meant
AJ O'Neil [00:03:42]:
to say.
Steve Edwards [00:03:45]:
You were close. You got the syllables right just in the wrong pronunciation.
Charles Max Wood [00:03:50]:
Yeah. Anyway, we we brought you through the century.
Steve Edwards [00:03:54]:
Well, Zara, after listening to all of that, or was that just so entertaining that you're on the edge of your seat? No. I'm I'm alright. Okay. Good.
Charles Max Wood [00:04:02]:
Yeah. All the back and forth. Matt Henderson from Sentry recommended that we have you on talk about web performance stuff and maybe some of the stuff that Sentry provides. And so, yeah, do you wanna just, fill us in on who what else we need to know about you? We had a long discussion about Macedonia before you.
Lazar Nikolov [00:04:20]:
Yeah. Jumped in.
Charles Max Wood [00:04:21]:
So
Lazar Nikolov [00:04:21]:
yeah. Well, thanks, Matt, for, recommending me. Thanks, bud. But, yeah, I'm Lawson and Nikolaev. I am a part of the DevRel team here at Sentry, and I'm all about web performance. That's my main focus.
Charles Max Wood [00:04:37]:
Very cool.
Dan Shappir [00:04:37]:
Cool. I I'm really I really love web performance. I dig it. Right. Yeah.
Charles Max Wood [00:04:44]:
So so I'm a little curious just as we get into this, you know, and and Dan fills us in periodically. It's like, hey, there's this thing. But but what kinds of things are you focused on at Sentry as far as making people's web apps more performant or more user friendly or, you know, whatever it is that, you know, you're you're working on these days in the web performance arena?
Lazar Nikolov [00:05:08]:
Yeah. So, I'm focused on the educational parts, and not necessarily just, Century educational part, but just, like, in general, you know, how to how to develop good developing, habits, so your performance is in order, basically. That's that's all I do.
Dan Shappir [00:05:28]:
But So you're what? Yeah. Okay. If maybe I I I do want to direct us first at slightly a different direction because we have been speaking about performance a lot on this podcast of late.
Steve Edwards [00:05:40]:
Mhmm.
Dan Shappir [00:05:40]:
But but we've not spoken about Sentry in a while. I mean, we've had some people from Sentry, like, I think, like, year or 2 ago. But maybe some of our listeners somehow are not familiar with who Sentry are or what Sentry is and what it's bringing to the table. So maybe we should actually start a little bit talking about Sentry before we then go talk specifically about Sentry and performance.
Charles Max Wood [00:06:06]:
Yeah. Sounds good.
Lazar Nikolov [00:06:10]:
Should I go for it?
Charles Max Wood [00:06:11]:
Yeah.
Lazar Nikolov [00:06:12]:
Awesome. Yeah. So, for those for those of you who don't know, Sentry is an, error monitoring and APM solution for any of your web apps and mobile apps and, I don't know, desktop apps and anything, basically. So what it does is, it offers, I would say that the like the best in class, error monitoring. Every time something wrong happens in your app, you get an email, you get a Slack message, you get a Teams message if you're there, you get a you get paged. If you're on PagerDuty, you get a ticket in Jira, etcetera, etcetera. You just get annoyed basically from all the sides. And that's just for the error monitoring aside from just telling you, you know, your app's on fire.
Lazar Nikolov [00:06:56]:
You also get a a whole bunch of cool, and useful information like hardware info, software info, what the user is, if, you know, you have configured tracking users, etcetera. Yeah. All you need in in order to fix the bug. So that is the error side and there was also the performance side or just in general, APM because there's more than just performance. You get monitoring for web vitals, networks, resources, Traces, which are becoming, like, a central part at Sentry right now. And, yeah, it's it's it's a tool that you install in your application. Whatever the SDK is, we pretty much, have support for all of them. And, it lets you know and, when something bad happens, and it keeps your performance in order.
Lazar Nikolov [00:07:56]:
That's basically what Sentry is.
Dan Shappir [00:07:58]:
So first of all, you use the acronym APM. Can you explain what that means?
Lazar Nikolov [00:08:03]:
Yeah. It's, application performance monitoring. Cool. Now
Dan Shappir [00:08:08]:
if we if yeah. It's continuously what? Sorry.
Lazar Nikolov [00:08:12]:
It's basically continuously measuring, and reporting, metrics such as web vitals and what else? Yeah. Like like I mentioned, resources that your website is pulling in, network request that, you know, the app is triggering, etcetera.
Dan Shappir [00:08:36]:
Now if we start with with the legacy stuff sorry, Chuck. You want you were saying to say?
Charles Max Wood [00:08:41]:
I I was just gonna say, you know, when you asked what it measured, I mean, we keep hearing web vitals. We talked a lot about that, but I've also seen them track, other things. Right? So, Core Web Vitals is usually used for SEO. You can go listen to the handful of episodes we've done, but I'm of a I've also seen it track, how long certain function calls take or, on the back end, how much time it's taking to access the database and execute queries and things like that, or how long this request took to to a back end or an API or things like that. So it's it's more than just web vitals. But a lot of the business folks, that's what they're gonna care about. They're gonna care about, how performant is it and does it match the web vitals so that we can show up higher in the list on Google.
Dan Shappir [00:09:29]:
Although to be fair, and I and I keep saying that. I mean, on the one hand, it's a really good thing that Google made, performance or core vitals a ranking signal because it really pushes the entire industry to to improve that aspect of of, you know, web websites and web applications. But the reality is that the impact on ranking in most cases is not that massive, to be honest. It's it's often considered to be a tiebreaker rather than a decider. Like, the big aspects of ranking are still things like, you know, content. Content is king. So whether it's relevant content and also authority. Like, you know, if you're going to ask something related to health or medicines, then the CDC or something like that will come up first and for good reasons.
Dan Shappir [00:10:26]:
Or if you ask something about the news, then CNN will likely appear high in the search results even though their performance is historically atrocious, because it's more about those aspects. But Right. First of all, it still is important because it can be a tie breaker. But more significantly than that, it actually has significant impact on user experience once they're in the page or in the website. And, lots of research has shown that when performance is poor, bounce rate is high. And conversely, that when performance is good, then conversion is high. So there's definitely a significant motivation to improve that. But, again, I want to pull us back a little bit because, again, Sentry has historically been known for the error tracking and monitoring.
Dan Shappir [00:11:19]:
I think that's still what most people think about
Steve Edwards [00:11:22]:
Mhmm.
Dan Shappir [00:11:23]:
When they think about Sentry. So I do want to talk a little bit about this before we dive into performance if we can. Per my understanding these days, Sentry is kind of full stack. It does both the front end and the back end and provides a holistic view of all this information together. Is that a correct summation?
Lazar Nikolov [00:11:45]:
Yeah. Basically, when you have multiple projects in Sentry and, you use Sentry's SDKs to connect them, you automatically get a distributed trace. You can start looking at the front end and then move on to the back end and database and then go back whatever your operation flows are configured.
Dan Shappir [00:12:04]:
And, again, per my understanding, Sentry is something that I can install effectively for free on premises, or I can use kind of your hosted server, which then I have to pay for. Is that correct?
Lazar Nikolov [00:12:19]:
Yeah. Yeah. It's, free. So, self hosted. Yeah.
Dan Shappir [00:12:26]:
So when I self host it, it's free and it think it's also or at least partially open source?
Lazar Nikolov [00:12:34]:
It's it's like it's source available. Yeah. So the license is not MIT. We came up with our own, but that's just for, you know, yeah, like protecting other businesses from, you know, piggybacking on on Centurion, repackaging it. Mhmm. But it is yeah. It's source available.
Dan Shappir [00:12:59]:
So if I have, like, an error in my JavaScript, you know, something that would would get written to the console or an Ajax call that fails for some reason, an uncaught promise, rejection and uncaught exception, so on and so forth. All of these things are collected and then exposed to the site owner through the Sentry management console effectively. If I if I, again, understand correctly.
Lazar Nikolov [00:13:33]:
Yeah. So, like, every SDK, has a DSN configured, which is basically a link to where your instance is running. If you're self hosted, you're gonna have your own URL tell there. So that basically tells the SDK whatever you capture and and, you know, measure, send it over there. Yeah. So when you're on premise, the data is on your, servers.
Dan Shappir [00:13:58]:
Understood. And one of the things that I recall that I really liked about Sentry was the fact that it was smart enough to group errors together. Like, one of the problems with a lot of these monitoring solutions is that you could literally drown in a sea of errors because let's face it. You know, so much stuff is getting collected and reported. So you you don't want to, you know, have to sift through tons and tons of of logs and errors and and stuff like that. You want the system to aggregate the results and then just show you, you know, this is something that's important. This is something that you should look at. So that's one thing that I recall about being really powerful with Sentry.
Dan Shappir [00:14:44]:
And another thing that I recall being really powerful with Sentry is the amount of information that was exposed that could help you. Okay. You know, there's an let's say, an, an uncaught prom a promise rejection. Why? Like, what is this promise? What what's it's what's the value that it's trying to get? So all this information is also surfaced to you as the, website or web application owner.
Lazar Nikolov [00:15:16]:
Yeah. Totally. I mean, you you get all of those information attached to the issue itself to we don't call them crashes or anything. We just call them issues. So, like, it's it's all there. Yeah.
Dan Shappir [00:15:30]:
So how then did you actually pivot into performance? I mean, if you had this focus on errors and collecting errors and reporting errors, where's this sudden focus on performance? Where did it come from?
Lazar Nikolov [00:15:47]:
Not sure to be honest, because I I think there was, like, a few years, before I started in Sentry, but, I think it was natural. We we got the we had errors. We had tracing. And with tracing, you can, identify all the performance bottlenecks. There's also profiling, which taps into the whole, like, connects into the whole tracing and and performance, topic. So I'm not sure, like, what is the exact moment that prompted Sentry to start looking into performance. But, we just we we just know that performance is super important.
Dan Shappir [00:16:31]:
Would you say tracing? What exactly do you mean? Yeah.
Lazar Nikolov [00:16:35]:
I mean, we, we mean, like, tracing, the operation flow, for a specific operation, that can be, I don't know, a checkout flow or a login flow. So as, functions executes and, you know, API calls get executed, tracing is basically putting little bread crumbs along the way and then capturing the whole, the whole path of the operation flow. So you can see what where the operation flow went in in, I don't know, what microservice and how long it took for each of the functions or whatever you have defined or instrumented.
Steve Edwards [00:17:18]:
So sort of like a stack trace then is with variation?
Lazar Nikolov [00:17:23]:
Yeah. Stack trace is like a the the snapshot of the of the stack at that specific moment, but tracing is more of a, it's like on a on a on a timeline basis where the operation went. So for example, if we start at the the front end, we could have some spans and spans are how we spans are the things that we use to define the trace, right? So we, we basically create a trace and the SDK automatically does it for you. But let's say we just create a trace, we get a trace ID, and then we sprinkle around spans which are connected to the trace, but those define what happens at a specific moment. So instead of just console lock, I am here or console lock, we reach this point, you're basically creating spans, which also, have a starting point and ending point. They have tags. You can put tags in them, etcetera. And those don't just stop at the, I don't know, front end.
Lazar Nikolov [00:18:36]:
You could send the trace header to the back end and continue the trace on the back end. So then when when you want to debug how, a certain operation behaved or, like, what happened, you see one timeline that, combines the data from the front end and the back end or your back end. If your back end is a microservice architecture, then you'll just have the data from all of the microservices there on one timeline, let's say.
Dan Shappir [00:19:05]:
So a question about that, is it kind of like open telemetry traces?
Lazar Nikolov [00:19:12]:
Pretty much. Yeah. And we also have, an Otel adapter. So you could use the Otel instrument instrumentor to get the data and then send it to Sentry as well.
Charles Max Wood [00:19:25]:
There's What is OTEL? Sorry? OTEL. What what is that?
Lazar Nikolov [00:19:30]:
Alright. This old OpenTelemetry.
Charles Max Wood [00:19:32]:
That is Oh, okay.
Lazar Nikolov [00:19:33]:
Yeah. That's o an open source standard, I would say. There's also
Dan Shappir [00:19:39]:
Standard and implementation, I think.
Lazar Nikolov [00:19:41]:
Yeah. It's both. And implementation and tooling, around the instrumenting parts of your application.
Dan Shappir [00:19:47]:
Basically, think about if if you want to, like, have a, like, a mental image of what is meant by traces as Lazar describe them. It's kind of similar kind of similar to what you see in the, performance, tab of the Chrome Dev Tools. Like so it's kind of or, like, what is often known as a flame chart. You kind of see each of those traces are like one of the, levels in the flame chart. So you have, like, a span that starts and ends when a certain operation takes place from the beginning to the end of that logical operation. But we within that, you have span. So you could think of a function. So you have a span for the execution time of that entire function, but it calls sub functions and they have their own spans within that span.
Dan Shappir [00:20:39]:
So you can, like, go through, like, the execution. So it's like, stack trace over time as it
Charles Max Wood [00:20:47]:
were. Okay.
Dan Shappir [00:20:50]:
And and it's now you collect this for, like, every session or, like, is it sampled? Or how how does it work?
Lazar Nikolov [00:20:58]:
Yeah. There is a sampling configuration, already put in into the SDK so you can you can play around with the values. It's basically just from 0 to 1. That's how Sentry configures it. So if you want 10% of your sessions to be sampled, then you just type in 0.1. So you don't And
Dan Shappir [00:21:16]:
the sampling doesn't adversely impact the performance of the session? Like, the user doesn't notice it that it's being sampled in this way?
Lazar Nikolov [00:21:24]:
I wouldn't say so, but depending on on on you. For example, if you're instrumenting everything, all the, you know, important and non important bits and pieces in your application, then you'll probably see some performance overhead. But that's yeah. That that's all in your control, basically.
Dan Shappir [00:21:45]:
Cool. So we were talking about the fact that you have this tracing mechanism. And from that, you kind of moved into also performance monitoring.
Lazar Nikolov [00:21:57]:
Yeah. Yeah. Because the trace contains everything. Right? The the trace knows where, the application went or the operation went and how long it took so we can identify performance bottlenecks. But then what we do at Sentry as well is whenever an error happens, we attach it to the trace. So you can see on the timeline, on the flame chart, you can see where an error happens and and how the operation behaved after, you know, the error got triggered.
Dan Shappir [00:22:30]:
Also, I assume how you actually got to that point of the error because that's often the thing you most want to know.
Lazar Nikolov [00:22:37]:
Exactly. Yeah. The the spans will give you the information about what data, you know, was being handled and basically what happened leading up to the error.
Dan Shappir [00:22:51]:
So do you have, like, something like I I'm like, I can I literally see things like what, you know, like a, sort of like, what the user was actually even maybe seeing on the screen or stuff like that when the error happened? Or
Lazar Nikolov [00:23:08]:
Yeah. We do. It's it's like a different, sub product. It's called session replay, and it's just an integration, that you need to install in your client facing applications. Right now it's just web, but we're coming up with, support for mobile as well. So it's like DOM recording. It's not a screen recording, but it's basically records the the DOM and sends all the yeah. In a sense, basically, the recordings along with the whole, you know, monitoring and capturing.
Dan Shappir [00:23:41]:
Oh, that's enough. I mean, it don't does represent what the user actually sees. Yeah. And
Lazar Nikolov [00:23:47]:
that's all done. Like, you you also get the net network request. You also get the console log, console error or whatever you're outputting into the browser's console. It's it's like literally your you have access to the person's computer.
Charles Max Wood [00:24:05]:
That one one question I have related to that is a lot of times there's information, be it personally identifiable information or passwords or things like that. I I'm assuming that you can configure Sentry to screen all that out. So does it show up in the
Lazar Nikolov [00:24:20]:
Yeah. Yeah. Sentry comes we're scrubbing that out, automatically, and and that happens on in in both sides. For example, the SDK itself does some scrubbing, so there is no PII going through the wire. But then also before we start processing the data, there's a there's a a a thing called relay, which takes in all the data and does additional scrubbing on the server side as well before we put it. And that's for the, I I think the self hosted one has it, but then also the the SaaS one also supports the scrubbing of the PII. But it's configurable. For example, if you're if you're tracing if you implement tracing and you append the you explicitly append PII in through the context or tags, whatever it is, is going to be sent.
Dan Shappir [00:25:15]:
So you need to think about what it is that you're actually collecting. That's the bottom line.
Lazar Nikolov [00:25:20]:
Yeah. Like, out of the box, if you don't explicitly send data, it's not gonna be sent. But if you're, you know, if you're configuring the context, of the SDK to include the logged in user or whatever data you want to, you know, attach, to the to the context, then it's going to be sent. But out of the box, it's, nothing gets sent, basically.
Dan Shappir [00:25:45]:
So when you added RAM capabilities and by the way, RAM in the context of performance is really user measurements, which means it's data from the field rather than synthetic data created in simulated environments. You mentioned that you obviously collect the the core vitals, which these days are LCP largest content for paint, CLS, commodity layer of shift, and the INP, which is in, input.
Lazar Nikolov [00:26:17]:
Interaction to next page.
Dan Shappir [00:26:19]:
Interaction to next page. Exactly. I've been, like I said, had a head cold. I'm still recovering. But what other performance data are you collecting that's relevant to, identifying performance bottlenecks in in, in the application?
Lazar Nikolov [00:26:38]:
Yeah. So, there's also database, monitoring. So whatever you use, you you essentially has a has a an integration for your database driver. There's a built in for postgres. There's also for Prisma if you're using Prisma. But if not, there's a way you can, you can manually, attach, the query and also the results in the span itself when you're instrumenting. So there's also database monitoring or, like, you're monitoring the queries as well, yeah, aside from the web files. We also measure all of the resources that your page is pulling in a JS files, CSS files, if they're blocking or not, images as well, how how big they are, like, is there, like, a way to opportunities for you to optimize them, etcetera? So there's quite a quite some data getting pulled into the Sentra dashboards for you.
Dan Shappir [00:27:45]:
Cool. One of the biggest challenges with performance monitoring in the field, or RAM, as I mentioned, is the concept of attribution, which basically means, let's say I have a bottleneck. My, larger my LCP, largest content for paint, is high, which means it takes a long time for the primary content to be displayed from the session start, which is fine and good. But I want to understand why it happens and what I should change Or even potentially more challenging, the new metric, INP, interaction to next paint has to do with how often the main thread is blocked by, for example, for example, long running JavaScript code. But there might be a lot of different JavaScript code that is running inside the context of my session. Could be first party code. It could be third party code. For example, various tag managers, pixels, and whatnot.
Dan Shappir [00:28:58]:
So and I want to understand, you know, which one is the one that's causing the most harm, the most damage, and that requires some sort of mechanism of attribution. So, how do you go about that?
Lazar Nikolov [00:29:15]:
Yeah. So, I feel like you have all the tools, to fix all these things. For example, if you're looking into INP, then maybe capturing a profile and looking at a profile, of, you know, of your running application can actually shed some light on what is happening in the background and why your, you know, your website is experiencing INPs, at the time of, I don't know, interacting with the element?
Dan Shappir [00:29:44]:
So it's it's not profiling on the developer's machine. Not like opening the Chrome Dev Tools and running the profile tab. It's actually collecting profile profiles for for the actual real user sessions.
Lazar Nikolov [00:29:58]:
Exactly. Yeah. It's configurable. Yeah. You can also pull in profiles and, you know, use them to debug INPs or whatever it is. Yeah.
Dan Shappir [00:30:10]:
So can you give us, like, a concrete example? I'm sure that you work with various cost, customers on that. So without naming names, can you, like, give us an example of, like, an interesting case that you ran into and were able to debug in this way?
Lazar Nikolov [00:30:25]:
I don't work with customers, so I can't give you like, yeah, I can give you examples from my demos that I create.
Dan Shappir [00:30:32]:
Go for it.
Lazar Nikolov [00:30:33]:
But, yeah, I I usually just use a trace trace view because the trace view basically tells me everything I need to know in terms of how the page loads or in terms of how my operations my custom operation flows are behaving because I define them in in my apps. But it basically, that's that's how I I go about debugging any of the, like, the top level stuff. But then, as I mentioned, like, if I have DB problems, I'll just use the queries product. And the idea is that it's it's all here. It's all connected to the same data. And on the sidebar, you have all of these tools, that we're we're talking about, and it's all connected to the same trace.
Dan Shappir [00:31:30]:
And what about integrations with, various development environments? Like, let's say I'm using, I don't know, Next JS or or Nox or RemX or whatever. You know, there are like a 1000000 frames or frameworks out there, Astro. Mhmm. What kind of integrations do I get? Are you just looking at it as is JavaScript and the web, but it's all the same? Or are you, or do you have, like, specific integrations for the different frameworks and meta frameworks?
Lazar Nikolov [00:32:06]:
Yeah. We do we do have specific integrations with all of the different frameworks and a lot of them that are currently out there, people that are using or not using. We do that because we wanna tap into the framework itself or the library or the tool, whatever it is. We do that because we wanna utilize the the functionalities that the tool itself is providing when it comes to either instrumenting or, monitoring for errors, etcetera. So a lot of the let's say we're talking about instrumenting. Right? We're talking about tracing. A lot of the operations are already going to be instrumented because we tap into the tool itself and also connecting the, clients or the projects, I would say, we don't really care about that as well because when when one project makes an API or a request in any way to, to a different one, we can also the SDKs will automatically create that trace for you. So a lot of the times it's enough to just install Sentry in all of your projects, set up the SDKs and you'll get a lot of data already configured and ready for you.
Lazar Nikolov [00:33:33]:
But if you wanna get into more, details, then you do the you do the tracing, basically. But web vitals, it's automatic. Session replay, you just need to add the integration. The tracing is covered as much as the framework can cover, but if you have more details or like if you wanted tracing to a big, into a greater detail, then you can always supplement, the the the trace with, you know, additional spends.
Dan Shappir [00:34:08]:
By the way, out of curiosity, do you use, like, the built in browsers, performance dot mark and performance dot measure? Or do you have, like, your own wrappers for the spends?
Lazar Nikolov [00:34:20]:
In some places, we do, but we usually use, for example, for core web vitals, there was this, JavaScript library, and I think it was from the the Google Chrome team.
Dan Shappir [00:34:32]:
Yeah. We
Lazar Nikolov [00:34:32]:
we use that under the hood to capture the data.
Dan Shappir [00:34:36]:
Okay. What are things that you are I don't know if you you can answer that if you're not primarily working with customers, but what are the things that you're seeing as the most common sources of performance issues with people who are using Sentry for monitoring?
Lazar Nikolov [00:34:58]:
Yeah. Well, I've, talking with people, I've, I've noticed that there's a lot of, undefined errors, for example, in in JavaScript land where, you know, you're trying to access a property of an undefined variable or object, you you get the undefined error that is pretty much the most common one. And also, like, failed to fetch, fetch errors not, happening, because of, I don't know, the URL doesn't exist or something like that. I've seen that way too many times, maybe.
Dan Shappir [00:35:31]:
Yeah. But those are general errors. I'm I'm asking what sort of performance related issues are you primarily seeing?
Lazar Nikolov [00:35:41]:
I don't think I can, there's a lot of n plus one when it comes to, like, API request and also DB queries. What else? I don't know. No. I haven't haven't seen I haven't been exploring too much of the customer data.
Dan Shappir [00:36:01]:
I'm seeing from what an interesting split between 2 types of issues.
Lazar Nikolov [00:36:09]:
Mhmm.
Dan Shappir [00:36:09]:
Like, there's the issues related to, let's say, largest contentful paint or LCP, which are about how quickly a website loads. Mhmm. And that's not always interesting for all web applications. I mean, it's interesting if you're building an ecommerce website. But if you're building some sort of a dashboard or or something like that, or you're sitting behind authentication, then you don't care about it as much. Basically, RCP is really important when your when your page is ranked. If your page is not getting scanned and ranked, then you you often don't really care as much about that. And in these days of of service side rendering, SSR, it's often not even the RCP may not even be dependent on the performance of database calls.
Dan Shappir [00:37:02]:
So that's, like, one category of performance that I'm seeing, which has to do with how well have I configured my CDN? How well have I configured caching for my files? Am I using properly optimized images? Am I properly loading my phones and CSS and stuff like that? So that's, like, one category of performance issues that I see. And the other, which is one that you kind of mentioned a couple of times, has to do with once that web application is already loaded, how well does it respond to the various operations that I that either user do inside of it? In which case, it is often has to do with, like, when when a certain operation is performed, how many Ajax calls does it get translated into? How how are these are these call parallel paralyzed or or are they sequential? If they hit the database, which they often are, how optimized is that database query, for example? Like, you know, have I properly created the indexes in the for the SQL query that I'm performing? You know, stuff like that. So these are kind of like the 2 categories of performance issues that I'm primarily seeing. Does that kind of match your understanding?
Lazar Nikolov [00:38:28]:
Pretty much. Yeah. Yeah. And then that's how we're also building the product to, to help developers with these two types of, of issues.
Dan Shappir [00:38:42]:
Okay. So if we're going to to the talking about the database issues. So you said you've got integrations with the major, database providers or database platforms, will it work, like, with any back end that I might have? Is the integration in the at the database layer? Or does it need to be some sort of I don't know. We support Node, but we don't support Go. I don't know.
Lazar Nikolov [00:39:08]:
Yeah. We we do that through the backend, I would say. But it, it, it, it depends on what it's, what is the driver that is being used to access the the database. So we're not basically, monitoring the database server or instance itself, but how the the the client that uses the driver to query the database from that point of view. So it it matters if there is support for the client itself or the back end framework or technology. That is the first thing. And then the second thing is what driver does the back end use to query the database for data? So that is the second thing.
Dan Shappir [00:39:57]:
So which backend technologies do you support?
Lazar Nikolov [00:40:01]:
Oh, a lot. We have Node. We have, Django and anything Python. We have PHP stuff
Charles Max Wood [00:40:12]:
like Laravel, etcetera. I've used it with Rails.
Lazar Nikolov [00:40:15]:
Rails. Yeah. A lot of the the the I would say all the most the the the famous ones, like, the most popular
Dan Shappir [00:40:22]:
JVM, I would assume, probably.
Lazar Nikolov [00:40:25]:
Yeah. There's, like, Java stuff. Yeah. I don't know all of them, but there's, like, too many.
Steve Edwards [00:40:31]:
Yeah. Just for what it's worth, my company uses Sentry on a pretty huge Laravel and Vue site. We use it pretty intensely; we see both our Laravel and Vue errors, JavaScript errors, you know, bundling errors, any number of different errors from both ends of the application. So it's very handy.
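For the front-end half of a setup like the one Steve describes, a minimal sketch with @sentry/vue might look like this; the DSN is a placeholder, and the Laravel side would use Sentry's PHP SDK separately.

```javascript
// Minimal Vue 3 wiring: @sentry/vue hooks into Vue's error handling so
// component errors land in the same Sentry project as the back-end errors.
import { createApp } from "vue";
import * as Sentry from "@sentry/vue";
import App from "./App.vue";

const app = createApp(App);

Sentry.init({
  app,
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
});

app.mount("#app");
```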
Dan Shappir [00:40:51]:
Do you also use it for performance monitoring? Not
Steve Edwards [00:40:57]:
so much. I think we have some other tools for that; we use it mostly just for the error tracking. Like he mentioned earlier, where he was talking about it being annoying, we get the emails and the Slack channel updates and all that kind of stuff. So
Dan Shappir [00:41:12]:
I think the key thing from my perspective, thinking about, you know, making the web faster, what I like about this is that there are a lot of websites that are using Sentry. Like, Sentry has become, from my perspective, almost a de facto standard for error tracking and error monitoring on the web today. And if some of these organizations that are using Sentry already have some sort of a RUM tool in place, good for them. But if they don't, then why not just use Sentry to get all this performance information and start making your website faster?
Lazar Nikolov [00:41:56]:
Yeah. I mean, I've seen clients that only use us for error monitoring also, you know, adopt our performance tools, whether or not they already have other tools in place. So I've seen them move towards Sentry as well.
Charles Max Wood [00:42:18]:
Yeah. I've seen that in a few other places and with other competitors to Sentry, you know, that either went from, hey, we do the APM, the application performance monitoring, and then they added the error stuff, or vice versa. But, yeah, effectively people start using one part of the tool, and then they run into some issue. Right? So it'll come up that, hey, our Core Web Vitals scores are not where we want them, because that got on the radar and they checked it out. Or, hey, we've got this page that just takes forever to load. Right? And so then they figure out, oh, we've been putting all this data into this system for a long time, and so now we have it.
Charles Max Wood [00:43:07]:
And so now we're gonna go look at it. And so then they start to audit what's going on in their website. And I think that's one thing that's kind of nice. The tool that I use for most of my websites for error monitoring does not have an APM component to it. And, yeah, there have been a couple of times where I've looked at things and gone, man, I really need something to, you know, pull this piece out. And so, yeah, it's really convenient to have them both there.
Dan Shappir [00:43:36]:
Another thing is that a lot of site owners and SEOs, especially SEOs, look at the Google Search Console. And in the Google Search Console, you do have a Core Web Vitals area. So they can see, like, performance issues that they have with various pages in their website. The problem with it is that the Google Search Console has, like, this 28-day smoothing window. So you often see issues 2 to even 3 weeks after they've actually started. So you say, hey, you know, I've got a problem, but it's already been affecting your website for 2 to 3 weeks. So that's problem number 1.
Dan Shappir [00:44:29]:
And then you make a fix, and it takes another 2 to 3 weeks to actually see that the fix had an impact. So, yes, you can tell Google Search Console that, like, I made a change, please review it. But it's still a chore. That's still problem number 1. And problem number 2, which I mentioned before, is this whole issue of attribution. So you can see that in a certain page or group of pages.
Dan Shappir [00:44:56]:
I don't know, CLS has gotten worse. But, like, why? What caused it? You know, what change have we made over the past 3 weeks? And there can be so many things. It might be somebody from marketing who just made some sort of change in the page layout or content. It might be, I don't know, an improperly configured CDN, or maybe, you know, you misconfigured your webpack or whatever and you're creating much larger bundles. Or maybe somebody in marketing added another pixel. It could be so many things that have an impact. And this is where good attribution, like Sentry and the other RUM providers, at least the good ones, can provide so much value over what Google gives you out of the box.
Charles Max Wood [00:46:04]:
I was just gonna say, because you were listing examples, when you mentioned adding a pixel, that's the one that I've seen come up. But the trick is that a lot of times it doesn't show up in your code, because you're using something like Google Tag Manager. And so, right, the changes aren't in the code base; the changes are in the tool. So just be aware of what you're allowing to modify your page that may or may not be the code.
Dan Shappir [00:46:34]:
Exactly. I mean, you know, let's say you've pinpointed the time of the degradation, and then you start doing, let's say, a git bisect, and you find that nothing's changed, because it's not in your code. Like you said, it might be that somebody in marketing using Google Tag Manager just added another, you know, pixel that's really killing your performance. But again, it's totally not reflected in your code base.
Lazar Nikolov [00:47:01]:
I've also seen cases where a parallel happens. So it's not just that from this point on, or, like, from this commit on, the performance gets worse. But, like, there are 2 parallel lines, where one group of the users are having an okay experience, but another group is not so much okay. And it didn't have anything to do with pixels or stuff like that. It had to do with the state of the application. So I've seen, like, banners that, you know, change the LCP score because they're pretty big at the top, and some users take the time to hit the x button, but some don't. Right? So for some of the users, your RUM data is going to report an LCP that's based on the banner, but for the others, it's not gonna be the banner itself or the background or whatever it is; it's gonna be some different part of the page.
Lazar Nikolov [00:48:02]:
So there are, like, also parallels of data.
Dan Shappir [00:48:07]:
And I've seen situations where a lot of organizations do all sorts of A/B tests.
Lazar Nikolov [00:48:13]:
Mhmm.
Dan Shappir [00:48:13]:
So they might be testing different, let's say, head titles or header messages. And the different header messages have a slightly different length. And then, due to wraparound and stuff like that, all of a sudden a different piece of content is the largest contentful piece of content to be painted, based on where you are in the A/B test. So all these things can really drive you nuts trying to figure out, hey, what's actually going on? You know, why is it that all of a sudden my page shows poor performance even though I didn't think that anything actually changed? And again, attribution is really key in being able to solve these types of scenarios.
Lazar Nikolov [00:49:04]:
Yeah. And I would say that Sentry does handle that in a really good way. For example, in cases where an A/B test is happening, or where some of the elements or the state of the application can affect the web vitals, you can always, you know, add tags to the context. So all of the data that's being measured is going to be tagged, like, banner shown, true or false, or the user is logged in, yes or no, etcetera. So that, based on the tag itself, you can filter out specific scenarios of the state of the application, and then you can zoom in on that data and see what the, I don't know, web vitals and all the other metrics look like without those cases in the mix.
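A small sketch of the tagging idea Lazar describes, using Sentry's browser SDK; the tag names and the surrounding state here are made up for illustration, but Sentry.setTag is the standard call for attaching searchable key/value pairs to what the SDK sends.

```javascript
import * as Sentry from "@sentry/browser";

// Hypothetical application state; only the Sentry.setTag calls matter here.
const bannerDismissed = false; // e.g. read from your own UI state
const abVariant = "b";         // e.g. read from your A/B testing framework

Sentry.setTag("banner_shown", bannerDismissed ? "false" : "true");
Sentry.setTag("ab_variant", abVariant);

// In the Sentry UI, web vitals and transactions can then be filtered by
// banner_shown or ab_variant to compare the two populations separately.
```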
Dan Shappir [00:49:54]:
So it's not just desktop versus mobile or Chrome versus Edge. It's also: do I have this banner or don't I have this banner? Am I on the A part of the test or the B part of the test? And stuff like that.
Lazar Nikolov [00:50:09]:
Exactly. Yeah. And you already have the tooling in the SDK and in the dashboard in Sentry to do all the slices that you need so that the data makes sense. Cool.
Dan Shappir [00:50:27]:
Yeah.
Charles Max Wood [00:50:32]:
So you said you haven't gotten into the customer data a ton. I'm kind of curious if there are case studies, though, where somebody basically demonstrated, you know, some kind of major shift in their performance, or a major shift in their web vitals, or a major shift in, hey, this was our moneymaker page, and it got faster, and then we made more money. I don't know which of those you might be aware of, but that would be cool to hear about.
Lazar Nikolov [00:51:02]:
Yeah. Totally. I mean, internally, you know, we always talk with the customers, and sometimes we even do events like workshops where we get people together on Zoom and we talk with our clients or people who use Sentry. I remember I also did one with a person who used Sentry in a React Native application, and we talked about their experience, etcetera. So sometimes we do publish these kinds of interactions and conversations, either through video formats or on our blog. So there is some stuff like that. Yeah.
Charles Max Wood [00:51:47]:
How do I find those?
Lazar Nikolov [00:51:51]:
Not sure if there's, like, one page that lists all of them, but if there isn't, then that's a really good idea, and I'm gonna take it up and see if we can build it. I'm not sure. Maybe either the blog or the YouTube. There's probably a playlist on the YouTube channel.
Charles Max Wood [00:52:10]:
Okay. Any of the rest of you, do you have questions, or should we go to picks?
Dan Shappir [00:52:28]:
I've covered my bases. Lazar, is there anything else that you specifically want to add?
Lazar Nikolov [00:52:36]:
I don't think so. Yeah. I think we had a really good discussion around tracing and performance and stuff. Cool.
Dan Shappir [00:52:45]:
I have to say, just to conclude, that I'm a huge believer in real user measurements. You're always surprised if you just go by synthetic measurements. You know, it's highly likely that you're not actually testing what your users are really experiencing. So that's number 1. And number 2 is that production will always surprise you. Mhmm. And I've seen things. So just trying to rely on synthetic tests and simulated environments, that's just not enough.
Lazar Nikolov [00:53:25]:
Yeah. Have you seen the report? I think it's a few years old, but there was a report that took the Lighthouse data for, like, the top I-don't-know-how-many websites with their scores, and then mapped them against their data from the CrUX database. And it turns out that, like, 43% of good Lighthouse scores don't even meet the minimum of the web vitals from the CrUX database.
Dan Shappir [00:54:01]:
That's interesting. I've often seen the reverse: pages that actually have good Core Web Vitals but seem not to have good simulated scores, especially for mobile, because Google really simulates a low-end device that's slower in many cases than what users actually have.
Lazar Nikolov [00:54:22]:
Yeah.
Dan Shappir [00:54:23]:
But the reverse can also happen. I know that Rick Viscomi from Google, I think, actually even wrote an article about discrepancies between the synthetic tests and the real user measurements. So, yes, I think that the best option is really to use both: during the development cycle, you're using the synthetic tools to make sure that you don't degrade before pushing to production, but then you also have tools monitoring your production environment to catch all the things that slip through, because inevitably things will slip through. And like we said, you know, changes might be a result of things that have nothing to do with the actual code. They might have to do with pixels, with images. I've seen, for example, cache headers for files misconfigured.
Steve Edwards [00:55:14]:
Mhmm.
Dan Shappir [00:55:15]:
Or I've seen a situation where somebody accidentally turned off all the compression for all the files at the CDN, so you're suddenly downloading, you know, 5 times as much data as before. So I've seen so many reasons for poor performance that might not be caught by synthetic tools that only check at build time.
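One lightweight way to get the kind of field data Dan is arguing for, independent of any particular vendor, is Google's open-source web-vitals library plus a beacon to your own collection endpoint; the /rum endpoint here is hypothetical.

```javascript
// Collect LCP, CLS, and INP from real users and ship each metric to a
// hypothetical /rum endpoint for later slicing and alerting.
import { onLCP, onCLS, onINP } from "web-vitals";

function report(metric) {
  // metric includes fields such as name, value, id, and rating
  navigator.sendBeacon("/rum", JSON.stringify(metric));
}

onLCP(report);
onCLS(report);
onINP(report);
```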
Steve Edwards [00:55:38]:
I guess you could really say, based on the report, that that was the crux of the issue. Right?
Dan Shappir [00:55:44]:
Yeah. I think you made the same joke when we had Rick Viscomi on the show.
Steve Edwards [00:55:50]:
Hey, if it works, you know, keep using it.
Charles Max Wood [00:55:54]:
Some people hadn't heard it yet.
Steve Edwards [00:55:56]:
That's Sorry.
Charles Max Wood [00:56:00]:
All good. Alright. Well, let's go and move on to the picks. Before we do that though, Lazar, how do people find you on the Internet if they have questions or wanna chat or whatever?
Lazar Nikolov [00:56:10]:
Yeah. I mean, I try to keep a consistent username, but let me just change it real quick so everyone can see it. It's at Nikolov Lazar, and this is what it looks like.
Charles Max Wood [00:56:24]:
Alright. We'll also put that in the comments on our various streaming platforms, and I can put it in the show notes as well. That way people can look you up. All right. Well, let's go ahead and do the picks. Steve, do you wanna start us off for picks?
Steve Edwards [00:56:47]:
Going for the high point early again. Okay. I can appreciate that. So, before I get to the dad jokes of the week, AJ, I had mentioned earlier that I thought of a pick. You inspired the pick when we were talking about your aquariums. Back in the eighties, there was this comedian, and he was the king of puns. He was, I guess you could say, one of my idols. His name was Kip Addotta, and he has a song called Wet Dream that is all about fish puns.
Steve Edwards [00:57:21]:
And, you know, it starts out with how he was driving in downtown Atlanta... Atlantis, and his Barracuda wasn't working, so he was driving a rented Stingray. Anyway, there's a great line in there where he's trying to pick up some hot fish in a bar, and he asks her, what's your sign? And she says, aquarium. And he says, great, let's get tanked. But, anyway, it's an all-time classic. If you wanna check it out, there's actually a video, very early MTV style, on YouTube, called Wet Dream. So, the dad jokes of the week.
Steve Edwards [00:58:03]:
Oh, I just lost them. Sorry. Stand by. Did you know? It turns out that you can actually hear the blood flowing through your veins; you just have to listen very closely. Varicose veins, very costly. Sorry. I flubbed that one.
Steve Edwards [00:58:24]:
Along the lines of the fish puns: what do you call a shrimp that is always getting injured? He's accident prawn. And then finally, the other day, I went to see my doctor about this issue I've been dealing with, and he said, well, do you wanna hear the good news first or the bad news? I said, good news, please. He says, we're naming a disease after you.
Dan Shappir [00:58:54]:
Yeah. You don't want that.
Charles Max Wood [00:58:56]:
Yeah. Right?
Steve Edwards [00:58:57]:
Yeah. It's funny. There's, in the fire service, they say, if there's a drill named after you, it's because something really bad happened, which is generally true.
Charles Max Wood [00:59:10]:
Alright. AJ, what are your picks? You are muted.
Dan Shappir [00:59:14]:
I think his first pick is the unmute button.
Lazar Nikolov [00:59:18]:
Where is it?
AJ O'Neil [00:59:19]:
Okay. I found it.
Steve Edwards [00:59:20]:
Yes. It is. Yeah.
AJ O'Neil [00:59:24]:
Let's see. I was just, looking to find something to pick.
Steve Edwards [00:59:30]:
It's because he had no idea that we were gonna have picks today. Right?
AJ O'Neil [00:59:34]:
I had no idea. Total surprise. No idea. I was completely surprised. I was taken aback. Oh, gosh. Let's see. Well, this is not really a pick so much as a thing that happened.
AJ O'Neil [00:59:50]:
Okay. I got a couple. I saw the movie Being There. Steve, have you seen that movie?
Steve Edwards [00:59:59]:
Yes. That is such a weird movie. When I was in college, one of my Spanish professors had recommended it. She really liked it. I'm a huge Peter Sellers fan from, you know, the Pink Panther movies, the Inspector Clouseau stuff that he'd done. But, yeah, Being There is just... it's interesting.
AJ O'Neil [01:00:17]:
I don't know that it's interesting. It's weird. So it's critically acclaimed. It's on some lists of, you know, best movies you gotta watch. My wife and I have watched all of the TV shows and movies that have come out in the last, I don't know, 10 years that are worth watching, and there are not very many of them. And, like, nothing new comes out. It's all very, very meh. And so we decided to go backwards to find some older stuff, some stuff that people really felt had meaning, was done well.
AJ O'Neil [01:00:50]:
And we landed on Being There, and the reviews on it were so high. The trailer looks so weird, but the trailer was a very accurate depiction of the movie. It really made no sense. There was no plot. It was basically depicting someone who is semi-autistic slash echolalic, who just says a few things here and there and ends up, like, next to the vice president, essentially. But it was... I don't know. So I
Dan Shappir [01:01:28]:
don't know.
AJ O'Neil [01:01:29]:
That's just something that happened. But something that's good: I finished listening to the first book of The Expanse, and I have to say, overall, I do like the book better than the show. For the first half of the book, I think I liked the show better, because the show gives a little bit more; like with The Hunger Games, you only get a certain perspective in the book, but then in the movie, they get to tell you things that are going on in the rest of the universe that you can't see from the main character's perspective. In the book, they do have a couple of main characters, but I guess where, in the second half of the book, I liked it better than the show was that it's just much more focused. The show progressively got worse and worse as they tried to make the characters more extreme, introducing more characters and then just having them yell louder and cuss stronger. And that kinda got old, but the book just stays focused on the few characters that are the important ones that drive the story. It doesn't try to introduce a bunch of others.
AJ O'Neil [01:02:41]:
And, I mean, like, the book's got plenty of language and whatnot too, but it's not the same where it's just like, oh, she's on screen now. Cue the f word. Yep. That was her line. Yep. Okay. Now we move on to the next character. Oh, his line is angry drunk.
AJ O'Neil [01:03:00]:
Got it. Okay. And we move on to the next character. In the book, at least for the first book, they develop the characters a lot better. And so I didn't think I was gonna listen to the whole book series. I don't know if I will, but I am going to pick up the second book at some point. I've got a whole backlog of Audible to do. And then the last thing is, again, not really a pick per se, but just an experience.
AJ O'Neil [01:03:29]:
So while we were on the show here, I did try to get Sentry self-hosted installed. I ran into some Docker issues because, you know, Docker. But I'm glad to see that it's available for self-install, and I'm glad to see that the install script can kind of resume when it hits a hiccup. I'll play with it a bit more and see if I can find, like, the right version of Docker to host on the right type of VM to actually get it to install, because I would like to see how that works. But the documentation looks pretty decent. I do wish that it had, like, just the scripts to run the installers without having to deal with Docker. Just say, okay, like, you gotta use Debian for this.
AJ O'Neil [01:04:16]:
Just tell me what the operating system is that Docker would have used. Let me install that operating system, and let me just, like, run the scripts without having to deal with Docker. Because Docker, man, it's such a pain in the butt. You know? But I'm glad to see that it's there. And I was trying it out, and the documentation looks good. And like I said, I like that it seems to gracefully restart. As you hit one issue, you can solve that and then restart the install again, and it'll pick up where it left off. And that's always very nice.
AJ O'Neil [01:04:48]:
So, plus 1 to the Sentry self-hosted on that, and, hopefully, I'll get it all the way. And that's the end of my ramblings for today.
Charles Max Wood [01:04:58]:
I just wanna chime in on the Expanse stuff. Also, I'm just gonna put it out there: I very much prefer the Docker setups to the other kinds of setups. But The Expanse in particular, you're going to find through the whole TV series and through the whole book series that your observation mostly holds up for all of the other books. A couple of things that bothered me a little bit: some of the books are spaced out over years. Right? So you have one book, and then a bunch of stuff happens, and then the next book starts. And a lot of times, there's a novella that fills in the gaps. Some of the novellas aren't as good.
Charles Max Wood [01:05:42]:
But the other thing is that those gaps and the things that happen in those gaps are kind of important. And so the way that they try and shoehorn some of the plot points to keep it more or less continuous didn't really work. And so when they ended the TV series, they actually left off the last book and, you know, a bunch of other stuff that I kinda wish they'd done. So, anyway, but overall, they did an excellent job on the TV series. The other thing to keep in mind with the TV series is I think the Syfy channel did the first 2
Steve Edwards [01:06:15]:
3, season 3.
AJ O'Neil [01:06:16]:
And then Amazon picked it up.
Charles Max Wood [01:06:18]:
Amazon picked it up. And when Amazon picked it up, it got better. So
AJ O'Neil [01:06:23]:
I know the first season of the TV series was pretty good, and then either the 3rd or the 4th one was pretty good. But, yeah, I think it was the second one that was just, like, out of nowhere. It's like they just... I don't know if the book's that way. Is the book like a completely different, unrelated story for
Dan Shappir [01:06:42]:
the second book? No. I don't I don't remember for sure, but, no, I don't I
Charles Max Wood [01:06:43]:
don't remember it being that. So
AJ O'Neil [01:06:48]:
Yeah. Because they go into this thing about Mars and then the humanoid aliens. And then, like, that storyline is dropped, and it's never picked back up again. So I don't know if the book has that or if that was just them padding the TV show.
Charles Max Wood [01:07:04]:
The continuity... I remember the second season being mostly based on the second book. But, yeah, the continuity in the books is really, really tight.
AJ O'Neil [01:07:13]:
So Okay. Cool.
Charles Max Wood [01:07:16]:
I'll
AJ O'Neil [01:07:16]:
look forward to the second one
Steve Edwards [01:07:18]:
then.
Charles Max Wood [01:07:18]:
Yeah. Alright. Dan, what are your picks?
Dan Shappir [01:07:24]:
Okay. So I have a couple of picks today. My first: since we've been discussing performance and the impact that performance can have on the success of a website, there's this excellent website for web performance, well, web performance and web development, that Google created, called web.dev. And they have a section there, web.dev/case-studies. We'll put the link in the show notes, obviously. But it's got lots of case studies of companies that improved their performance, or certain aspects of their performance, and the benefits that they've gained as a result of these improvements, like actual numbers and actual testimonials and figures and stuff like that. So if you need to prove to your, let's say, management why it's worthwhile investing time, effort, maybe money into improving the performance of your website or web application.
Dan Shappir [01:08:28]:
You know, you can go there and you'll find a lot of relevant content. So I think this is a useful resource in the context of what we've been talking about today. So that would be my first pick. My second pick: I've mentioned that we've been clearing up our library, and I found various books that I haven't read in a while and was deciding which ones to keep and which ones to let go, basically donate, and also which ones to reread. And I think I've mentioned before that I'm actually reading a series of books called The Saga of Pliocene Exile. It's a series of books from the eighties written by Julian May. She was a sci-fi slash speculative fiction author. It's kind of an interesting work in the sense that it's kind of midway between science fiction and fantasy, in that it's supposed to be science fiction based, but it gives a lot of fantasy vibes.
Dan Shappir [01:09:35]:
From my perspective, it's an excellent series of books. There are 4 books in the series, and since it's from the eighties, they're all written, so you don't have to worry about, you know, an incomplete series of books. It goes from start to finish. They're pretty thick. Lots and lots of characters. Lots of character development and character interactions. And it's just a great series of books.
Dan Shappir [01:10:00]:
Lots of action and adventure. But also, she really fleshes out the various characters. One complaint that I've heard about the books, and I can see where it's coming from, although I don't necessarily agree with it, is that the depiction of the LGBTQ, let's call it community, or people that identify as such, especially trans people, is not ideal. It may also have to do with when the books were written, but I'm just putting it out there in case it might impact the decisions of some people to read it. As I said, I think the books are really good. But, again, this is my own personal opinion. So that would be my 2nd pick. And I mentioned them before.
Dan Shappir [01:10:52]:
It's just that it's, like, a long series of books, so it's taking me a while to read through them. And, by the way, I can't really deal with audiobooks. I have to actually read the book. I don't know. When somebody is reading it out to me, it kind of feels weird to me. Maybe it's just me, but that's the way it is.
Dan Shappir [01:11:15]:
So that would be my 2nd pick. My 3rd pick: I'm also very much a history buff, or history fan, especially of ancient history. And given the fact that I live in Israel, also history of the Middle East, which has a lot of history and a lot of ancient history. And I found this series of lectures called The Rise of Ancient Israel with Professor Israel Finkelstein. He's a professor of archaeology at Tel Aviv University. He's done some of the most significant archaeological digs in Israel, certainly in recent years. And it's a very long series of conversations that he has with one of his students, who actually recorded this series. There are 21 discussions, and they are something like 40-something minutes long each.
Dan Shappir [01:12:18]:
And they talk about, you know, the evolution of the ancient kingdoms of Israel, of, you know, King David, and before and after. It's really interesting. And if you're into that... I do have to caveat this with the fact that he takes the Bible as a serious source of historical information, but when there's a conflict between the story in the Bible and the archaeological findings in the field, he will side with the archaeological findings. Or, put another way, he sees the Bible not as a historical book so much as a religious book and an ideological book that is based on historical events. So
Steve Edwards [01:13:21]:
Considering the archaeological veracity of the Bible is something I read up on quite a bit too. That's sort of an... Oh,
Dan Shappir [01:13:27]:
there's a lot of historical veracity. He's not denying it. But, again, when there are conflicts, and there are some conflicts between the two... Potentially. But, you know, the point that he likes to state a lot about archaeology is that you can only go by what you have found, and that maybe tomorrow you'll find something new that completely changes your point of view. Yep. That's very true.
Dan Shappir [01:13:58]:
But again, you can only go based on what you've found. And, while absence of evidence is not evidence of... well, how does it go?
Steve Edwards [01:14:07]:
I know exactly what you're saying. Yes.
Dan Shappir [01:14:09]:
But still, you know, if there's no evidence for particular events where you would expect evidence to be abundant, it does say something, or at least it raises some significant questions. Anyway, I highly recommend it. It's an excellent conversation. I put the link to the entire playlist. If you're into that, it's very highly recommended. Cool. And my final
Charles Max Wood [01:14:36]:
That sounds really fascinating. Yeah. There goes my week.
Steve Edwards [01:14:40]:
Yeah. Right.
Dan Shappir [01:14:42]:
Yeah. I would be curious, you know, as to what you get from it. It's really, really informative. And my final, I won't call it a pick, I'll call it a mention: today is Holocaust Memorial Day in Israel. And this one is especially hard because we still have 132 hostages being held in Gaza by Hamas. So it's kind of... I wouldn't call it a modern-day Holocaust.
Dan Shappir [01:15:15]:
It's not quite up there, but it makes everything harder. And, you know, we don't even know how many of them are still alive and how many of them have been murdered or tortured to death. Mhmm. And, you know, you swing between hope and feelings of hopelessness, and it's really hard. So, anyway, those would be the picks and mentions that I wanted to make for today. And over to you, Chuck.
Charles Max Wood [01:15:50]:
Alright. I'm gonna put out my picks and then we'll let Lazar do his. So I always start with a board game. In this case, I'm doing a card game. This one's called Hanabi. Hanabi is the Japanese word for fireworks. And, the game's pretty simple. You are dealt a hand of cards.
Charles Max Wood [01:16:10]:
It's usually 4 cards. You hold them facing everybody else. You don't know what cards you have. You can see what everybody else has, but you can't see what you have. And then what you do is you can either play a card; so if you know what you have, or you think you have a good idea of what you have, then you can play the card. And what you're trying to do is get stacks of all the colors to go from 1 to 5. And there are three 1s, two each of the 2s, 3s, and 4s, and then one 5 of each color.
Charles Max Wood [01:16:44]:
And so you can play a card, or you can discard a card. The way we always play is we always let everybody know: I'm discarding off of the right-hand side of my hand. Right? So if people don't want you to get rid of it, because it's a 5, and if you discard a 5 you lose because you can't play it then, right, then people will clue you. And so that's the last option: you can give a clue, and a clue is, these cards are white, or these cards are yellow, or you could do, these cards are twos. Right? And so, anyway, you have to kind of keep track of where stuff is in your hand without being able to see it. And so, anyway, it's a super fun cooperative game.
Charles Max Wood [01:17:27]:
I like it better than a lot of the other cooperative games because, like, when I play cooperative games, there's one in particular that, when I play it with my wife, it's me and her and anyone else who's playing, and she's telling us all what to do. And I just don't love playing a game where I'm watching somebody else play my game. So, anyway, this one's different, because you can't do that, because you are missing information. So yeah. And we usually chitchat while we're doing it. You just have to be careful. Because if I'm holding my cards up and I know that I have a particular card, I may have inferred that from the clues I got and the fact that I can see the other players' hands. And so if you know what a card is, you can always say, I know that this is the 4 of white, because you might've inferred that from the fact that they told you it was a white card and, you know, from the discard pile and the other hands that, right, it can't be anything else.
Charles Max Wood [01:18:30]:
So anyway, super fun game. You can buy it; it's like $10 on Amazon. And then the other picks I have, I have a couple of them. One of them is a movie that my wife and I saw last week or the week before. It's called Escape from Germany. And it's a true story about Latter-day Saint missionaries that were in Germany when the war started. And so, you know, as you can imagine, the Germans were somewhat hostile toward Americans. And they were also hostile toward missionaries, because they were hostile towards certain kinds of religion.
Charles Max Wood [01:19:20]:
And so, anyway, it's just a series of miracles on how they got all those missionaries out of Germany. And I really, really enjoyed it, so I'm gonna pick that. It's done by T.C. Christensen, who's the guy that did The Other Side of Heaven. And so if you like that movie or that brand of movie, then definitely check it out. And then the last pick I have, besides telling you to go check out javascriptgeniuses.com, is... so Brandon Sanderson, last year, he put up a YouTube video and a Kickstarter and basically said, I was locked in my house during the pandemic, so I did what I do: I wrote all these books, but I didn't tell anybody about them. And so it's a series of books he's called the Secret Projects.
Charles Max Wood [01:20:12]:
And I listened to the first book on Audible. It's called Tress of the Emerald Sea. I'll put an Amazon link to the Audible book; you can buy it with a credit. But this one's part of the Cosmere. So he has a universe he writes a number of his books in, and you can kind of see some of these worlds converging, right, beginning to converge, because you have crossover with some of the characters. Usually it's minor characters, not major characters.
Charles Max Wood [01:20:47]:
But anyway, this one is in that vein. The narrator of the book is Hoid. If you've been following along with Brandon Sanderson's stuff, he wrote the whole thing in Hoid's voice. And Hoid is one of the main characters in this book. But it was a fun book, a really fun book to listen to. So if you're into audiobooks, or if you want to, I guess you could just pick up a copy of it. With the Kickstarter, he mailed out a book every month along with a bunch of other stuff. But now you can go and get the books without being part of the Kickstarter.
Charles Max Wood [01:21:23]:
So a year later, right? So last year, if you backed the Kickstarter, you got this book in January and you got the next one in February, March, April. And now the first four books are out, because we've gone through April. And so I'm assuming that the 5th book in the Secret Projects series is gonna come out pretty quick here, because we're into May. So anyway, I really, really enjoyed that book. Those are my picks. Alright, Lazar, what are your picks?
Lazar Nikolov [01:21:56]:
Question: do the picks need to be non-technical, or
Charles Max Wood [01:22:01]:
They can be technical or nontechnical.
Lazar Nikolov [01:22:04]:
Okay. Because I only got technical. I'm a... That's
Charles Max Wood [01:22:06]:
it's all good. I'm always into new stuff, and I'm starting to get into AI. So I'm gonna start picking some of that stuff too.
Lazar Nikolov [01:22:13]:
Oh, cool. Yeah. So I didn't know that I needed to prepare picks, but something on top of my mind: I'm gonna mention Sentry, of course. Check it out. The free tier is generous enough for you to get started, so you should check it out. But when it comes to picks, I got one interesting one, and that is a project by Joan León.
Lazar Nikolov [01:22:38]:
I'm sorry if I'm butchering your name, but, and I'll drop the links here so you can check it out, it's basically a collection of web performance snippets that you can, not install, but move into your web browser, so you can, you know, check out what the LCP element is, or whatever it is that the snippet provides. There's also INP. There's also a whole loading category. But it's basically for dev time, before you commit what you have: if you wanna check out what the performance looks like on your machine, you can check out all of these beautiful snippets. So those are at dev time. One more new thing that I came across, and Henri Helvetica told me about this, is the RUM Archive. It's basically like CrUX, but it's data taken from Akamai, and it's put together in a database where you can query it.
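As one example of the kind of paste-into-the-console snippet Lazar is describing (this particular one is written against the standard PerformanceObserver API rather than copied from that collection), you can log LCP candidates as the page loads:

```javascript
// Logs each largest-contentful-paint candidate the browser reports,
// including the element responsible; run it in the DevTools console.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log("LCP candidate:", Math.round(entry.startTime), "ms", entry.element);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```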
Lazar Nikolov [01:23:44]:
And I haven't been playing too much with it, but basically you can use this database to see RUM data, and you can split it by different frameworks, etcetera. So I can plug that in. And then also I've been playing with a project called Aceternity UI. It's a collection of UI components for React built with Tailwind and Framer Motion, and they look really good. And I tried using them, but some of them are really making an impact on the performance. So I'm looking into these components right now and figuring out how, or if, I can make them a bit more performant. Right? So instead of using Framer Motion, can we do that with plain CSS, so we're not introducing or shipping too much JavaScript to the client? So these are the things that are on, you know, top of my mind.
Lazar Nikolov [01:24:52]:
And I I would have been more prepared, but I'm sorry.
Charles Max Wood [01:24:56]:
No, it's all good. Thanks for coming. This was a lot of fun. It's good to kinda dive into some of these tools that a lot of people use. I also found a lot of applicability, like I said, for people who are using things that are like Sentry but are not Sentry. Some of those features are there; some of them are not.
Charles Max Wood [01:25:16]:
But, yeah, I think I have a much better idea in some of these areas, especially on the APM, the performance side, of what I can grab. So thanks for jumping in, and until next time, folks, Max out.