CrUX and Core Web Vitals - What to Measure on the Web with Rick Viscomi - JSJ 486
Rick Viscomi joins us from Google to talk to us about the Chrome User Experience Report (CrUX) and the HTTP Archive. He explains what it tells us about how the web is built, how it performs, and what we know about the web today.
Special Guests:
Rick Viscomi
Show Notes
Panel
- Aimee Knight
- AJ O'Neal
- Dan Shappir
- Steve Edwards
Guest
- Rick Viscomi
Links
- JSJ 334: “Web Performance API” with Dan Shappir | Devchat.tv
- JSJ 428: The Alphabet Soup of Performance Measurements | Devchat.tv
- Is my host fast yet?
- Twitter: Rick Viscomi ( @rick_viscomi )
Picks
- Aimee- SparkPost
- Aimee- BigQuery: Qwik Start - Console
- AJ- SendGrid
- AJ- Tuscan Dairy Whole Vitamin D Milk
- AJ- The Twelve-Factor App
- AJ- webinstall.dev/fzf
- Dan- Great TV
- Dan- Keep daylight savings time all year round
- Rick- Vsauce - YouTube
- Rick- Uranium Ore
- Steve- The State of CSS Survey
- Steve- GitHub | State of JS 2020 Questions
Contact Aimee:
- Aimee Knight – Software Architect, and International Keynote Speaker
- GitHub: Aimee Knight ( AimeeKnight )
- Twitter: Aimee Knight ( @Aimee_Knight )
- LinkedIn: Aimee K.
- aimeemarieknight | Instagram
- Aimee Knight | Facebook
Contact AJ:
- AJ ONeal
- CoolAJ86 on GIT
- Beyond Code Bootcamp
- Beyond Code Bootcamp | GitHub
- Follow Beyond Code Bootcamp | Facebook
- Twitter: Beyond Code Bootcamp ( @_beyondcode )
Contact Dan:
Contact Steve:
Transcript
DAN_SHAPPIR: Hello everybody and welcome to another episode of JavaScript Jabber. Today on our panel, we have Aimee Knight.
AIMEE_KNIGHT: Hey, hey from Nashville.
DAN_SHAPPIR: AJ O'Neal.
AJ_O’NEAL: Yo, yo, yo, coming at you from suddenly all of the sudden freezing Pleasant Grove.
DAN_SHAPPIR: Steve Edwards.
STEVE_EDWARDS: Hello from Portland, where yes, same here AJ. It's in the past couple of days, it's dropped down to really, really cold at night and in the day.
DAN_SHAPPIR: And I'm Dan, Dan Shappir, coming to you from really nice and warm Tel Aviv, where the only thing I can complain about is that daylight savings is over. So it gets dark early, but otherwise I'm wearing a t-shirt and life's great. And our guest for today is Rick Viscomi. Hello, Rick.
RICK_VISCOMI: Hi everybody.
When I went freelance, I was still only a few years into my development career. My first contract was paid 60 bucks an hour. Due to feedback from my friends, I raised it to $120 an hour on the next contract. And due to the podcasts I was involved in and the screencasts I had made in the past, I started getting calls from people I'd never even heard of who wanted me to do development work for them, because I had done that kind of work, or talked about or demonstrated that kind of work, in the videos and podcasts that I was making. Within a year, I was able to more than double my freelancing rates and I had more work than I could handle. If you want a profitable, busy, and fulfilling freelance practice, let me show you how to do it in my DevHeroes Accelerator. DevHeroes aren't just people who devs admire, they're also people who deliver for clients who know, like, and trust them. Let me help you double your income and fill your slowdowns. You can learn more at devheroesaccelerator.com.
DAN_SHAPPIR: Rick, you're coming to us from Google to talk about CrUX and performance and performance monitoring, correct?
RICK_VISCOMI: That's right.
DAN_SHAPPIR: Excellent. So maybe you can say a few words about yourself.
RICK_VISCOMI: Sure. So I am in developer relations at Google. And the focus of my work is on what I call web transparency, making it possible for the whole web community to understand how the web is performing, how the web is built, and how users are experiencing it.
DAN_SHAPPIR: Great. That's really important because I think that the key aspect of being able to actually improve performance is first and foremost, to be able to understand what your performance is, to be able to actually monitor your performance.
RICK_VISCOMI: That's right. There's actually a saying: you can't optimize what you don't measure. So having the data to inform how you're actually performing on the web is super important.
STEVE_EDWARDS: Yeah, but covering the web, that sounds like a pretty huge topic. I mean, the web is nothing small, shall we say.
RICK_VISCOMI: That's true. The data sets that we look at are in the scale of millions, actually. So the two data sets that I work on directly are Chrome User Experience Report and HTTP Archive. And we're looking at about 8 million websites on a monthly basis.
DAN_SHAPPIR: That's quite the number. So you mentioned two things. You mentioned the Chrome User Experience Report, which is also referred to as CrUX for short. And you also referred to the HTTP Archive. Can you explain what both of these are, how they're similar, and how they're different?
RICK_VISCOMI: Sure. HTTP Archive was created about 10 years ago by Steve Souders. We're actually approaching its 10th anniversary next month. The mission of HTTP Archive is to track how the web is built. It started off by looking at the Alexa 1 million list, and early on a very small subset of that, and using WebPageTest to record pretty much everything that could possibly be measured about a web page: how many bytes it loads, what resources, and how fast it is. And it has grown over the years. I joined it about three years ago, in 2017, to help maintain it. And one of the big pivots in HTTP Archive's history has been making the data available on BigQuery, where anybody in the web community could access it and ask questions about whatever they're curious about. We've added more telemetry to the data set. So for example, we integrate with the Wappalyzer technology detection tool, and you can find out what percent of websites are on CMSs, and what types of JavaScript frameworks they are using. And then you can slice the data in these interesting ways to see, OK, well, of certain types of websites, are they loading more bytes than others? Slicing the data that way is a great way of understanding how the different parts of the web are built. How they're experienced is answered by the Chrome User Experience Report. That data set was launched in late 2017, and it's based on real user experience data from Chrome. Users who opt in have their experiences sent back up to Chrome, where they're aggregated and anonymized and released as a monthly report, also on BigQuery. So it is possible for some really interesting intersections to take place. So you can see, of these CMSs, how are users actually experiencing them? And you can have interesting head-to-head comparisons that way, too.
STEVE_EDWARDS: And when you mentioned the scale of the data that you're looking at, millions of sites, and you mentioned BigQuery, that was the first thing that came to my mind. It's been a while since that first came out. I remember hearing about it. Maybe for listeners who aren't familiar with it, could you give a description of what BigQuery is and how it's used?
RICK_VISCOMI: Sure. It's part of the Google Cloud Platform. And you can think of it like a big database in the sky. So all the analysts who share the data set are querying the same data in the cloud, and it operates at a massive petabyte scale. In some of these tables in HTTP Archive, we actually store the raw response bodies for text-based resources. You can imagine, with eight million websites and on average like 100 resources per page, that's a lot of responses. Some of our tables are now exceeding 15 terabytes, and we have these going back every month for about the past four years. So it's a huge data set. And being able to crunch it in a way that won't take you days is super important. And the BigQuery platform allows us to do really complex analyses. One of the things that we're doing right now for the Web Almanac is looking at the way CSS is built. And we've started running JavaScript snippets in BigQuery SQL to help us analyze and understand how CSS is being used. For example, what are the popular colors, the named colors like papaya whip and red and light blue? We can actually parse those out of the CSS using BigQuery, using some JavaScript. So it's a really powerful tool that enables you to get really rich insights. And the data set itself is free, but it is worth noting that BigQuery users do pay per terabyte. It's about 5 US dollars per terabyte if you use it above and beyond the free tier, which is about one terabyte a month. So you can get a lot of really interesting insights if you're willing to pay for it. Part of my job is to make sure that all the interesting stuff is exposed to people at a high level, like in a report on a website, so that you don't need to query it yourself, but you can still access the interesting findings.
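For listeners who want to poke at the HTTP Archive data themselves, a minimal sketch of a query using the Node.js BigQuery client might look like the following. The table and column names (httparchive.summary_pages, bytesTotal) reflect the public HTTP Archive schema as of the time of this episode; treat them as assumptions and check the current documentation before running it, since queries are billed per terabyte scanned.

```js
// Sketch: average page weight for one monthly crawl of the public HTTP Archive dataset.
// Assumes Google Cloud credentials are configured (e.g. GOOGLE_APPLICATION_CREDENTIALS)
// for a project with the BigQuery API enabled.
const { BigQuery } = require('@google-cloud/bigquery');

async function averagePageWeight() {
  const bigquery = new BigQuery();

  // Table name follows the HTTP Archive convention: one table per crawl date and client.
  const query = `
    SELECT
      COUNT(0) AS pages,
      ROUND(AVG(bytesTotal) / 1024, 1) AS avg_kilobytes
    FROM \`httparchive.summary_pages.2020_10_01_mobile\`
  `;

  const [rows] = await bigquery.query({ query });
  console.log(rows[0]); // e.g. { pages: ..., avg_kilobytes: ... }
}

averagePageWeight().catch(console.error);
```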
AIMEE_KNIGHT: We actually use it too at work for something different, basically to house a lot of our billing information for different services and then pull details about that and create different visualizations from it. So we're using it too, just like your team.
RICK_VISCOMI: Yeah.
DAN_SHAPPIR: So you mentioned a kind of cutesy example of being able to extract the various colors. But I know that there are a lot of other queries that you guys are running or have run. And you also mentioned the Web Almanac, which is built using a lot of these queries. So maybe a few words about some of the additional interesting queries that you've run, and what actually is the Web Almanac?
RICK_VISCOMI: Sure. I'll start by talking about the Web Almanac. It's a project I created last year to try to bring to the surface a lot of the really interesting insights about the state of the web. One of the problems with HTTP Archive is that even though all the data is there, getting it out in a consumable format has been pretty challenging. You need to have the SQL wherewithal to be able to interact with the data and find what you're looking for. You also have to have the expertise in the particular domain to understand what it is you're looking at. So I tried to bring the two of those together with the Web Almanac project, where we have experts from the web community guiding 20-plus chapters, or aspects of the web, CSS for example being one, and pairing those experts with data analysts and other peer reviewers. And in this way, we could extract all these insights. I kind of like to call it data mining. So you have a mine, or a wealth of information, and we're just mining it for the nuggets of insights that we think are interesting for the community, and not just interesting, but also useful. Insight into how CSS is being used is actually really helpful for CSS standardization bodies. Knowing how CSS is used in the wild can help guide deprecations: if something is not very popular at all, it might be safer to deprecate. So having that insight is super helpful in that way. It's also useful in other ways, like academic research, and for community bloggers and conference presenters to have hard data that's a shared source of truth across the whole web community. I'm really excited because we're almost wrapped up on the 2020 edition. We hope to launch it in early December.
DAN_SHAPPIR: That's super cool. The 2019 edition was one of my picks in a past episode. And it does definitely contain a wealth of useful and interesting information for anybody who really cares about the web. Another amusing tidbit that I recall seeing there is the highest z-index values that people use on the web.
RICK_VISCOMI: Yep.
DAN_SHAPPIR: Which I found to be very, very interesting. Like how many nines can a person stick in an HTML attribute, or actually a CSS property, but same thing. So yeah, the Web Almanac is actually a website, but going back to the HTTP Archive and the CrUX data, as you said before, these are accessed using BigQuery queries, for example. But I think that with CrUX, there are other ways to actually access or get to this information, at least for my own website.
RICK_VISCOMI: That's right. So individual websites, about 8 million of them are available in the BigQuery CRUX data set. And one thing I should also note is that HTTP Archive is based on those same websites from CRUX. So we have this overlap between them to enable the intersection of how it's built and how it's experienced. There are other tools that are either built on BigQuery or other internal data from Google. For example, PageSpeed Insights does expose some of the data from the CRUX pipeline. It also enables some things that you can't get from BigQuery, for example, URL level data. You could say, given a URL, PageSpeed Insights will give you both field and lab performance, lab being based on the Lighthouse tool. And that's really helpful so that you can see, how are users experiencing my page, and what can I do to make it better?
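As a rough illustration of the URL-level field-plus-lab lookup Rick describes, here is a small sketch against the public PageSpeed Insights v5 API. The endpoint and field paths match the API as documented around the time of this episode, but verify them against the current reference; you'll also want your own API key for anything beyond casual use.

```js
// Sketch: fetch field (CrUX) and lab (Lighthouse) data for one URL
// from the PageSpeed Insights v5 API. Node 18+ provides a global fetch.
const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function pageSpeed(url) {
  const params = new URLSearchParams({ url, strategy: 'mobile' });
  const res = await fetch(`${endpoint}?${params}`);
  const data = await res.json();

  // Field data: real-user metrics from the CrUX pipeline.
  // This can be absent when the page doesn't have enough traffic.
  console.log('Field:', data.loadingExperience && data.loadingExperience.metrics);

  // Lab data: the Lighthouse performance score for this one test run.
  console.log('Lab score:', data.lighthouseResult.categories.performance.score);
}

pageSpeed('https://web.dev/').catch(console.error);
```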
DAN_SHAPPIR: Why is page level information not available via BigQuery? Is it privacy, something else?
RICK_VISCOMI: Yeah, I guess it's a question of privacy. We didn't want to enumerate every web page under a website. We wanted to make it a model where if you know the URL, you can ask for it and you can get the data, but it shouldn't be possible to say, give me all of the pages under reddit.com or something, see wherever people are visiting, and get a sense of what content is being consumed on a monthly basis.
DAN_SHAPPIR: I believe that the information in CrUX, or the CrUX database, is time-based. What I actually mean by that is that you look at a certain period of time in the past. You don't necessarily see it up to date. Is that correct?
RICK_VISCOMI: The BigQuery data sets are monthly 28-day aggregations. So we will release a monthly data set on the second Tuesday of the month, and it will include the previous calendar month. So you're never really getting a real-time look on BigQuery. On PageSpeed Insights, though, you are getting a trailing 28-day aggregation, up to the most recent complete day, not necessarily today. So you could see something like how users experienced my website last week, but keep in mind that it is aggregated over 28 days. So it does kind of smooth the experience into a greater average or aggregation.
DAN_SHAPPIR: I think this data is also available in the Google search console, correct?
RICK_VISCOMI: Yeah. Search Console recently released the Core Web Vitals report. And we haven't talked about Core Web Vitals, so it might be worth defining those. Earlier this year, Google announced the Core Web Vitals program, which is a set of the most important user experience metrics that developers should be focusing on. And this year, they are Cumulative Layout Shift, or CLS, Largest Contentful Paint, or LCP, and First Input Delay, or FID. And Search Console has a report called the Core Web Vitals report that shows you, as an authenticated site owner in Search Console, which parts of your website are having issues in any of these three metrics. And then you can drill down and say, okay, I have a mobile issue with CLS, and it'll give you example URLs for you to drill down into and find out how to optimize them. It'll link you to PageSpeed Insights where you can see the Lighthouse audits and recommendations.
DAN_SHAPPIR: For those of our listeners who are interested, we actually had an episode some time ago titled "The Alphabet Soup of Performance Measurements", where I covered most of the metrics, including these three. But instead of forcing everybody to re-listen to that entire episode, could you give an explanation of what these three metrics that you mentioned are and what they mean?
RICK_VISCOMI: Yeah. I'll start with the easiest one to explain, FID. First Input Delay measures the time from when a user first attempts to interact with the page to the time when the browser is able to respond to that input. For example, you might have a website that's doing a lot of JavaScript in the background, and it's making the CPU really busy. So the user experience there will be a noticeable delay between the click and something happening. FID is only measuring the amount of time that the CPU was busy. It's not necessarily measuring the time until the event handler received the request and processed it and then did something visible on the screen. That's a bigger challenge, but we're at least capturing part of that experience with FID. LCP, Largest Contentful Paint, is a little harder to describe because something being contentful is somewhat subjective. The largest thing is at least objective. So you can imagine landing on a news webpage, and the largest thing that matters to the content would probably be the hero image at the top. And LCP is measuring how long it took from navigation to that hero image actually being painted to the screen. That's a signal to the user that the page is ready, it's ready to be consumed. And so we want to get that number as small as possible. CLS, Cumulative Layout Shift, is one of the more complex metrics, also because the API, what's called the Layout Instability API, is pretty new. So CLS takes each individual layout shift that happens. On that same news page, let's say that the hero image doesn't have a placeholder height and width. So when the page first loads, imagine there's no space between the header and the block of text, and then that hero image loads, maybe even progressively, and you can see the page kind of stutter as it gets larger. That's a layout shift, and that's pretty bad for the user experience. Imagine that they started reading already and they lost their place because the page jumped. So that kind of jittery experience is measured by CLS, because we take those individual layout shifts, we calculate a quantifiable score somewhat related to the percent of the viewport that moved, and we add up each of those layout shifts across the entire user experience. So this number is, one, kind of hard to explain because it doesn't have a unit. LCP is in terms of milliseconds, FID is in terms of milliseconds, but CLS has no unit. So we just say you have a CLS of 0.03, and it's kind of left to the developer to understand what that number means. We do say that there are thresholds that are recommended for developers to try to keep their Core Web Vitals under. And if you're under it, we call it good. If you're over it, you could be in the needs improvement category. And if you're even beyond that, we call it a poor experience. I'll try to remember each threshold off the top of my head. For LCP, I believe it's 2.5 seconds. For FID, it's 100 milliseconds. And for CLS, it's 0.1.
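For readers curious where these numbers come from, the browser APIs that LCP, CLS, and FID are built on can be observed directly in Chromium-based browsers. This is a simplified sketch, not the exact logic the web-vitals library or CrUX ships (the real implementations handle cases like background tabs and page visibility that are ignored here):

```js
// Simplified sketch of the browser APIs behind the Core Web Vitals (Chromium only).

// LCP: render time of the largest contentful element seen so far.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log('LCP candidate (ms):', last.renderTime || last.loadTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// CLS: running sum of layout-shift scores not caused by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });

// FID: delay between the first user input and the browser starting to handle it.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0];
  console.log('FID (ms):', first.processingStart - first.startTime);
  console.log('First input at (ms after navigation):', first.startTime);
}).observe({ type: 'first-input', buffered: true });
```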
DAN_SHAPPIR: Now, you mentioned at the beginning of your description that these metrics are the metrics or the top metrics for this year. Does this mean that next year there might be different metrics?
RICK_VISCOMI: Not necessarily, but we do commit to taking a fresh look at the Core Web Vitals metrics at least, or I should say, at most, on an annual basis. We won't update Core Web Vitals twice a year, for example. That's not to say that things will change every year; if we find a really good stable set of metrics, nothing will change. But we will continue to look at not only whether the metrics are measuring the right things, but whether the thresholds are fine-tuned, and whether they are set up in a way where websites are able to be successful with them. There was a really good blog post by Bryan McQuade at Google about the science behind Core Web Vitals, and he explained why the thresholds were chosen where they are. We want Core Web Vitals to actually represent a good user experience first and foremost. And we also want them to be achievable by well-performing websites. And it's pretty important that our metrics meet not only one of those criteria, but also the other. So where there's a little bit of wiggle room in the thresholds, we can look at user research, for example. I think there's an old Nielsen study that says 100 milliseconds is the threshold for somebody to perceive something as instantaneous. So for a metric like FID, we want it to feel instantaneous if we click on something on the page. So that lends itself to a really natural threshold for FID, but we do have a little bit of wiggle room because everybody's different. Everybody has their own differences in perception. Maybe it's plus or minus 50 milliseconds or 75 or something, let's say. So we want to make sure that if we do pick a threshold, it's meeting both of those criteria.
DAN_SHAPPIR: As I recall, in most of these tools, you actually show the scores that you get as the sort of a histogram, where it's kind of color-coded according to those segments that you mentioned before about being good, needs improvement, or poor, correct?
RICK_VISCOMI: That's right. Yeah, the histogram is a really useful way to look at performance. Sometimes people could be misled, maybe, into thinking that performance is a single number, especially if you're using a lab tool, maybe WebPageTest or Lighthouse, where it gives you a single number. Developers really need to keep in mind that that's a specific test configuration. It's an entirely different question whether it's actually representative of real user experiences. The way that performance is experienced by real users in the field is a distribution. There are people on fast connections, there are people on slow connections, there are people who have warm caches, cold caches, and for every single difference between people, there's a difference in experience. So you can imagine geography being a big issue. For metrics like CLS, the size of your viewport matters. It's not always about the speed of the webpage, but if you're on a tablet versus a phone, you might get different experiences because the viewport sizes are so different. So that's why performance, and user experience I should say, changes so much, and it's important to look at the distribution. Core Web Vitals have a threshold for an absolute number, like 2.5 seconds LCP is what we consider good. But then there's a question of how good is good? How many user experiences actually need to meet that threshold for the entire page experience to be considered good? Core Web Vitals settled on 75 percent. So we can say, at the 75th percentile, are users meeting this threshold? That's actually consistent for all three of the Core Web Vitals. And the histogram lends itself well to that: in a cumulative distribution function, you can imagine a line creeping closer and closer to 100% as more user experiences are included. And at the point that it crosses the 75% line, you want to make sure that that's below the good threshold. A tool like PageSpeed Insights will show it in an even easier and simplified view that we call the tri-bin histogram. You only have to worry about three different bins in that histogram. It doesn't matter whether it's 100 milliseconds or 1,200 milliseconds. What really matters is all of the experiences below the good threshold of 2.5 seconds and all the experiences above the poor threshold of, I think it's, four seconds. And so you'll see the green, yellow, and red are representing those types of experiences, the quality of experience you could say.
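To make the 75th-percentile idea concrete, here is a tiny sketch of how you might bucket a distribution of LCP samples the way the tri-bin histogram does and check the p75 against the "good" threshold. The 2.5-second and 4-second thresholds are the published Core Web Vitals values; the sample data and the simple nearest-rank percentile are made up for illustration.

```js
// Sketch: classify LCP samples into good / needs improvement / poor
// and check whether the 75th percentile clears the "good" threshold.
const GOOD_MS = 2500;
const POOR_MS = 4000;

function assessLcp(samplesMs) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Simple nearest-rank approximation of the 75th percentile.
  const p75 = sorted[Math.floor(0.75 * (sorted.length - 1))];

  const bins = { good: 0, needsImprovement: 0, poor: 0 };
  for (const v of sorted) {
    if (v <= GOOD_MS) bins.good++;
    else if (v <= POOR_MS) bins.needsImprovement++;
    else bins.poor++;
  }
  return { p75, passes: p75 <= GOOD_MS, bins };
}

// Made-up field samples, in milliseconds.
console.log(assessLcp([900, 1200, 1800, 2100, 2600, 3200, 4100, 5200]));
```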
DAN_SHAPPIR: I think it's really interesting in PageSpeed Insights, or PSI, that the synthetic score that is computed off of virtual machines somewhere in the Google Cloud is placed alongside these field metrics. So you get a simultaneous view of both: you might say a normalized type of score that you can compare to, let's say, other variations of your website or other websites, but you can also see the actual field data for your website. So you get both these views together, and they don't always match. Because I think, for example on mobile, PageSpeed Insights, or Lighthouse, actually simulates what might be considered these days a fairly low-end scenario. I think it's the Moto G4 on a 3G network. And it might be that the majority of, let's say if you're a website in the United States, most of your users might be on an iPhone, for example, which would have a much better experience than that. Now, whether or not you should ignore the Lighthouse score is up to you, but that kind of explains a discrepancy that you might see, correct?
RICK_VISCOMI: Yeah, and this is actually a source of pain and confusion for a lot of developers, the appearance of conflict between lab and field. And it's important to communicate to developers what the differences between the tools are. I like to explain it as: lab is describing the opportunities for improvement, and field is describing the quality of experience. So when Lighthouse, sorry, when PageSpeed Insights gives you a lab score of 44, that's saying that you can improve certain things about the way that the page is built in order to improve your performance. But that's not necessarily saying that users are having poor experiences. And sometimes those two are conflated. And it could lead to confusion if you were to say, this page is slow because it has a low Lighthouse score. Whether that's true really depends on how users are actually experiencing it. And you might have pages that are built identically, but experienced differently. So that's why it's important to look at both.
DAN_SHAPPIR: I also think that you were talking about the opportunity that the synthetic score represents. A possible reason for the difference might be that people with lower-end devices just don't use your website because they find it too slow. So the synthetic test, if it shows you a low score, that actually represents potentially an opportunity to increase your audience that you might not see just by looking at your actual real user measurements.
RICK_VISCOMI: 100%. Yeah. I think that's one of the real benefits of synthetic testing or lab testing: using a tool like Lighthouse with slow emulation, or webpagetest.org/easy, which will default to low-end conditions. That's a really effective way of simulating what it would be like on a low-end device from, let's say, an emerging market, so that you can understand what developer, or I should say user, pain points might be if someone like that were to access your website. And that's a view that people who are privileged in tech might not necessarily see often. If you're accessing your own website on your own personal MacBook Pro on your fiber optic connection, your experience is not necessarily reflective of your users. So it's important to not only look at your field data, but also simulate what it's like for the users who do experience the most pain.
DAN_SHAPPIR: I do think that it's not always accurate to think of it just as a third-world problem. For example, a friend of the show, Kyle Simpson, likes to tell that whenever he travels overseas, and at least before the pandemic he used to travel quite a bit, giving lectures all over the world, his mobile connection, which is usually excellent when he's in the States, falls back to something that's kind of equivalent to 2G. And as a result, he experiences very poor connectivity and very poor performance, very often in situations when he actually needs it the most, because, for example, he wants to get updated flight information and he needs to access an airline's website, and like I said, he has the equivalent of a 2G connection, and the website for some reason wants to download a 4-megabyte background image that he doesn't really even care about, and that can block his entire connection or eat through his entire data limits. So these are definitely things that you need to take into account, and that synthetic measurements can definitely help you with.
RICK_VISCOMI: Absolutely. I like to think of performance as a type of accessibility. And in the web accessibility space, well, I'm sure it applies to all accessibility, but I've heard it in the web accessibility space, there's a concept of situational disability. You might not have paralysis or you might not be blind, but there are situations that you might be in where your visual or motor abilities are impaired. You can imagine if you're holding a baby, you are effectively only able to use one arm, or if you're driving, your attention is divided and you don't have as much time to look at the screen. So I do think that there's a parallel there in performance.
DAN_SHAPPIR: Don't look at the screen while you're driving. Please don't.
RICK_VISCOMI: Android Auto, for example, will simplify and enlarge the UI on the screen to make it easier to see what you're doing. It actually prevents you from typing on the on-screen keyboard. So there are ways that technology adapts to the situation you're in to make things easier and accommodate any type of situational disability. I would say on the performance side, it's absolutely true that you might not necessarily have systemic performance problems as a user, but depending on what you're doing, if I'm commuting on a train, for example, I might be going from tower to tower and my performance could be degraded just situationally. Maybe that's a situation where it's even more important for me to have offline-friendly experiences, for the website to be built as a PWA with service worker caching and all that.
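Since Rick mentions offline-friendly experiences via a service worker, here is a bare-bones cache-first sketch of the idea. The file names and the cache-first strategy are illustrative assumptions; a production PWA would typically use something like Workbox and a more careful caching strategy.

```js
// sw.js - a minimal cache-first service worker sketch (illustrative only).
const CACHE = 'offline-v1';
const PRECACHE = ['/', '/styles.css', '/app.js', '/offline.html']; // assumed URLs

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve from cache when possible; fall back to the network,
      // and to a static offline page if both fail.
      return (
        cached ||
        fetch(event.request).catch(() => caches.match('/offline.html'))
      );
    })
  );
});
```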
AJ_O’NEAL: So one thing that I've wondered about, and I wonder if anybody has found a way to measure this realistically, because it seems kind of impossible: a lot of times people like to tout how users are spending more time on their site. And sometimes I wonder how much of that extra time is because you did something wrong and they're trying to figure out what to do, versus they're actually engaging with the site more.
RICK_VISCOMI: Yeah, that's a good question. It reminds me of Google Search. And there's kind of a conundrum of, well, if a user takes a long time on Google Search, does that mean that they haven't found the answer they're looking for? So it might actually be the inverse, where the less time they spend, the better the experience is, because they found their answer and left. Abandonment is a strange signal.
AJ_O’NEAL: Then how does that work for your advertising revenue? I guess it's all worth it. I'm not on the ads side, so it doesn't matter.
DAN_SHAPPIR: But since you brought up Google Search, I'm going to ask you a question that you'll probably refuse to answer, but I'll ask it anyway. Or maybe not. Maybe you will. I don't know. So, does the CrUX data, or alternatively the Lighthouse data, impact your website ranking?
RICK_VISCOMI: I can't comment on ranking directly. I'll just point you to the search blog post that talks about page experience and Core Web Vitals.
DAN_SHAPPIR: Nice save. So, I'll switch to a different topic. Sometimes when I look at the PageSpeed Insights values for a specific website, I won't actually see any field data at all. That whole area is just blank. Why is that?
RICK_VISCOMI: Yeah. So, that could happen when there's insufficient data. Which means we don't have enough data points from real user experiences in order to make a confident assessment of how that page is being experienced. We don't give away the exact threshold of the number of unique users, but there is a threshold and we try to make sure that it's low enough to be inclusive of a lot of websites that are less popular, but also high enough that we can have statistical confidence in the results.
DAN_SHAPPIR: So basically what you're saying is that there is a certain threshold, you know, that you're not saying what it is, but that if your website gets below that threshold number of visitors within a certain period of time, then you just won't get CrUX data for your website.
RICK_VISCOMI: That's right. And that's one of the reasons why we have a 28-day aggregation as opposed to a daily or weekly. Because we want to make that window as wide as possible to capture a whole wealth of samples so that we can bring them all together and come up with that histogram that then turns into the assessments. That same threshold is also applied at the website level, and you can imagine a website has more data points than an individual page does. So your experience on PageSpeed Insights might be there is no URL-level data, but there is origin-level data. So you can still at least get something there in a way that is still helpful for users.
DAN_SHAPPIR: By the way, going back for a second to AJ's question or comment about trying to measure engagement or lack thereof: FID, in addition to measuring the delay, and I don't know if CrUX collects this information, but the actual Web Performance API that measures it, and I assume CrUX is built on top of the Web Performance API, which by the way I actually talked about in another past episode, and I'll post a link to that as well, but before I get too sidetracked. So another measurement that FID actually provides is when it actually happened relative to navigation start. So one thing that you might look at is whether or not your particular session has FID at all. So you might look at the percentage of sessions that even have FID versus sessions that don't, and that's kind of like a bounce rate. And the other thing that you might look at is when the first interaction actually happens. Not the delay, but when it actually happened. And if its value is really high, that might indicate that people are finding it problematic to actually locate what they want to interact with on your website. And in this context, it's worth mentioning that scrolling or zooming don't actually count as interaction.
RICK_VISCOMI: Going back to what you said earlier about APIs, it's true that one of the core principles of CrUX is that it's built on web platform APIs. So when we wanted to introduce Cumulative Layout Shift, we made sure that there was a Layout Instability API available to all developers. That also plays into the idea that CrUX isn't a replacement for your own first-party RUM solution. We encourage all websites to have their own field data source and to look at their own field data, because it has a lot more insight about their users. It's not limited to just Chrome users who opt in; every other OS and browser is included. You can corroborate the data to make sure that they're aligned in that way, but you need to make sure that those RUM vendors are still able to measure the same metrics that we're doing, and in a way that's similar. So for FID, one of the things that's really interesting: when we released FID, I was looking at the distributions, and I was zeroing in on the websites that had the best FID experiences. And I found that some of them actually had poor First Contentful Paint. So we haven't talked about FCP yet, but it's kind of the predecessor to LCP. And I found that some websites had zero or very small FID, but very high FCP. I think this is illuminating a type of user experience that's still pretty bad, even if FID doesn't highlight it, where let's say the page takes a very long time to load and the user's looking at a blank screen. There's nothing to click on, so I don't think we can expect the user to try to click on anything. And even if they did, there's no JavaScript or anything to respond to it. So for slow-loading experiences like that, one metric will say this is a good experience, and another metric might say it's a poor experience. And that's why it's so important to have Core Web Vitals, where you look at all of the user experience metrics at the same time and make a qualitative decision about whether this was overall a good experience.
Are you ready for Core Web Vitals? Fortunately, Raygun can help. These modern performance metrics play an important role in determining the health of your website, which is why Raygun has baked them directly into their real user monitoring tools. Now you can see how your Core Web Vitals scores are trending across your entire website in real time and drill into individual pages to focus your efforts on the biggest performance gains. Unlike traditional tools, Raygun surfaces real user data, not synthetic, giving you greater insights and control. Filter your scores by time frame, browser, device, geolocation, whatever matters to you most. And what makes Raygun truly unique is the level of detail it provides so you can take action. Quickly identify and resolve front-end performance issues with full waterfall breakdowns, user session data, instance-level diagnostics of every page request, and a whole lot more. Visit raygun.com today and take control of your Core Web Vitals. Plans start from as little as $8 per month. That's raygun.com for your free 14-day trial.
DAN_SHAPPIR: If I understand correctly, you're saying that if a website takes a long time to load, and it's, let's say, on a desktop rather than a mobile device, that person might just be tapping on their mouse out of boredom, and that counts as a really good FID because there's no JavaScript yet. So they're just tapping on empty space, and that counts as a first interaction. It's really fast because there's nothing to do.
RICK_VISCOMI: It depends on what the main thread is doing. I'm not totally sure, so don't quote me on this, but if the user tries to interact with the page during the blank-screen, CPU-is-pegged phase, that might actually be a slow FID, because it will take the browser and website a long time to be able to respond to it. I think the user behavior in those cases is more like, there's a loading spinner, or, well, I guess a loading spinner would count as a first contentful paint. But let's just say there's a blank screen and it's taking a very long time to render anything. I think the user behavior is that they won't even try to interact with it, because it's a very clear indicator that nothing will happen. Nothing is ready. There's nothing to do or interact with. So the start time, to your point of having multiple metrics from the API, the first input time, not the delay itself, happens so late in the page. And that ties into something else that I wanted to bring up, which is the importance of calibrating your lab tests based on field data. So if you have a very long first input time, maybe that's something that we could calibrate in the lab testing. Maybe we can synthetically simulate a user click, so that after eight seconds, or whenever the first paint happens, we could simulate a user clicking on some part of the page and get some of those first-input-delay-like metrics in the lab. Because one thing we haven't mentioned yet is that FID doesn't actually exist in Lighthouse today. We have proxy metrics like Total Blocking Time or Time to Interactive. And those proxy metrics measure the amount of time that the CPU is spending doing scripting tasks, but they're not necessarily measuring anything related to the user experience or user interaction, because in a simulated environment, there is no user. That's another source of this discrepancy between lab and field, CLS being another one, because there was no user sitting in front of the screen scrolling the page down, incurring more layout shifts. You might actually find that CLS in the lab is a lot better, and less realistic.
DAN_SHAPPIR: You mentioned a few times that Chrome users actually opt in to have CrUX data collected from their sessions. I don't actually recall whether or not I did that. Where does it happen? Where do you opt in or opt out, and how does it work?
RICK_VISCOMI: I believe it's when you are first installing Chrome, but there is also a setting under chrome://settings where you can go in and check. I think the name of the setting is allow anonymous usage statistics.
DAN_SHAPPIR: And do you have any idea what percentage of Chrome users actually opt in or not?
RICK_VISCOMI: No, I don't have data on that.
DAN_SHAPPIR: You also mentioned that real user measurements can vary a lot by things like the device type and geography. Does CrUX allow you to segment on those sorts of things?
RICK_VISCOMI: Yeah, CrUX includes a few dimensions to help with that and measure apples to apples, in a way. Two of them that are built into the BigQuery data set are form factor and effective connection type. Form factor is based on the user agent string, and we tell you whether it's a tablet, phone, or desktop form factor. It doesn't go into which type of device or operating system or anything. It's more of a coarse indicator. As for effective connection type, or ECT, those are given in terms of what you'd recognize as mobile network connection names. So 4G, 3G, 2G, slow 2G, and also offline. That doesn't necessarily mean that the user is on a mobile network. It's just a way of communicating what the performance is equivalent to. That's what makes it an effective connection type, as opposed to just the actual connection type. Most of the experiences that we see are categorized as 4G. The bar for 4G is actually low, so 90-something percent of experiences do qualify as that. And the other thing that you touched on was geography. So we do have country-level data. We have separate tables, not actually in the same table as the global one, so that you can break down and see, in a specific country, what are the origins or websites that users are visiting, and how are they experiencing those websites. And it is actually interesting to see countries with more or fewer origins in them and how they're being experienced. The infrastructure in different countries actually makes it apparent in the performance data how users' experiences differ.
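To illustrate those dimensions, here is a sketch of a CrUX BigQuery query that splits one origin's fast-ish FCP share by form factor and effective connection type, using a country-level table. The dataset and column names follow the public CrUX schema from memory (see the CrUX cookbook for the canonical versions), and the origin is just an example; run it with the same @google-cloud/bigquery client shown earlier.

```js
// Sketch: share of FCP experiences under 1.8 s for one origin, split by
// form factor and effective connection type, from a CrUX country dataset.
// Dataset/column names are assumptions - verify against the CrUX docs.
const query = `
  SELECT
    form_factor.name AS form_factor,
    effective_connection_type.name AS ect,
    ROUND(SUM(IF(fcp.start < 1800, fcp.density, 0)) / SUM(fcp.density), 3) AS fast_fcp
  FROM \`chrome-ux-report.country_us.202010\`,
    UNNEST(first_contentful_paint.histogram.bin) AS fcp
  WHERE origin = 'https://example.com'
  GROUP BY form_factor, ect
  ORDER BY form_factor, ect
`;
// const [rows] = await new BigQuery().query({ query });
```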
DAN_SHAPPIR: You mentioned that CrUX is not necessarily a replacement for having your own built-in RUM collection system. And RUM actually means real user measurements, for those who are wondering what the acronym stands for. It's collecting the data from real user sessions in the field. So we at Wix have found, I think, three main reasons to use our own RUM solution over just using CrUX. We kind of use both. You mentioned one, which is collecting data for browsers that aren't Chrome. Although it's worth noting in this context that many of these other browsers don't actually have the web performance APIs, or at least not all of them, that allow you to get all the information that you would get for Chrome using CrUX. I think that, for example, LCP and CLS are currently only really available in the Chromium-based browsers like Chrome and the new Edge or Brave. So that's one reason. Another reason is that usually you can get finer-grained segmentation with your own RUM solution. For example, not just going by, let's say, country, but also being able to go maybe by city or by ISP. And another interesting thing is that you talked about how you aggregate data over a certain period of time. That can be limiting if I want to make a change and I know that I have sufficient traffic and I want to see very quickly if it's having a positive or negative impact. If my data is aggregated over a lengthy period of time, that might be difficult or maybe not even possible. With your own RUM solution, you might be able to look even at hourly levels of performance, again, assuming you have sufficient traffic.
RICK_VISCOMI: That's really important. The monitoring aspect is something we attempt to do with CrUX data. I built the CrUX dashboard, where you can see month over month how the performance is changing. But it's not actionable to the point where you're going to catch a regression and go fix it. I don't want people to use it that way, because it's just not fast enough. The fact that we release it on the second Tuesday of the month even means that you're not going to see things until maybe a month later. That's not good for your users. You really do need to have your own first-party data that you can hook monitoring and alerting onto, to be able to get that confidence and know that your users are having consistently good experiences.
DAN_SHAPPIR: Yeah. Even in an ideal world, it's actually something that we do at Wix. We actually associate performance measurements with running experiments. So we are able to do A/B tests on certain changes and then compare the performance of these two or three or even four, however many scenarios are relevant, to see which one is the winner and whether a particular change is beneficial or not. So yeah, these are all definitely useful things. But I definitely think that if, let's say, you're starting out, or you don't have the budget, or for whatever reason, then having CrUX is definitely superior to not having anything at all.
RICK_VISCOMI: Absolutely.
DAN_SHAPPIR: And also, having CrUX makes it possible for you to compare yourself to your competition, because you have RUM data for yourselves, but you don't have RUM data for your competitors. But you definitely do have CrUX data for all these sites that are in the HTTP Archive, which again, if you're sufficiently successful and so are your competitors, then you should be in that database.
RICK_VISCOMI: Absolutely. That's one of the biggest value propositions early on with CrUX, to get people to take notice of it: competitive analysis. I found when I was working at a small shop, a very small website, it was a startup and they had big competitors, and it was hard to get them to focus on or care about web performance. One of the things that was really motivating was to make a WebPageTest video comparison of all the pages loading next to each other, and ours, lo and behold, was the slowest. And wouldn't you know, after that, they were like, okay, make us faster, at least as fast as our competition. Granted, that was a lab test, and you can configure the test in a way that may or may not be representative of real users. But the great thing about CrUX is that it's a shared source of truth of real user experiences. It's all the same metrics, and it's accumulating everybody's experience. So there isn't that same bias that you could have in the lab. And using that data, you can create these really compelling case studies of the differences in performance across competitors. I've seen some websites also doing, not necessarily an A/B test, but changing the infrastructure of their website and then looking at the change over time. That's not controlling for all the variables, but it is still interesting to see in a public data set how that change plays out. I was sitting in a conference hall, I was going to present later that day, and one of the presenters talked about a particular website and how they made a change and saw a big improvement. I was able to look them up in the CrUX dashboard and generate a time series of performance. And by the time my talk came up, I showed a screenshot of that performance actually getting better. So I could corroborate what that presenter was saying with actual data. And the great thing is that I'm not given privileged access to this website's first-party RUM. This isn't some secret Google data. This is public data that's available to the entire web community. And people like performance consultants find this data set really useful, because you can identify websites that have real performance problems. And you can create a little pitch deck and say, I see that you have performance problems in x, y, and z, and running this lab data, I can see that these are the opportunities for improvement that you have. And that's a really compelling way to pitch your services to somebody, because you can come up with a whole plan without ever having to get any privileged first-party access.
DAN_SHAPPIR: Just imagine what would have happened if instead of corroborating his assertion, you found that he was actually mistaken.
RICK_VISCOMI: Yeah, that would be bad.
DAN_SHAPPIR: Yeah. So one more question that I had. I'm not sure how many of our listeners for this episode are so familiar with CrUX, but I assume that probably a majority of them have at least heard of Google Analytics, if not used it. How does CrUX compare to Google Analytics? Is it the same data? Is it a similar presentation of the data? What's the difference between the two?
STEVE_EDWARDS: Yeah, I think that's the real crux of the problem there, Dan, if I do say so myself.
RICK_VISCOMI: Pun intended. Completely.
DAN_SHAPPIR: With Steve, puns are always intended.
RICK_VISCOMI: Google Analytics and CrUX are different data sources. GA is a first-party RUM tool. And it's beyond performance, really. It's about conversions and things. So the data sources are different, and one is a public data set, the other is a private data set. They can kind of, in a way, play together. There's a web-vitals JS library that enables you to hook Core Web Vitals up to your existing RUM tool. And Google Analytics is one example of a RUM tool where you can beacon your user experiences from the website back to the Google Analytics server, aggregate them, and get some insights and reports from it that way. On the reporting side, there is a difference. The whole CrUX pipeline is meant to be publicly consumed in dashboards and databases and things. And GA has an API. It has its web interface. It also has a useful data connector for Data Studio, similar to the CrUX dashboard. So there are ways to get the data to play nicely together. But they are different sources. I still want to emphasize that. And I don't believe, to this day, that Google Analytics supports Core Web Vitals by default. You would still need to use a tool like web-vitals JS to get that data in.
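The web-vitals/Google Analytics hookup Rick mentions looks roughly like the following. This mirrors the pattern from the library's documentation around the time of recording (v1-era getCLS/getFID/getLCP names and the analytics.js ga() queue); newer versions of both the library and GA use different function names, so treat the specifics as assumptions.

```js
// Sketch: beacon Core Web Vitals from the page to Google Analytics
// using the web-vitals library (v1-era API) and analytics.js.
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToGoogleAnalytics({ name, delta, id }) {
  // ga() is the analytics.js command queue, assumed to already be on the page.
  ga('send', 'event', {
    eventCategory: 'Web Vitals',
    eventAction: name,
    // CLS is unitless and tiny, so scale it up to keep the integer field useful.
    eventValue: Math.round(name === 'CLS' ? delta * 1000 : delta),
    eventLabel: id,       // groups multiple beacons from the same page load
    nonInteraction: true, // don't affect bounce rate
  });
}

getCLS(sendToGoogleAnalytics);
getFID(sendToGoogleAnalytics);
getLCP(sendToGoogleAnalytics);
```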
DAN_SHAPPIR: CrUX does support data beyond Core Web Vitals, though.
RICK_VISCOMI: Yes, that's true. One of the ones that I'm most excited about is TTFB, Time to First Byte. The metric is unfortunately named, because it sometimes gets confused with a different, similarly named metric. It measures from a user clicking a link to the first byte of that response arriving on that user's machine. It ignores all the front-end time of parsing and evaluating the JavaScript and setting up the HTML, turning it into DOM and all that. But it's a really useful way, especially for the loading performance metrics like LCP and FCP, of looking at the amount of time spent on the backend. Last year? Yeah, in 2019, we added TTFB to the CrUX dataset. And I was at a WordPress conference and I was showing some experimental data where I had grouped performance by host. So I would look at HTTP Archive to identify who the hosts were for a particular website. That's part of the how-it's-built aspect. And I would join that with the Chrome User Experience Report dataset to understand how those hosts are actually being experienced, grouping everything together in a way that would tell us how users are experiencing back-end performance on these hosts. And I got some people coming up to me like, you can't do that. That's not fair. Like, what about users in different parts of the world? And what if the website is served from a particular geographic location? I said that would be true for a lab-based TTFB research study. This is using TTFB from the field. These are actual backend times that users sit at their computer and actually experience. And once they realized that, they were like, this is actually really useful, because this is something that's never been possible in the whole web performance or web hosting industry before. The only way to know how your hosting competitors are performing was maybe to run your own site on their host and ping it from some data center somewhere or some synthetic location. But having real user data actually unlocks a lot of those interesting opportunities to understand the state of the web in a way that's actually, I keep coming back to this, experienced by real users, because that's really what matters. That's why we are focusing on performance: we want to make things better for users, not just some number on paper corresponding to a computer in Patrick Meenan's basement on the WebPageTest infrastructure, on Chrome, at a certain time of day. We want to look at the holistic experience that everyone has, and a data set like CrUX will account for that.
STEVE_EDWARDS: Yeah, when you talked about time to first bite, the first thing that came to my mind is my son counting the time till dinner.
RICK_VISCOMI: That's true. B-Y-T-E bite.
STEVE_EDWARDS: Yeah. Oh, sorry.
DAN_SHAPPIR: I think you actually set up a website. What is it? Is my host fast yet? Which actually presents this data that you just mentioned.
RICK_VISCOMI: Yes, that's true. So I have kind of a stack ranking of hosts. It's a challenge to maintain, because identifying who the host is requires us to know information about how hosts are identified, and not all hosts want to be identified. There are some security implications of a website promoting the fact that it's using a certain host, especially if there is a known vulnerability with that host or its specific server infrastructure. So you can imagine websites being more secretive with that information. But for the hosts that we do have information on, I've actually seen hosts contribute fingerprints that we can use to identify whether a website is hosted by them. And whether the performance is good or bad, I think it's really good and encouraging to see that hosts are on board with this type of information. It's done in the spirit of web transparency, and there's no judgment. We're not naming and shaming. We want to make user experiences better. And going back to the first thing we said at the start, you can't improve what you don't measure. So first we need to know how good or bad the experiences are. And then we can focus on what we can do to make it better. And as a follow-up to trying to make things better, we actually go back and see what the differences were. Are we making progress on things? And if not, then we need to change course.
DAN_SHAPPIR: In this context, I'll just say that I'm totally on board with you talking about web transparency, and I'm glad that so is my employer. I actually updated the code that identifies Wix sites to ensure that we are properly identified.
RICK_VISCOMI: Yeah, thank you for doing that.
DAN_SHAPPIR: Well, it's my pleasure. Before we finish and move to picks, is there anything that you can or would like to say about future plans for Core Web Vitals and for CrUX?
RICK_VISCOMI: There's not a lot that's been announced for 2021 and beyond for Core Web Vitals. I can talk about things that I'm excited to see from the CrUX dataset, though. We have an API that launched earlier this year, and you can think of it like the successor to the field data in PageSpeed Insights. Actually, today, PageSpeed Insights is based on a data source that's similar to CrUX, but not exactly the same. So users of both APIs have noticed discrepancies between them, and that's caused some confusion. I'm looking forward to the PageSpeed Insights team migrating to the CrUX API for tooling consistency. The other thing is that the CrUX API is a little bit more limited, in a way. It just has the four metrics that are available in PageSpeed Insights: FCP, LCP, FID, and CLS. But the CrUX dataset, like you mentioned, has more metrics. We talked about TTFB. There are also those legacy metrics, like onload and DOMContentLoaded. There's even a user experience metric on the acceptance rates of notification permission prompts, a measure of user experience, but not necessarily performance. I'm looking forward to more metrics like that that get into user experience. I'm also looking forward to the CrUX API providing more dimensions, like country level. It right now gives you the trailing 28 days, but it doesn't necessarily give you historical data. One of the strengths of BigQuery and tools like the CrUX dashboard is being able to see how your performance is changing over time. If you wanted to see last month's performance for a specific URL, you would have had to save those API results. There are workarounds. For example, I've released an Apps Script where you can hook a Google spreadsheet into the API, ping the API on a regular basis automatically, save those results to a spreadsheet, and generate graphs and dashboards and things. So you can track those things over time, but only from the point you start asking for it.
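For reference, querying the CrUX API Rick mentions looks roughly like this. The endpoint and request shape match the public API documentation as of the recording, but double-check the field names, use your own API key, and note that the origin below is just an example.

```js
// Sketch: query the CrUX API for origin-level Core Web Vitals.
// Requires an API key from the Google Cloud console (placeholder below).
const API_KEY = 'YOUR_API_KEY';
const endpoint =
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`;

async function queryCrux(origin) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin, formFactor: 'PHONE' }),
  });
  const { record } = await res.json();

  // Each metric comes back as a three-bin histogram plus a p75 percentile.
  const lcp = record.metrics.largest_contentful_paint;
  console.log('LCP p75 (ms):', lcp.percentiles.p75);
  console.log('LCP bins:', lcp.histogram);
}

queryCrux('https://web.dev').catch(console.error);
```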
DAN_SHAPPIR: Cool. Does anybody else want to bring anything up? And you yourself, Rick, is there anything that we neglected to cover?
RICK_VISCOMI: I think that's everything. We covered a lot.
DAN_SHAPPIR: Yes, it was definitely very, very informative.
Hey folks, if you love this podcast and would like to support the show or if you wish you could listen without the sponsorship messages, then you're in luck. We're setting up new premium podcast feeds where you can get all of the episodes released after Christmas 2020 without the ads. Signing up will help us pay for editing and production, and you can go sign up at devchat.tv slash premium.
DAN_SHAPPIR: Well then I guess we can move to picks, the favorite part of the show. Let's see, who should I start with? Well, let's go ladies first. Amy, how about you? Can you share some picks with us?
AIMEE_KNIGHT: Yep. Let me see though, because I'm not quite ready yet.
DAN_SHAPPIR: Should I switch over to somebody else?
AIMEE_KNIGHT: Yeah. Give me a second.
STEVE_EDWARDS: Yeah. I'll jump in and save her again.
DAN_SHAPPIR: What a gentleman.
AIMEE_KNIGHT: Thank you. Thank you, Steve. Thank you very much.
STEVE_EDWARDS: So I got an email today about a survey, the State of CSS, which is one of those "state of" surveys. I always find them really interesting. They're obviously not, as has been discussed ad nauseam, perfectly scientific, randomized samples of people's opinions, but it's just a good way to gather information from developers on what they're using, what they're not using, what they like, what they don't like, and so on. So I'll put the link in the show notes. It's really pretty straightforward: StateOfCSS.com. And for what it's worth, as part of the email, they're also asking for feedback on nailing down questions for the State of JavaScript survey that they'll be putting out as well. So if, for instance, you think that previous surveys haven't covered certain topics of JavaScript and you want them to ask questions about them, this would be a chance to provide your input before the survey is actually put out.
DAN_SHAPPIR: Cool, AJ, how about you? We'll give Amy a bit more time to prepare. You always have good picks.
AJ_O’NEAL: If I can find my unmute button. There we are. Okay, so the first thing I'm going to pick is SendGrid, and I'm picking SendGrid because I've used Mailgun for the longest time. Their slogan was "transactional email," which would make you think that it's instantaneous, like if you need a transaction to happen and you need to complete it. But the free tier just doesn't work for that anymore. If you need to do something like log in, or you need the user to get the email within a matter of seconds, I've just found that Mailgun is not cutting it. So I switched to SendGrid for a particular application, and SendGrid delivers the email instantaneously. SendGrid is also owned by Twilio. Twilio now owns Authy and SendGrid and maybe a couple of other things, and so by virtue of the nature of Twilio, I have a little bit higher confidence that the business model of SendGrid is not going to change in such a way that instantaneous emails become less important to them for their base user. So I'm picking SendGrid. I'm also picking, what is this, Tuscan whole milk, 128 ounces. This is one of those Amazon reviews that's just ridiculous. Did I mention this before? I think this came out before Amazon was doing groceries, and even the idea of two-day shipping on milk is just ridiculous.
STEVE_EDWARDS: Was this what the Tusken Raiders from Star Wars drank? Was that where it comes from or is this something else?
AJ_O’NEAL: That's probably what it is. That's probably what it is, but it's not blue. So it couldn't be. And the top critical review is this is a fine milk, but the product line appears to be limited in available colors, a really fine white.
STEVE_EDWARDS: What colors are milk? Always important to know.
AJ_O’NEAL: Okay. So another thing I'm going to pick is 12factor.net. Now, people take this stuff too far, so I have to preface that: it's not a religion, folks. Or it is a religion, but I don't think you should treat it as one. I think you should take the principles of it that are good and use them in your life and in your apps to enhance them, but not bow down to the twelve-factor god, because ultimately that's an imaginary god that's made up. But I think it is worth looking at 12factor.net, looking at their guidelines and considering them as guidelines. Generally speaking, following these principles of separation of concerns for configuration and deployment and things like that is going to lead you to happiness. So there's that. And then lastly, there's a tool called fzf, which is a fuzzy finder. You can integrate it with Vim, but it works well on its own, and you can integrate it with other things as well. Basically it's like find: you just open up your terminal, type fzf, and then you start typing and it fuzzy-finds while it searches. So you can go into a folder and it's like a live-updating search. It's not something I use every day, but it comes in handy when it comes in handy. And I've got it up for easy installation on webinstall.dev/fzf, along with, I mean, it's not really that much of a cheat sheet because it's an extremely simple tool, but just a couple of ideas of how you might use it.
DAN_SHAPPIR: Amy, we're, we've circled back to you.
AIMEE_KNIGHT: Okay. Sorry about that. Yep. So some of these picks are going to come on the fly for me. Because AJ just picked SendGrid, I have to pick SparkPost, which is like the competitor to SendGrid, because SparkPost is where I used to work. It was my first job out of coding bootcamp, and so I'm eternally grateful to the people I worked with there and everything I learned, and I honestly believe they have a really good product too. So take a look at SparkPost too. But my action-
AJ_O’NEAL: I want to know, like, what- From the name, it doesn't sound like SendGrid. It sounds like it's marketing emails or something, which maybe SendGrid is too. But what's the pitch? Why would I pick this over SendGrid?
AIMEE_KNIGHT: I mean, okay, it's been so long since I've been there that I'm probably not the best one to answer that. I know that when I was there, they were winning a good bit of business away from SendGrid, some of the really big senders. When I was there, anytime you got a transactional email from Facebook, Pinterest, or Twitter, it was them. I'm not recalling who the other senders are. This was five, six, seven years ago, so I don't remember the numbers, but they were definitely like the largest sender of transactional emails.
AJ_O’NEAL: I have never heard of them. That blows my mind.
AIMEE_KNIGHT: They used to be called Message Systems; the on-prem offering was Message Systems and the cloud offering was SparkPost. I had a lot of fun working on all the code there. It's pretty awesome. I learned a ton. But the other thing I was going to pick: I was going through my notes and my stars on GitHub recently, but honestly, based on the conversation we were having, I want to pick Qwiklabs if you're interested in digging into BigQuery. Rick could probably get into it more; he probably knows way more about it than I do from the internal side. Qwiklabs is how I've learned a lot of the different GCP services, and they have a bunch of different BigQuery ones. It's basically a way to spin up all these different services and play around with them. And typically you won't incur costs, or if you do, you'll get a credit to cover it the first time or so.
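As a rough idea of what playing with the CrUX data in BigQuery can look like, here is a sketch using the Node.js BigQuery client to pull one origin's first contentful paint histogram. The table name follows the public chrome-ux-report.all.YYYYMM convention and the query shape is a common pattern for that dataset, but the specific month and origin are placeholders, and Google Cloud credentials are assumed to be configured already.

```typescript
// Minimal sketch: query the public CrUX dataset in BigQuery for one origin's
// first contentful paint histogram. Assumes credentials are configured
// (e.g. via GOOGLE_APPLICATION_CREDENTIALS); the month in the table name is a placeholder.
import { BigQuery } from "@google-cloud/bigquery";

async function fcpHistogram(origin: string): Promise<void> {
  const bigquery = new BigQuery();
  const query = `
    SELECT bin.start, SUM(bin.density) AS density
    FROM \`chrome-ux-report.all.202010\`,
      UNNEST(first_contentful_paint.histogram.bin) AS bin
    WHERE origin = @origin
    GROUP BY bin.start
    ORDER BY bin.start`;
  const [rows] = await bigquery.query({ query, params: { origin } });
  // Each row is one histogram bin: the FCP time bucket and the share of page loads in it.
  for (const row of rows) {
    console.log(`${row.start}ms+: ${(row.density * 100).toFixed(2)}% of page loads`);
  }
}

fcpHistogram("https://example.com").catch(console.error);
```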
RICK_VISCOMI: That's cool. I actually haven't heard of them.
AIMEE_KNIGHT: Oh. OK, well, highly recommend Qwiklabs.
DAN_SHAPPIR: OK, so now it's my turn. Since AJ brought up interesting reviews on Amazon, there's one I have to mention. It's maybe slightly not safe for work, so if anybody's concerned about that, they can jump forward a few seconds in this podcast. LG a while back released a really large screen TV that they were selling for over $90,000 on Amazon, and that got some interesting reviews. I'll just read the first paragraph out of this one: "My wife and I bought this after selling our daughter Amanda into white slavery. It's missing the remote, but oh well, for 20k off, I can afford the universal, right? The picture is amazing. I've never seen a world with such clarity. Amanda, if you're reading this, hang in there, honey. We'll see you in a year."
AIMEE_KNIGHT: That's terrible.
RICK_VISCOMI: Oh my gosh.
DAN_SHAPPIR: And it keeps on going. It's just an amazing review. I can't read all of it, so I'll just post the link. I highly recommend it if you're looking for some laughs. My other pick is a non-technical one. I actually kind of mentioned this at the beginning of the show: the fact that in Israel, daylight savings is over, and I really hate that. Ideally, I would wish for daylight savings to last the whole year round. It's kind of what they do, I think, in some countries; I think in Spain they do it. And it's really amazing because you have daylight really late into the evening, and I really love that. It's really annoying when it gets dark around 5 p.m. for no good reason. So that would be my other pick: a suggestion to those who control such things to maybe just keep daylight savings around all year round. Rick, how about you? Do you have any picks for us?
RICK_VISCOMI: Yeah, the talk of Amazon reviews reminds me of one that's really funny. There was uranium ore for sale. I don't remember the exact numbers and the science behind it, but it was really funny. It was like, "I opened my order of uranium ore like 65 million years after it arrived and I was surprised to find that half of it was missing," and it was a one-star review because of the half-life of uranium.
DAN_SHAPPIR: Yeah, yeah, yeah, yeah. We're all science nerds here.
STEVE_EDWARDS: That's awesome. I got to list yours out.
RICK_VISCOMI: So my pick would be: I've been watching a lot of YouTube with all of the COVID lockdowns, and I've been getting into Vsauce a lot. I think it's really fascinating. He does a lot of space-related things, and I am fascinated with space, but also things about humanity and interpersonal relationships, and basically crazy questions you might have that nobody has answered before but have always been in the back of your mind. As an extension of that, I'm a very visual learner, but I noticed that on the Vsauce channel he quotes books a lot. For some reason, I can't get into books. There's something about just sitting in one place and reading; maybe my attention span is too short. But I went on Goodreads and got a bunch of books recommended by Vsauce, and one of them is called Everything Bad Is Good for You. It's a really interesting way of looking at media. In particular, as far as I've gotten in the book so far, the author, Steven Johnson, has talked about video games and how there has been a reaction in society against video games: that they're bad for kids, that they're driving them to violence and crazy things like that, and that even nonviolent video games are distracting or teaching them the wrong things. One of the interesting arguments he made was: imagine if video games had come before books. What would this same society be saying about books? I won't quote it, but I'll paraphrase: these books are driving our children into one-dimensional narratives; they're restricted by whatever the author is trying to tell them. And then later on in the book he says, it's not lost on me that I'm teaching you all of this through a book. It's a really interesting way of thinking about things differently and from different perspectives, and that and the Vsauce channel both do that really effectively for me.
DAN_SHAPPIR: Cool. Before we finish, Rick, if anybody wants to get in touch with you, what would be the best way to go about it?
RICK_VISCOMI: The best way to reach me would be on Twitter. My handle is Rick underscore Viscomi. My last name is spelled V-I-S-C-O-M-I. So Rick Viscomi on Twitter is probably the best way. My DMs are open. Feel free to reach out to me there too.
DAN_SHAPPIR: I have to say that you're not the most frequent poster. I know some guys at Google that post about ten times for every one of yours; Alex Russell comes to mind. But when you do post, it's definitely worth it, so I highly recommend following you on Twitter. And with that, I think we conclude another episode of JavaScript Jabber. Thank you very much, Rick, for joining us. And bye-bye, everybody.
STEVE_EDWARDS: Adios.
RICK_VISCOMI: Bye.
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y.com to learn more.