
Nuxt's Most Recent Developments with Daniel Roe - VUE 222

  • Guests : Daniel Roe
  • Date : Aug 15, 2023
  • Time : 1 Hour, 24 Minutes
Daniel Roe leads the Nuxt core team. He joins the show alongside Cody and Steve to talk about everything "Nuxt". He begins by talking about the recent updates and new features in Nuxt 3. He also explains how it can improve developer experience, its advantages, and more.

Steve (00:01.482)
Hello everybody, welcome back to Views on View. I am Steve Edwards, the host with the face for radio and the voice for being a mime. And if you can see us on YouTube, you'll understand that. But I'm still your host. And with me, I have my shiny new co-host, Mr. Cody Bonsky. How you doing, Cody?
Cody Bontecou (00:19.169)
Hey everybody, thanks for tuning in today. Very excited about our episode.
Steve (00:24.736)
How's the weather? I hear it's pretty rough over there in Hawaii. You hanging in there?
Cody Bontecou (00:30.883)
Besides the lake that we call our ocean, it's not bad.
Steve (00:38.306)
Not bad, good. And as our very special returning guest, a frequent guest and one of the faves, Mr. Daniel Roe, coming at us from the UK, correct?
Daniel Roe (00:49.798)
Yes, that's absolutely right and it's a pleasure to be here. Thank you Steve.
Steve (00:54.194)
Very good to have you back. Daniel knows a few things about Nuxt. And the phrase I like to use to describe him, one of my favorite ones I learned from a movie a long time ago is the tallest hog in the trough. At Nuxt, is that right? You're the big boy on top.
Daniel Roe (01:12.89)
I have no idea what that means. Like I am, I am... The toughest hug.
Steve (01:18.15)
Hog in the trough. You're the big dog. You're the top man, the expert, the guru.
Daniel Roe (01:26.63)
Okay, just the other day I used the expression "a dog in that fight" in conversation, and I think the other person had a similar reaction: what does that even mean? What did you just say? So, cool. I cannot claim to be the tallest. Sorry, say it again.
Steve (01:46.686)
So tell us, what is your role at Nuxt? Okay.
Steve (01:52.306)
Why don't you describe what your role at Nuxt is? I know it's something that's recently changed, or at least within the past year.
Daniel Roe (01:58.75)
Yeah, so I lead the Nuxt team, the open source team building Nuxt. So we're a small team, but of great, great contributors. And I started leading the team in January of this year; I've been on the core team for a few years now. So people like Sébastien Chopin, who built Nuxt in the first place with his brother, and Pooya Parsa, the architect of Nitro and the modules ecosystem and so much that we know and love about Nuxt. They're still around. We also have some new team members, so we have Harlan, we have Lucy, Alex, Anthony. You know, it's a good team. It's a good team. And quite a few more as well.
Steve (02:55.526)
Excellent. So there's a couple of things that we were going to catch up on today with Daniel. One is, as of today's recording, Nuxt 3.7 should be out within the next couple of days, and so we wanted to cover the various features that will be added as part of that release.
And then also, as a follow-up, Cody and I a couple of weeks ago had discussed an article written by Daniel on Nuxt server components, and we had some outstanding questions, and so we figured that Daniel would be the perfect person to answer those for us. So to get started, Daniel, I want you to hit the highlights on what you see as the really big stuff coming, or even maybe some smaller stuff, from this version 3.7 release.
Daniel Roe (03:47.478)
So, well, I mean, there's some exciting stuff to come. And we've been releasing things pretty regularly, so in general a minor feature release about every month or so. So the fact that we're now at 3.7 by, what's the month, August — we're trying to pack them in. And this one brings a new CLI. So we have basically rewritten the Nuxt CLI in something called citty, which is an open source framework for building CLIs built by Pooya, no less. And so that's going to actually mean we can move and iterate a lot faster with nuxi. So initially you shouldn't see huge changes, but it should grow and build quite quickly. And so we're accepting contributions there in a different repo.
We also now support a lot of very, very cool things on the edge. So we support native Request and Response objects. You can directly return streams from your Nuxt server endpoints. So if you're building stuff with ChatGPT or you otherwise want to deal with streams, your life will be much easier. So that's fun. We've also done some stuff about optimizing the HTML that we render. So it's possible to granularly decide what resources are prefetched or preloaded on the server, and you can also opt into an experiment. So there's a really interesting little JavaScript tool from Rick Viscomi at Google, which basically grades your head data — your links, meta tags, script tags, style tags. Some tags are more important to be earlier on in your HTML stream and some can be later on, and Rick wrote a little tool that assigns priorities to different ones, and you can actually opt in, in this new Nuxt version, to those priorities. So everything about your site will be reordered to be optimal in terms of browser rendering speed and performance. So things that need to be upfront are upfront, and things that are further back are further back. So you can give that a try. I don't know if that's interesting to you, but any little way of improving performance is something that...
Daniel Roe (06:11.286)
that I am wild about. So that's pretty, that's pretty fun.
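The streaming support Daniel mentions can be sketched out. The helper below wraps an async generator of text chunks in a `ReadableStream`, which is the kind of value a Nuxt 3.7 server route can return directly; the `defineEventHandler` usage is shown only as a comment, since it's a Nuxt auto-import and this sketch is self-contained (Node 18+, `fakeTokens` is an invented stand-in for, say, tokens arriving from an LLM API):

```typescript
// Sketch: turn an async generator of text chunks into a ReadableStream.
function toTextStream(chunks: AsyncGenerator<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await chunks.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}

// Stand-in for chunks arriving from a slow upstream API.
async function* fakeTokens() {
  yield "Hello, ";
  yield "stream!";
}

// In a real app this would live in e.g. server/api/chat.ts:
// export default defineEventHandler(() => toTextStream(fakeTokens()));

// Drain a stream back into a string, for demonstration.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) return out;
    out += decoder.decode(value, { stream: true });
  }
}
```

The point of returning the stream itself, rather than a buffered string, is that the first chunk reaches the client as soon as it's produced.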
Steve (06:15.47)
So how does that work? I was digging into that a little bit. So it's your classic web development conundrum, I guess: how to make things faster. You know, only get the light, small stuff out there in front, and then for the heavier stuff, give yourself a little time to load it, reserve space for it on the page, that kind of thing. So how does this feature do that for you? Is that something where it just does everything for you behind the scenes and you don't have to configure it, or is there some manual configuration involved?
Daniel Roe (06:49.602)
So right now you can reorder your head tags as you wish; you can assign tag priorities to each of them. But basically, Rick wrote this little library based on the work of Harry Roberts, who's also another Brit — great guy — and has done a lot of work on web performance, deep dives into understanding how browsers work and parse these tags. And so basically, there are some things that are very important to be at the beginning of the page, like the character set of the HTML that's going to follow, or the title of the page. It's really important, if you're going to preconnect to any origins, that that's done as soon as possible after loading the page so the browser can start those connections. And often preconnects are then followed up later in the page by the actual downloading, like within your CSS or within a style tag, and so you want the preconnect to be as early in the HTML as possible so the browser can initiate that connection. And then by the time you get five lines, ten lines down to the downloading of the asset, it's all ready to go already. I mean, we are talking, this is a matter of milliseconds. This is hyper optimization. But the point is we have an opportunity as a framework to do hyper optimization on behalf of people. So why wouldn't we? And, I mean, there's more. Then you want to load your scripts: any async scripts you want to start, then any synchronous JavaScript or CSS comes next, anything you want to preload, then after that deferred scripts, then any prefetch or prerender resource hints where you're telling the browser. So prefetch is quite light as a hint.
Daniel Roe (08:43.274)
In practice, browsers do then immediately go and get it, but it's more sort of: you can put this in the background, so it's not that important to be upfront. And then a lot of things that actually we think are important for SEO, like Open Graph data and stuff like that — that's not actually important at all for the browser. That's purely for search engines, and so it can come after all of the stuff that I've just mentioned. And so now, if you were doing that yourself and manually saying, oh well, I'm just inserting this tag here, I think I should assign it a priority of 6 — it becomes very quickly time consuming and fragile as you try and balance that out across a huge code base. Whereas this is literally just: you set a flag to true and it does it for you. Now, in order to make that happen, Harlan, whom I mentioned before, one of the core team, actually entirely refactored how we render tags in Nuxt — how we render these prefetch tags. Previously we had... now, if I'm going too deep into this thing, tell me, because I could be talking about this for another hour. But basically, previously we had a package called vue-bundle-renderer, which is not just used by Nuxt, but mostly.
Daniel Roe (10:02.122)
And vue-bundle-renderer is in charge of rendering a server-side Vue application into HTML — which is just, like, a div. But then it also gets some data from the Vue renderer. So when the Vue renderer renders a component, it says, hey, I rendered this component, and it sticks that bit of information on a shared context. And so after this is all rendered, then we also want to render links to the scripts that are required to hydrate these components back on the client. And also these resource hints, these link tags that say, oh, preload this other script because it will be needed pretty soon — because it's maybe another page that's linked to here, or it's a dynamic component that wasn't actually used but might be. Or things like: download this CSS file because it's needed for these components to render. So there's all of this, and all of that is put in a chunk of HTML. Then you might also have things you want to render in your HTML as a user. You might want to put in a script, you might want to put in, like, a title and metadata. And actually those were totally separate until now. But Harlan refactored it all, so now it's all using the same infrastructure under the hood. So it's actually now possible as a user to interact with the resources that have been rendered by the bundle renderer, and the other way around, which is how we can now say we're going to sort everything. Because previously we couldn't; we had this chunk of HTML that was being rendered. Now we can actually sort everything. And we also found that in doing this, we actually sped up the rendering, which... I wasn't expecting that. I was thinking it would be faster to just render this chunk of HTML, but actually it's faster to do it this way. So it's faster and more performant in terms of better ordering. So it's always nice when you get a sort of win-win, and this definitely is.
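The ordering Daniel walks through — charset first, then title, preconnect, async scripts, synchronous JS/CSS, preloads, deferred scripts, prefetch hints, and SEO meta last — can be illustrated with a toy sorter. This is a sketch of the idea, not capo.js's or Nuxt's real code, and the weight values are invented for illustration:

```typescript
// Toy version of capo.js-style head ordering: assign each head tag a
// weight and emit tags in ascending weight order. Weights are illustrative.
type HeadTag = { tag: string; attrs?: Record<string, string>; weight?: number };

function weightOf(t: HeadTag): number {
  if (t.weight !== undefined) return t.weight;            // explicit user priority
  if (t.tag === "meta" && t.attrs?.charset) return 0;     // charset must come first
  if (t.tag === "title") return 1;
  if (t.tag === "link" && t.attrs?.rel === "preconnect") return 2;
  if (t.tag === "script" && "async" in (t.attrs ?? {})) return 3;
  if (t.tag === "style" || (t.tag === "script" && !t.attrs?.defer)) return 4;
  if (t.tag === "link" && t.attrs?.rel === "preload") return 5;
  if (t.tag === "script") return 6;                       // deferred scripts
  if (t.tag === "link" && t.attrs?.rel === "prefetch") return 7;
  return 8;                                               // SEO/meta tags last
}

function sortHead(tags: HeadTag[]): HeadTag[] {
  // Array.prototype.sort is stable, so original order is kept within a band.
  return [...tags].sort((a, b) => weightOf(a) - weightOf(b));
}
```

The real value of doing this in the framework, as Daniel says, is that nobody has to hand-assign priorities across a large code base.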
Cody Bontecou (12:08.497)
So with this new capo plugin, is it as easy as just turning it on and forgetting about it? Nuxt handles the rest of the optimization?
Daniel Roe (12:18.846)
It is, yeah, you just turn it on. It's an experimental feature right now. If we think it's great, we might turn it on by default in future. That's typically what we do with these experimental flags, so that people can play around with them, test them on real-world sites and report back to us about performance. And then, you know, we enable it. So this is just setting the experimental head capo plugin flag to true. And it will all be in the release notes.
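For reference, opting in looks something like the following in `nuxt.config.ts`. The flag name below is taken from the Nuxt 3.7 release notes as best I can recall; experimental flags change quickly, so treat it as illustrative and check the current release notes:

```typescript
// nuxt.config.ts — illustrative; verify the flag name against the 3.7 release notes
export default defineNuxtConfig({
  experimental: {
    // opt in to capo.js-style ordering of <head> tags
    headNext: true,
  },
})
```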
Cody Bontecou (12:45.625)
Yeah, you just answered my question. I have one more question. On the capo.js GitHub, he says that this can affect the perceived performance of the page. Is this just, like, client-side perceived performance — it just seems faster — or are there actual performance benefits occurring?
Daniel Roe (13:06.326)
So yeah, some of it is actual performance benefit and some of it is perceived, you're right. So something like putting a preconnect earlier on in the page — that will make an actual difference. Like, one of the things that you can do...
I think it's the performance tab in Chrome DevTools, if you're using a Chromium-based browser. You should be able to run a...
Daniel Roe (13:37.426)
a page load recording and actually have a look at what happens when your page loads. That is totally worth doing. And then one of the things that you'll see is what is blocking the render of your page at any given time, because some things will synchronously block the render of your page. And so, let's see, I think there's a different... there's a tab, where is it? I think it might be that performance tab. But basically, you can see which individual resources have blocked it, what the waterfall looks like, and so on. And so capo legitimately does solve some of those issues, or makes some of them better — like moving that preconnect earlier, or where the preload is. Some of the resources that you load are synchronously blocking, so the browser will not continue to do the next thing until it finishes them. And so you want those to be in the right place, because they will block. But, I mean, we are talking about milliseconds. This is going to save you, you know, 50 milliseconds or something. But that's worth saving, right?
Cody Bontecou (14:52.129)
Oh, for sure. Especially, like you said, if it's just built into Nuxt and we get it for free. I'm a big fan.
Steve (14:58.45)
You know, when it comes to milliseconds, I never cease to be amazed how much people obsess over a few milliseconds, whether you're in such disparate environments as web performance or athletics. The one that always cracks me up — and this is a slight tangent, but I'll get this off my chest — is in professional football, when they're evaluating players, one of the things they measure is the 40-yard dash, right? And so you'll see people that have times of, say, 4.55, and that's considered slow, but 4.25 is considered smoking, blazing fast. And you think, okay, what's the time lapse of three tenths of a second? It's really not that much, but it's a huge difference between really slow and blazing fast. And if you're talking about web performance, you say, oh, this took 100 milliseconds and this took 200 — oh my gosh, that's so much faster. Well, really not. Maybe if you start adding up all the performance enhancements together, then you see a noticeable difference. But anyway, that's just my off-tangent rant.
Daniel Roe (16:02.786)
But I'm not sure I agree. I mean, I think, yes, for sprinting, I agree, because I'm not a sprinter. But these kinds of speed differences, for the initial load of a webpage, can, I think, make a huge difference — if you're not talking about an internal dashboard or something like that. People don't necessarily have a relationship to your site; they don't know anything about it. So some of these changes make a difference to the total amount of time, but some of them move the time around. So what you might get is a visible page sooner, and then it will maybe do a little bit more loading after it's visible. So even if the total page load stays the same, you have something — you see it's there. So that gives you reassurance as a user: there is something coming, I can even see it. I mean, ideally we want our sites to be progressively enhanced, so the site should be usable as soon as the HTML comes, even before it's hydrated. And so you should be able to read or interact or do some things with the website before the rest happens. And that means people don't close the tab, which, I mean...
Daniel Roe (17:26.966)
I do think that, at the level of 100 milliseconds, it is make or break. But maybe that's just because we are so conditioned to speed. I'd be willing to reflect on society a little bit.
Steve (17:41.374)
Well, no, I mean, we don't need to go down that route. But to your point, the point I've heard made multiple times is that a lot of times, if you go to, like, a quote-unquote third-world place, or somewhere where they're dealing with a 10-year-old phone on a very slow network — you know, that isn't 5G on a MacBook M1 — then yeah, you are going to have speed issues. My point is, let's say you're in that environment where it's super slow on an old phone: is 100 milliseconds really going to make that much of a difference, between, you know, five seconds and five point one seconds? I don't know. Maybe it is, maybe it isn't. I don't know, because I haven't really been in that environment.
Cody Bontecou (18:31.353)
Right, that is kind of the argument for these things, right: what could be milliseconds for us could be seconds for other people, or even a completely unusable website. That's largely the case, I think, and a big push for these lower JavaScript bundles, at least from what I've been reading. One person I recommend is Theo — T3 Theo.
Cody Bontecou (18:59.321)
He just did a video on how we shouldn't be going for zero JavaScript bundles. But if you read the comments, it's very much people saying — you know, one guy was saying how he was living in Venezuela, and most of the web just wasn't usable because of how large these bundles have become and how unoptimized the web has become. And so it is refreshing to hear the perspective from, frankly, the majority of the world.
We're just in our little bubble with our super powerful computers.
Steve (19:33.506)
Right, and that's whole episodes' worth. We could go down the arc of web development, JavaScript bundles, and the frameworks that set out to address them, for sure. So we'll...
Daniel Roe (19:44.572)
I do think I agree with Theo in that zero isn't the aim. But I think that less is a very good aim. I think every developer is different. So you have some developers who will basically grow their bundle by default: I need a feature, I'm installing a package, I've done it. The bundle eventually becomes 10 meg. And that's okay — I have the features I need, I added them. So there's that, and that is a very, very real thing. And I think when people want to say we need less JavaScript, what they're trying to say is: if your instinct is just to add more, let's stop, let's rethink this — there's a real cost to that. But I think there's also the other kind of developer, who has a website which is 10 kilobytes of JavaScript and they want to get it to zero. And the difference between getting it from 10 kilobytes to zero — or even, in Nuxt's case, 70 kilobytes, like getting 70 kilobytes down to zero — it probably makes no difference at all to your end user. That is not a significant change. And so from the Nuxt point of view, we think shipping a runtime of 70 kilobytes is a reasonable thing for what it enables. And I wouldn't say to people, try and make it zero. I would say, yeah, optimize it as much as you can, but at some point there is a cost to optimizing — a real cost in terms of time. And if you're...
Daniel Roe (21:19.168)
I will spend as much time as I want to make that thing as fast as I can make it. But that's a luxury that not every company has — because it's already doing the right thing. So if you are hyper optimizing, then maybe you need to be told: chill out, it's not as important, a couple of kilobytes. But if you're at the other end, which is probably where a lot of us are when we're working for a company, and the default is just to add megabytes and not really care — because we're on our MacBook Airs and everything is fast for us — then that is actually a very, very true and telling insight: that we should have less, we should be making things more performant. So I don't know if any of that made sense, but I think basically zero isn't the aim.
Daniel Roe (22:10.838)
But... Less is what we should go for.
Steve (22:15.92)
All right, so with that, we'll get back off this tangent and head back to our list of 3.7 items before we take up the whole episode with this.
Daniel Roe (22:25.63)
So there are some nice developer experience things that we've also got. But one of the things that people will be looking for is we have the latest Vite update, which includes experimental support for Lightning CSS, which is a very popular... it's Parcel's CSS integration. It's very nice, it's incredibly fast, and a lot of people will be wanting to try that out.
So that will be fun. We've got some nice type-related developer experience things — a lot of end-to-end TypeScript goodies coming in Nuxt 3.7. Some of them come from upstream, from Nitro and h3. So it's now possible to have typed inputs to server routes. We've always had typed outputs, but now you'll be able to have typed inputs. And I've spent a lot of today with my head wrapped up in knots, trying to make sure this all works totally seamlessly, backwards-compatibly, and smoothly when we update to the latest version of Nitro. But yes, you'll be able to call a route, like $fetch('/api/user'), and of course you'll get the response back, but you'll also be told you have to provide a body of this, and the method has to be that. And if you don't, it will be a type error. So that, I think, will save a lot.
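The idea behind end-to-end typed routes can be sketched with invented shapes — these are not Nuxt's real generated types (Nuxt infers the route map from your `server/` directory; the `/api/user` route and `typedFetch` helper here are hypothetical):

```typescript
// Hypothetical route map: ties a path to the method and body it accepts
// and the response it returns. Nuxt generates something like this for you.
interface Routes {
  "/api/user": {
    method: "POST";
    body: { name: string };
    response: { id: number; name: string };
  };
}

// Toy $fetch stand-in: the signature is the point. Passing the wrong
// method or body shape for a path is a compile-time error.
async function typedFetch<P extends keyof Routes>(
  path: P,
  opts: { method: Routes[P]["method"]; body: Routes[P]["body"] },
): Promise<Routes[P]["response"]> {
  // Fake the network call so the sketch is self-contained.
  void path;
  return { id: 1, ...opts.body } as Routes[P]["response"];
}
```

With the real feature, the response type comes from your event handler's return type, so both directions of the call are checked.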
Steve (23:50.878)
For what it's worth, I think your head looks perfectly normal and not tied up in knots, but that's just my perception.
Daniel Roe (23:57.076)
Well, there you go — the power of tea. And we've also shipped a few server components updates as well. I'm not sure how much I'm going to call those out, but there's some cool stuff coming with server components in Nuxt 3.7 as well.
Steve (24:13.802)
So before we go on, I wanted to go back to citty, that tool that you mentioned. So is citty written in Python? Did I say that correctly? Or what is the underlying language for it?
Daniel Roe (24:26.258)
It's written in JavaScript, or TypeScript. But, I mean, I do feel like the animal theme would make sense for it to be Python. Now, incidentally — I don't know if this is true, but I was told that Python, the language, is named Python because of Monty Python.
Steve (24:28.311)
Oh, okay.
Steve (24:48.154)
It could very well be.
Daniel Roe (24:49.942)
Have you heard that as well? That blew my mind, because obviously I love Monty Python, but I had no idea — I only ever associated Python with a serpent. I didn't realize it might be related. But yes, citty is a project in UnJS. So UnJS is an organization — a GitHub organization, and a team and group of people — and it's all about universal
Daniel Roe (25:18.422)
JavaScript. So we build so much as part of building Nuxt that is reusable and extensible. We need to build it for Nuxt, but we don't want it to be just for Nuxt. And I would, by the way, also much rather use an existing solution if someone has it. But often these are things that don't exist, or we think there's some constraint or requirement in building a web framework that means we need something new. And so citty, for example, is part of UnJS. Anybody can use it; you can use it if you're not using Nuxt. And it's a really nice way of building a CLI. So you can define commands, they're strongly typed, and it automatically provides things like descriptions and help if you have issues with running a command. And it's very minimal and very, very nice. You're not having to worry about parsing arguments from the process or anything like that.
Steve (26:15.166)
So for those of you who are Googling trying to find citty, it's C-I-T-T-Y, not K-I-T-T-Y. I was wondering why I couldn't find anything until Daniel gave me the link. So, other than the developer experience or the type capabilities that citty adds, are there any other new features or tools that using it for the CLI brings?
Daniel Roe (26:39.546)
So it does things like parsing values for you. For example — I mean, everything's a string; if you type it on the CLI, it's all a string. But sometimes you want to take boolean arguments, you want to have numbers, and it handles a lot of that. It's also based around... you can have dynamic imports, for example, so that you only load the code of a command when it's needed, so it can be really, really fast to start up. I think I mentioned that the usage and help are auto-generated, which is quite useful. But in general, it's just a very, very lightweight wrapper that makes it really easy to build a CLI. And we have other tools, like Consola, which now has a whole suite of prompts that you can use to actually get information out of your end user. And that is using Clack under the hood for the interactive prompts, but it also does a lot of other things too. Clack, if you haven't come across it, is a great project by Nate Moore — just really beautiful. I'll drop the links to those as well if that's helpful. But basically, citty isn't the only tool that you would use if you're building a CLI, but it is one of them. And it's very lightweight, and just very satisfying to use as a developer.
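The value parsing Daniel describes — argv values all arrive as strings, and the tool coerces them based on the declared argument type — can be illustrated with a toy coercion function. This is a sketch of the general idea, not citty's actual implementation:

```typescript
// Toy illustration: coerce a raw CLI string into the declared argument type.
type ArgType = "string" | "boolean" | "number";

function coerce(raw: string | undefined, type: ArgType): string | boolean | number {
  switch (type) {
    case "boolean":
      // A bare flag like --verbose arrives with no value; an explicit
      // --verbose=false should stay false.
      return raw === undefined || raw === "" || raw === "true";
    case "number": {
      const n = Number(raw);
      if (Number.isNaN(n)) throw new Error(`expected a number, got "${raw}"`);
      return n;
    }
    default:
      return raw ?? "";
  }
}
```

In citty you'd instead declare `args: { port: { type: "string" }, ... }` on a command and let the library do this for you; the point here is just what "parsing values" means.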
Cody Bontecou (28:11.077)
I'm curious — why build another CLI? Is it to be extremely light, to integrate into Nuxt? Or is there...?
Daniel Roe (28:21.078)
Well, just to be clear, the main thing that we did was rewrite it in citty. So the body of the code is largely the same. So we...
Daniel Roe (28:34.802)
It's not like a Nuxt 2 to Nuxt 3 thing; this is more of a refactor in terms of pulling the code out, making it simpler and easier to understand, and using the new citty CLI builder to do it. You can already go ahead and try this on the edge release — it's nuxi-ng — but we've been building new features here as well, so you can add new modules to your projects and even search the modules and integrations on the CLI, and there's more to come. Now, the reason we've pulled it out from the main Nuxt monorepo is we want to release more frequently. So this is a tooling and infrastructure kind of question. Basically, if we can release frequently with the CLI, it enables us to do some nice things. We would like to do things like caching. So the CLI, nuxi, has always been zero-dependency — it inlines all its dependencies — which means it's an ideal candidate for something that might be a global tool, or might be used in more than one place in your system. So we're exploring: what can we do? Can we make this global? What can we do in terms of caching data? We always want to pick the version in your project, but maybe we also make available a more global CLI for your Nuxt projects. Maybe we do things with that CLI that are not just run the dev server, run the build, but things that are much more — like code mods, things that access data outside your system, like adding modules or adding integrations or installing templates, things like that. So if you check out the new nuxt/cli repository, there are some really good discussions that Pooya in particular has created, looking at some of the vision and the possibilities that the new CLI enables. And those are definitely worth checking out.
Steve (30:40.446)
So that sort of segues into some questions I had. CLI stands for command line interface, so it's basically a tool that you're going to run. Generally it's not some sort of GUI, but that's what a CLI is for. Now, Daniel, my usual interaction with a CLI, at least with Nuxt or Vue, has been installing and getting a project up and running — you know, nuxi; it's been so long since I've done it. And Vue used to have its different CLI versions in the past. So from a general standpoint, in addition to installing and getting a project up and running, what are some of the other capabilities that you have with a CLI once you have a project up and running?
Daniel Roe (31:40.361)
So at the moment the CLI has a range of commands, even now, that will do more than just get the project running. So you can, for example, add boilerplate. We have, I think, components, composables, layouts, plugins, pages, middleware, and API routes. And all of those can be added just by typing nuxi add component or something like that. And let's see, what else do we do? We have a cleanup command. You can analyze your build. You can enable or disable the dev tools. Basically, I see the CLI
as an alternative tool, doing the same kind of work that the DevTools does. So the DevTools in your browser enables you to do things like pick modules and install them. It tries to surface information that you might need to know; it tries to make your job as a developer easier. And the browser is one avenue — it's a GUI. But we don't all prefer to interact with GUIs. And the DevTools can do incredible things. When you install a module, it installs it with your package manager, and then it also modifies your Nuxt config to use that new module. And the sky's the limit there in terms of the kinds of things we could enable. There's a great project called magicast, which basically allows reading and writing JavaScript files just like an object: you read it, you set a property on the object, you write it back, and it has changed the file and added that property in the file. That's pretty cool. And the genius behind that is Anthony Fu, who's been working on it. So I'll drop another link there. And that's used by the DevTools, and it can also be used by the CLI too. So again, I'm thinking of this as an alternative
Daniel Roe (33:47.562)
venue, a way of interacting with your project and your Nuxt projects. I can imagine a lot of nice features. Maybe you have multiple projects on your disk — you could search between them. You could have, you know, presets of things that you do in a project, that you can store and sort of replay when you need to. I think there could be a lot of cool developer experience things that operate at the command line level. But, I mean, it's also an invitation for people to imagine some of this. And one of the things that's been most asked for is actually the initiation of a new project — a wizard or something that takes you step by step and does more than install just the minimal Nuxt 3 boilerplate that we currently give people. So that's definitely on the list.
Cody Bontecou (34:36.133)
I remember starting off as a Nuxt dev — one of my biggest issues was CLI-related. I might start a Nuxt app and have a Prettier config come with it, or not come with it, and then trying to get that reinstalled and set up again was always a big pain, to the point where, as a beginner, it was typically easier to just restart with the appropriate configurations. But that's fascinating. Like you said, it's like the Nuxt dev tools — that's kind of what I was imagining when you were first telling me about the features. Because, yeah, programmatically you can navigate these modules or the builds. I imagine you can kind of lint some of the typing of your Nuxt links. There are just some neat features in there.
Daniel Roe (35:34.106)
Yeah, I'm glad you like it. I mean, it's really, this is not the announcement here in 3.7 is not a, hey, look at this amazing new dev tools. It's more of a, we've refactored the dev tools. Now it's time to start building some of the cool features on top of it. But I do think it will be really nice to see what comes.
Cody Bontecou (35:57.037)
Yeah, definitely. And that's, I mean, Twitter is a great place for that. That's where I see most of my Nuxt DevTools interactions. All of a sudden this crazy gif comes out and it's like, whoa, I did not know you could do that. So that's, it's fun to watch it being developed.
Daniel Roe (36:12.088)
Daniel Roe (36:16.266)
Thank you. I mean, I really can't take credit for it, you know. It's Anthony Fu, and it's also a lot of other people in the Nuxt DevTools community. People have even started making versions of DevTools for other platforms. There's Vue DevTools now, which
looks very similar to Nuxt DevTools. I mean, it's a collaboration, it's not stealing; it's open source, working together. It's a good thing, which is just amazing to see. But there are so many people who have helped make that possible. And I do think sometimes you have these sort of sea-change events where suddenly...
Cody Bontecou (36:43.098)
Daniel Roe (37:00.714)
everyone is thinking, whoa, we could do this and this and this. And it feels like the DevTools, when Anthony announced it earlier this year at Vue.js Amsterdam and unveiled the first version of Nuxt DevTools, it felt like that. Suddenly people were thinking, wow, this really changes things, and started contributing. I mean, being able to have, in the DevTools in your browser, full access to your API routes; being able to see, every time a composable in your app is called, how long it took and what the arguments to it were.
Steve (37:25.236)
Daniel Roe (37:30.664)
Like it's really re-imagining the debugging experience, the developer experience as a user. These are things that we've never been able to do. These are not making it slightly better; these are completely changing the experience.
So, you know, whenever I've shown the DevTools to somebody, I'm genuinely amazed, and I'm not even the one maintaining them, you know. I'll be like, hey, you can do this, and why don't you try clicking that? Yes, just click anywhere on the page. It will take you to the component in your project where that line of code was rendered from. That's it. That's incredible. Imagine being onboarded to a new project and you literally just have to say, oh, that thing there doesn't look right, I'm going to click it.
Cody Bontecou (38:08.563)
Daniel Roe (38:15.846)
Okay, now I know exactly where in the codebase that thing was. The amount of time I have spent previously trying to find things in a codebase; you have to become very good at regexes and deep greps all the way through just to find things, and now it's just a click, literally, on a page. That's just... anyway, before I go on...
Cody Bontecou (38:36.205)
Yeah, I was just going to say we're very spoiled in the Vue and Nuxt world with DevTools. I've been working on a Next.js app for my day job, and the dev tools that I have to interact with seem archaic, frankly, compared to what we have.
Daniel Roe (38:55.474)
I'm sure. I mean, the thing is, Next.js has focused on developer experience for a while. I'm sure there'll be some new announcement. And I would love to think that maybe we've inspired them. That's the thing about the whole ecosystem we're in: we're all pushing at boundaries, and hopefully we're all inspiring and helping each other.
Cody Bontecou (39:20.233)
Yeah, I mean, no, that wasn't an attack on Next.js. It was more an invitation. I would love to be using a similar experience in my day-to-day.
Daniel Roe (39:24.735)
Oh no.
Daniel Roe (39:31.326)
Yeah, no, I didn't take that as an attack at all. But yeah, I expect something nice will be coming soon from their team.
Cody Bontecou (39:40.441)
Steve (39:41.842)
That was brutal, Cody, that was just brutal.
Daniel Roe (39:44.054)
That's right. Let's make sure to tweet it out, Steve. You know, Cody lays into the whole Next.js team, you know.
Cody Bontecou (39:49.91)
Yeah. Just ripped them apart.
Steve (39:50.074)
Into Next.js.
Steve (39:54.698)
Right. Okay, so we talked at the beginning about server components, and you mentioned some of the fixes. We were also talking before we started recording about server components, and maybe the difference from React server components. So before getting into some of the enhancements and the blog post and so on, can you give us a
big-picture description of what they are in Nuxt and what their purpose is?
Daniel Roe (40:29.95)
So, Nuxt server components are... So, normally a Nuxt app is rendered initially on the server, so it renders HTML, and then from that point forward it is a fully client-side app.
So that's why you get these really smooth, quick transitions between pages. Everything is interactive because you're not hitting the server anymore. You only hit the server when you maybe need something from a server route, an API route. And that's the normal pattern for a Nuxt app. You can even statically generate a page, but from that point on it's fully client-side.
Server components are a way of basically pinning individual component renders in an app to something that is server related. So imagine you load that first page and now you navigate to a different page and on this different page one of those components is a server component. Well in this case instead of having some code in the client bundle that renders that component,
the app is going to say, hey, I'm going to just get it from the server instead. So the rest of the page will be rendered by the client as normal, but this one bit will always be rendered on the server, no matter whether it's the initial request or subsequent requests. So that's what it is. There are basically no extra requirements. Every time you change the props of that component, it's going to go back to the server to get the data, because it's rendered on the server.
And what will come back is some HTML and some other bits of JSON. So maybe some information on the state of the app. So the state can go to the server, be transformed, and come back. There might be some information about CSS styles that might be associated with a component. But basically, that's what comes over the wire.
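Based on Daniel's description, the response for a server component render looks roughly like this. This is a sketch; the field names are illustrative, not Nuxt's exact wire format:

```typescript
// Hedged sketch of the JSON shape described for a Nuxt server component:
// rendered HTML, head metadata (styles etc.), and serialized app state.
interface IslandResponse {
  html: string                       // the server-rendered markup
  head: {
    style?: { innerHTML: string }[]  // component styles to inject into <head>
  }
  state?: Record<string, unknown>    // app state serialized for the client
}

const example: IslandResponse = {
  html: '<div class="post">Rendered on the server</div>',
  head: { style: [{ innerHTML: '.post { color: #222 }' }] },
  state: { readingTime: 4 },
}
```

The client merges the HTML into the DOM and the state into the app, rather than receiving a node tree the way React server components do.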
Daniel Roe (42:28.426)
and that's not what comes over the wire with a React component. A React component doesn't serialize it down to HTML, it serializes it down to an AST, so an array of nodes and things like that. So there's some differences there, and React server components are also deeply integrated with the whole streaming story in React, and also how suspense works, which is not really the same motivation or...
background for how Nuxt server components are working. To do this in Nuxt 3, you enable a flag and then you just add a .server.vue suffix to a component. And apart from that, it should just behave normally. You can use it normally in your app. There are no extra bits to do.
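The flag Daniel mentions is Nuxt 3's experimental component islands option. A minimal sketch of the setup, assuming a standard Nuxt 3 project (the component name is just an example):

```typescript
// nuxt.config.ts (sketch): opt in to experimental server components / islands
export default defineNuxtConfig({
  experimental: {
    componentIslands: true,
  },
})

// Then a component named e.g. components/HighlightedPost.server.vue is always
// rendered on the server, and is used in templates like any other component.
```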
It's simple, but I wouldn't advocate that you do that for just any random component in your app, because typically it is a good thing for a component to be rendered on the client; that makes it fast and responsive. The reasons why you might want to do it are where you have a very heavy component. Maybe it needs to apply syntax highlighting to some markdown.
Or maybe it needs to interact directly with a database. And so actually every time you use it, it actually does need to be rendered from the server. So maybe it needs to consume a secret, but it's not gonna include that in its output. I don't know, you might have some reasons why you always want something to be rendered on the server. And when you reach for those kinds of things, then a server component is actually a really good choice.
So typically the best cases are where it takes a lot of JavaScript to render something. That's a very good choice, because when you make it a server component all of that code is gone from your client-side bundle. So you get a very minimal bundle that only really has to handle the fetch, obviously the rest of the Nuxt app, and then just rendering it when required.
Daniel Roe (44:45.522)
So maybe some bits of your app are easy candidates for that, like your footer. If you have a lot of HTML, well, you can make that a server component and not have to re-render it, not have to manage that in your JavaScript.
Steve (45:00.074)
Okay, so I've got a few questions here. So a server component is rendered on the server, in essence. In other words, is it true that you wouldn't have any code in there, some sort of hook that you could use to interact with that component file once it's been rendered? If it's .server, that's what you get. So how would you develop it,
or where would you put your code if, okay, I've got all this stuff that's on the server, but once it's client-side, I want to add some JavaScript or some interactivity to it? Can you do that within the same .server file? Is that possible, or how does that work?
Daniel Roe (45:43.796)
So I mean, within the .server file, you can do things like add scripts to the DOM in the same kind of way that you would in a normal Nuxt app, using useHead. So you can actually add a script if you wanted to do something dynamic. But that's not really the direction we're going in. We do plan to support arbitrary interactive components within server components.
And Julian on the team is actually looking at that already. And I've also been looking at it. So that's on our roadmap: to have interactivity within server components. We currently do support interactivity within server components via slots. So you can use a sort of Vue-native approach to render a slot, which might have some interactive elements, within a server component,
and that will just work. You can have named slots, you can have default slots, and even though the wrapper around it is static, you have these islands of interactivity. So that works right now, but I think we can make it even better with the idea of arbitrary components that can be interactive. They would basically have their state serialized, or their props serialized, and then be hydrated individually on the client.
But that's work in progress.
Steve (47:08.746)
Okay, so before server components, say back in Nuxt 2, or even Nuxt 3 pre-server components, how would you have handled a case where you have a component where you want some stuff coming from the server and then maybe want to add some more interactivity on the front end? I remember, I believe it was an asyncData hook or method that you could use, and there was a context argument
that would tell you whether it was server-side or client-side, and you could do things depending on your context. Am I remembering that correctly? Is that how you would have done it before server components?
Daniel Roe (47:49.942)
You're absolutely right, that's how asyncData in Nuxt 2 had that context, and you could access where you were. It's also true in Nuxt 3, so you have access to process.client and process.server. Actually, that's another change in Nuxt 3.7: we are now introducing import.meta.client and import.meta.server, which is basically in line with JavaScript and not just Node.
So process is a Node thing, whereas import.meta is a thing that's present in both Node and the browser. But that's not super important. It's additive, so it's not changing your code, but it means you can use import.meta.server instead if you want to going forward. But yes, you can access whether you're on the server or client in your code, and so you can actually do different things based on the context. And useAsyncData
is also a hook in Nuxt 3, just like asyncData in Nuxt 2, but it's even more powerful, because you can use it in components as well as in pages and perform data fetching wherever you want. That is then the basis of something called payloads in Nuxt 3. So by default, what would happen if you render a static site in Vite or any other non-Nuxt framework? You basically might have some HTML,
but then all the data fetching is going to happen afresh on your client. Whereas often, when you're rendering a static site, you don't want the data to be fetched for every user every time they go to a page. Actually, the data is fetched from the API once,
at generate time, and you want the whole thing to be based on that data, not just on the initial request, but even on subsequent navigation. And so we use something called a payload in Nuxt 3 to do this, where we extract the data from all the useAsyncData calls when you're generating the site, and then stick them in little payloads, which can then be fetched. So when you transition from one page to another page, it makes a request, give me the payload for this page, and then
Daniel Roe (50:02.27)
it doesn't need to re-run the useAsyncData call on the client side to get the data, because it's all in the payload already. So that is quite a useful feature anyway. But the thing with Nuxt 2, and Nuxt 3 without server components, is that even though you have access to whether something is on the server or client, you can't force it to be on the server.
So if you were transitioning from page one to page two, before server components, that asyncData call is only ever going to happen on the client, unless you have payload extraction enabled, in which case it was pre-rendered at generate time. So you can't make the asyncData call happen on the server. Even if you wrap something and say, if this is the server, then I connect to my database, the problem is that if it's running on the client, you have no database, so you'll have no data. The server component makes it just work: you know that file with the .server.vue suffix is always going to be rendered on the server, no matter where it's used, whether it's the initial request or subsequent navigation. But again,
I'm not suggesting that every component that needs something from the server needs to become a server component. There are other ways of getting that data. You can also just hit an API route. For example, you could have your /api/products route and grab the products from there. Then that route is what contacts the database. It doesn't have to be a server component for that.
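As a sketch of that alternative, with hypothetical file paths and data: a Nitro route under server/api/ contacts the database, and a plain client component fetches from it rather than becoming a server component itself.

```typescript
// server/api/products.get.ts (sketch): only this code runs on the server,
// so it can safely hold database credentials or other secrets.
export default defineEventHandler(async () => {
  // placeholder for a real database query
  return [
    { id: 1, name: 'Widget' },
    { id: 2, name: 'Gadget' },
  ]
})

// In any component's <script setup> (sketch): fetch from the route instead.
// const { data: products } = await useFetch('/api/products')
```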
Cody Bontecou (51:39.401)
And when would you use a server component over just a traditional fetch request?
Daniel Roe (51:46.89)
So I think for me, it's about complex logic. It's about heavy dependencies. So I would use it to reduce my client-side bundle. I would use it to move logic that doesn't need to be in the bundle. So for example, my personal website has blog articles. I write those blog articles in Markdown. From my point of view, I don't see why the client side, why every user who visits my blog.
should have to ship a markdown parser
that parses the markdown. Even if I don't parse the markdown there, even if I just send an AST, I still don't see why they should have to have an AST of markdown in order to render some HTML to read my blog, because it's static content. So for that, I think it's a great candidate for a server component, which just renders the HTML, and the client fetches it like an asset and then displays it. So I think that's a good example.
There are some cases, again, you're not reaching for these straight out of the box, but there might be some cases where you have to have it rendered on the server. Some examples might be things like Open Graph images. So Harlan, one of the team, has a great module, nuxt-og-image, which allows you to create
Open Graph images for your pages, and it uses server components and Nuxt Islands under the hood to do that, because obviously that has to happen on the server. So there are situations like that where it's a useful tool. But again, for most people, don't immediately reach for a server component. Ideally what will happen is that...
Daniel Roe (53:36.57)
libraries like Nuxt Content would use them under the hood as appropriate. Or modules like nuxt-og-image will use islands under the hood to make things just work. But yes, if something's heavy, lots of JavaScript, and it doesn't need to be there, I would move it out of your client-side JavaScript and into a server component.
Cody Bontecou (53:59.293)
And just kind of on that, something I haven't quite wrapped my head around is the difference between having my front end call, like a Nitro server route versus just having that logic take place in a server component.
Daniel Roe (54:15.838)
So yeah, that's a very good question. And I think in some case it's...
Daniel Roe (54:26.318)
There's probably very little difference in one respect in that the actual, well, I guess the difference is where your interactivity is happening, where your rendering is happening. So imagine you have a list of products and you want to render an HTML table of the products.
So one option is, on the client side, you have this component; it makes a fetch request to the server, it gets the data back, and now you can render the list of products. Now maybe you want to do things like change the sorting of the products, or you want to add one of them to the cart.
Immediately, this probably shouldn't be a server component. It makes sense for this to be on the client, because it needs interactivity. Yes, it's using data from the server, but the component itself makes sense to be rendered in the browser and interactive in the browser. You don't want to click to re-sort and have a network request go to the server, which renders a different HTML table, comes back, and swaps it in. That feels like the wrong way around.
Whereas the footer of your website, which only gets rendered once and never changes, is a much better example of something that would be a server component, because it doesn't need to be interactive. You don't need to ship any of that code for how to render your footer to your client; it just lives in the DOM, it's rendered once. It will maybe save you a kilobyte, so we're not talking about huge amounts of savings.
But maybe that's how you think about it: are you consuming the data in an interactive component, or is the component itself the data that needs to be rendered from the server?
Cody Bontecou (56:20.645)
Yeah, that actually makes perfect sense.
Steve (56:22.566)
Well, OK, so a common use case I run into...
Steve (56:34.75)
some type of record, doesn't matter, a few thousand of them, whatever. And in JavaScript, when we went to this all-JavaScript-on-the-front-end approach, one of the common ways to handle that was to dump everything into your JavaScript component, and then your pagination would just sort through it. It could be really fast because everything's already right there. But if you get into huge record sets, you run into page load times
slowing things down, because you're dealing with so much. So generally what you want to do is return the first 20 or 25, however long your list is, right? Then you hit a pagination link down at the bottom of your table, and it's going to go hit the server and get the data that you need, so that you're not loading 2,000 records into a page and slowing things down. Now, you said just now that what you don't want to do is hit your Nitro route, go to the server,
make a request with the pagination values in there, give me 26 through 50, for instance, and then come back, because that's going to be ugly. So is a case like that something where Nuxt server components are going to be a valid tool? Or are you saying that you would just want to make your fetch request straight from the front end, using a fetch request to an API?
Steve (57:55.649)
Does that make sense what I'm saying?
Daniel Roe (57:59.546)
So, well, I mean, maybe try me, but I would say that for pagination, it feels like it would make sense to, yeah, make a fetch request. And you can do that in advance as well. You can have logic there that preloads the data for the next page before you click it.
So Nitro, the Nuxt server, can add quite a lot of interesting caching logic, and can even statically render server routes where required. So I would absolutely hit it. You could either have that backend-for-frontend pattern and use Nitro to create a server route which then returns information from your database,
or connects to a different API. So you could do that, or you could have your frontend directly connect to the API. Either way. But if you're asking me that example of paginated data, even if you have an external API, I would probably render that in a client component that hits the Nitro endpoint or hits your backend.
And if the backend needs authentication, then I would take your Nitro route, and your Nitro route hits your backend with the authentication but caches the data. So it can be extremely performant, more performant than if you were just hitting your backend directly. But I'm not sure I would render that in a server component. But you know, it will always be different; you'll know your constraints and you'll know the situation
you're in. And it might be totally fine to do it with pagination. I'm just imagining I would want there to be more interactivity going on. And at some point, the savings that you make...
Daniel Roe (01:00:01.504)
it's not necessarily worth it.
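The caching Daniel describes can be sketched with Nitro's cached handler variant. This is a hedged sketch: the route path is hypothetical and fetchFromBackend is a placeholder for your authenticated upstream call.

```typescript
// server/api/products.get.ts (sketch): a paginated backend-for-frontend route.
// defineCachedEventHandler is Nitro's cached variant of defineEventHandler.
export default defineCachedEventHandler(async (event) => {
  const query = getQuery(event)
  const page = Number(query.page ?? 1)
  const perPage = Number(query.perPage ?? 25)

  // fetchFromBackend is hypothetical: your authenticated upstream call
  return fetchFromBackend({
    offset: (page - 1) * perPage,
    limit: perPage,
  })
}, { maxAge: 60 }) // serve the cached result for up to a minute
```

The client component stays interactive and just fetches /api/products?page=2 as you paginate, while the route absorbs the authentication and caching.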
Steve (01:00:05.246)
All right, so now with server components: one of the features of Nuxt, and I've seen people choose Nuxt over plain Vue because of this, is the file-based routing, right? You just set up your directory structure for your pages, and that builds your routes for you, and you can have dynamic parameters and so on and so forth. So from that standpoint, is a .server.vue file treated the same as a .vue file? You just put it in there, and if you go to that page, it's going to hit the server for it, and if you put in a .vue component, it's just going to render that on the front end?
Daniel Roe (01:00:37.481)
From the developer experience point of view, you shouldn't have to be aware of differences in how these are rendered. It should just happen behind the scenes.
Daniel Roe (01:00:49.45)
I think we do have some... we just merged in 3.7 a feature where you have lazy server components, where you can actually provide a loading state. So if you go to a page on the client side which has a server component, then while the component is being fetched, if it isn't already there (so it's not a generated site; a generated site would prefetch it), it can show a loading state and then have the server component
put in. But apart from that, I think you're not having to handle the fact that it's a server component or otherwise worry about it. You're just using it. It will even be auto-imported into your code in the same way as any other component, and type-hinted with prop support in the same kind of way. I guess you probably do have to be careful not to pass non-serializable props.
So you probably shouldn't be passing a function to it, because we're not going to be able to send that to the server and back again. But that would be the only real constraint.
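The serialization constraint is easy to see with plain JSON, which is essentially what the props have to survive on their round trip to the server:

```typescript
// Props for a server component need to survive JSON serialization.
const ok = { title: 'Hello', tags: ['vue', 'nuxt'], count: 3 }
const bad = { format: (d: Date) => d.toISOString() }

// Plain data round-trips cleanly:
const roundTripped = JSON.parse(JSON.stringify(ok))

// Functions are silently dropped by JSON.stringify:
const serializedBad = JSON.stringify(bad) // '{}'
```

So strings, numbers, arrays, and plain objects are fine as props; functions, class instances, and the like are not.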
Steve (01:01:54.014)
So you were talking about the page loading. Is that basically the Vue 3 feature, what they call Suspense, where you can say, okay, I'm waiting for the page to load, so here's what you want to show, and then once it's loaded, you show the data? Or is that something custom?
Daniel Roe (01:02:04.934)
Exactly. So we do, yeah, exactly, we use Suspense extensively in Nuxt, basically to allow asynchronous tasks to complete before displaying the next state in the browser. This is used for fetching data between pages, or any other asynchronous task you might want to complete. And it means we can basically block
navigation until that finishes. So you click a link, all of the data fetching that needs to happen happens, and other things, and then the new page comes in. And the thing that makes that possible is Suspense under the hood. Yeah, that's right. Exactly.
Steve (01:02:48.586)
Okay, and then my last question on server components was in regards to how it works. And you sort of touched on this earlier when you talked about React server components. I had mentioned an incorrect analogy to you: the way you were describing how you could embed server components in your site from somewhere else, my thought was, hey, that sounds sort of like an iframe. And you said, no, that's not true, because of what we're actually sending, right?
So can you give that explanation as to sending an AST versus actual rendered HTML, or how server components handle that?
Daniel Roe (01:03:25.59)
So, yeah, server components are basically JSON. The actual network request that's made is a JSON request. It has some HTML, it has some head metadata; it will have some styles, and it might have some other scripts or CSS that it's going to put in the head for you.
And it has some state, some application state, which will also be serialized. So that's what the actual response contains. And this is then going to be applied into your DOM. So the actual component is going to be inline with the rest of your code, and it's going to be rendered by Vue in the same kind of way. It's worth saying
that we also have merged another feature which is the remote source capability for server components. It's also experimental, you have to opt into it, but it means that you'll be able to, for example, have a fully static site and fully statically rendered, it has some server components. Maybe you have a preview mode and you are making some changes in a branch on your repo.
Daniel Roe (01:04:39.786)
and you have a little server up and running; maybe you can point your live site to your preview branch and actually get your preview branch to render some components on your live site. So you could do something like that with remote sources. Obviously it's up to you what you build, but the capability is there to say, for any individual server component, I'm going to point this component at a different source.
So it's going to be this component, but it's going to have a different source: not the website it's hosted on, but some other site. Be careful with that; you need to be in control of this, because obviously that's going to become part of your DOM. But yes, it's open for people to start building on.
Steve (01:05:15.82)
I was gonna say.
Cody Bontecou (01:05:34.257)
But you don't have to be in control of the source, right? You just probably should be. Like, you could theoretically hook into somebody else's server component and render that on your site as well.
Daniel Roe (01:05:42.742)
You put...
Daniel Roe (01:05:51.306)
You could. The things that would prevent you from doing that would be things like CORS headers. So they would have to explicitly add the right headers to say this site is allowed to render a server component. But yes, you could do that; that would be possible. So there would be some interesting possibilities. I look forward to seeing people rendering my articles from my website via a server component. But...
Cody Bontecou (01:06:19.673)
Yeah, or, I mean, I imagine something like Stripe: just render a Stripe component and you have everything hooked in. You send the component an API key or something of that nature and you're good to go. Obviously Stripe might be a little bit more complicated because of the products and everything, but...
I don't know, I just think there's a lot in there. A comment component, headless comments that you somehow manage on somebody else's database.
Steve (01:06:52.074)
You could reinvent Disqus with Nuxt server components, right?
Daniel Roe (01:06:56.022)
Well, exactly, and do it with a Vue-native thing. Because the solutions at the moment are sort of a JavaScript thing that runs and injects some divs into your page, or an iframe or something, and often it's not deeply connected with your page. It's not styled, it's not...
Cody Bontecou (01:07:00.613)
I'm gonna go.
Daniel Roe (01:07:20.306)
it is an imposition on it in some ways. Whereas with remote sources and server components, you're getting data and you're getting some rendered HTML, but it's actually quite deeply integrated with your Vue rendering process. And it can be interactive and share state with the rest of your app. So you can have, like... because...
Cody Bontecou (01:07:36.505)
Daniel Roe (01:07:42.782)
It's on the same level. It's not a different app; it's the same app, but individual components are being rendered somewhere else. They can be rendered using data from your app, and they can provide data to the rest of your app through state, which is built into server components and how they work. So I think it's definitely a very interesting field. But I mean, it's marked experimental, so people can experiment and do cool things with it.
Cody Bontecou (01:08:12.089)
So when you mention state, does that include something like a Pinia store?
Daniel Roe (01:08:19.422)
So Nuxt has these sort of primitives, which are state and data, and some others too. The state is accessed with the useState composable, and data is accessed with the useAsyncData composable, or useFetch, which is another kind of data fetching. And basically that can be
hooked into within your Pinia store. So you can create a state and then set it within your Pinia store. But Pinia itself... I'm not actually sure whether Pinia stores its state in that, but it could, it could.
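The useState primitive Daniel refers to looks roughly like this in use. This is a sketch that runs inside Nuxt, not standalone; the key and initializer are illustrative:

```typescript
// In a Nuxt component or composable (sketch). State created during SSR is
// serialized into the payload and picked up by the client, so both sides see
// the same value.
const cartCount = useState<number>('cart-count', () => 0)
cartCount.value++ // any component reading useState('cart-count') sees the update
```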
Steve (01:08:55.334)
So we're talking about these embedded server components; again, my head's in knots. Let's say I'm the source: I have a server component and I want to be able to share it with other sites. Does just the fact that it exists make it shareable? You said there's a feature flag. Even once it's out from behind a feature flag, is there a way where you'd have to say, okay, I want to be able to share this or not? Or is that all controlled from your server
with your CORS headers and things like that? Is there a way to say, I want to share this one but not this one, for instance?
Daniel Roe (01:09:31.971)
Right now, we offer no easy way for you to say, my server components are shareable. By default, they won't be. You would have to write some code in your server routes to add the CORS headers for the server components, and you might have to do some more stuff as well. So this is at the level of... in terms of remote sources: server components are experimental, but they can be used by people in production now,
as long as they're willing to pay attention to the API. We're not promising to adhere to semver with experimental features. But remote sources are an experimental part of an experimental feature. So at this point, it's there for people to experiment with. It's not there as a, you can apply a flag and now your site
Steve (01:10:11.488)
Daniel Roe (01:10:27.916)
can be rendered remotely. Like, we're not at that level yet.
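(A rough sketch of the CORS setup Daniel is alluding to, as Nitro server middleware. The island endpoint path and the wide-open `Access-Control-Allow-Origin` value are assumptions for illustration; a real deployment would lock the origin down:)

```typescript
// server/middleware/island-cors.ts -- hypothetical sketch
export default defineEventHandler((event) => {
  // Only open up the island-rendering endpoint, not the whole site
  if (event.path?.startsWith('/__nuxt_island')) {
    setResponseHeaders(event, {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET',
    })
  }
})
```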
Cody Bontecou (01:10:33.23)
If you were to...
deploy a component that somebody else could then interact with and pull into their other site, how would you do that? How would you approach a deployed server component that's usable to the public?
Daniel Roe (01:10:51.038)
I love the idea. I think it would be fun. I could create a little API and people could start pulling it in. I think the main thing is CORS headers. And then probably I would want to... I would probably add some
Cody Bontecou (01:11:03.823)
Daniel Roe (01:11:09.822)
some middleware on my end, maybe a rate limit. I'd definitely cache them so that I'm not freshly rendering a server component every time. Server components can be cached almost infinitely because they're hashed based on the props they receive. So as long as I'm not stateful, and I'm probably not going to be stateful... I mean, you could be, you could be looking at headers and authentication details, but
if I were doing it, I would probably say, okay, I'm gonna choose to render non-stateful ones. So I'm going to cache them forever, as far as I can. And of course that doesn't tell people where to connect. They'd basically just have to pass the URL of my site as the source, and that would be it.
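(Daniel's caching-by-props point can be sketched as a plain function. The hashing scheme below is illustrative only, not Nuxt's actual implementation; it just shows why a non-stateful component with identical props can be served from cache indefinitely:)

```typescript
// Illustrative only: derive a stable cache key from a component's props.
// Keys are sorted so { a: 1, b: 2 } and { b: 2, a: 1 } produce the same key.
function cacheKey(name: string, props: Record<string, unknown>): string {
  const sorted = Object.keys(props)
    .sort()
    .map((k) => `${k}=${JSON.stringify(props[k])}`)
    .join('&')
  return `${name}?${sorted}`
}

const cache = new Map<string, string>()

// Render once per unique (name, props) pair; reuse the cached HTML after that
function renderCached(
  name: string,
  props: Record<string, unknown>,
  render: () => string
): string {
  const key = cacheKey(name, props)
  if (!cache.has(key)) cache.set(key, render())
  return cache.get(key)!
}
```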
Cody Bontecou (01:11:58.585)
So just like in your Vue template, you would write like a component, like a custom... there's just like a component, right? So you do like component, source, this foreign URL of some sort.
Daniel Roe (01:12:14.758)
So the way that it works is that, you want to do this, don't you? You're going to get off this call and...
Cody Bontecou (01:12:20.469)
Yeah, of course. Well, yeah, this to me sounds fascinating. You know, I see it could be limitless.
Steve (01:12:22.954)
How could you tell?
Daniel Roe (01:12:34.05)
So basically the source is a prop of... I'm just gonna pull the code up just to make sure I don't mislead you. But basically you can provide source as a prop to Nuxt Island. Nuxt Island is what's used under the hood of server components to render them. So Nuxt Island will basically perform a network request to an underscore Nuxt Island endpoint.
in your server, and it will handle sticking the HTML in. The server component wraps that with an API which is just like your normal component, and it will handle sort of taking all the props that are passed to it, bundling them up and passing them to Nuxt Island.
So currently the only way that you can do this is not with a server component, but with a... but with...
Daniel Roe (01:13:42.826)
just confirm that, but by using Nuxt Island directly. So basically what you would do is, and this is something we could very easily change. It would be a new feature that would basically allow you to globally set like a source for server components or something like that. But at the moment, you would have to do it directly with Nuxt Island. And Nuxt Island takes a couple of things. It takes the name of the component, it takes whether or not it's lazy.
It takes an object called props, which has all the props of the component. It takes a context object, which includes things like the state of your app and other things; you don't have to pass it if it's not necessary. And then it takes this source, which is the URL of the thing that's going to be rendering it. And by default, it is just the current server. So all you would need to do is use this Nuxt Island component. You'd give it the name of the component, any props, and the source. That would be enough,
and you would be able to render it. But that's not exactly like a normal server component. It's just this extra step because you're using this thing called Nuxt Island, which is the thing that's used under the hood by a server component. But honestly, if you're interested in...
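(Based on Daniel's description, usage might look roughly like this. The component name, props, and remote URL are placeholders, and since this is an experimental API it may well change:)

```vue
<!-- Hypothetical sketch: render a server component hosted on a remote Nuxt site -->
<template>
  <NuxtIsland
    name="WeatherCard"
    :props="{ city: 'Honolulu' }"
    source="https://example.com"
  />
</template>
```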
Cody Bontecou (01:14:54.749)
And is that, sorry, I just see you have a Nuxt Island component in the docs. Is that the same Nuxt Island you're referencing? Okay.
Daniel Roe (01:15:02.242)
That's it.
Daniel Roe (01:15:06.134)
That's it. DM me. Like if you've got any questions, like let's make this happen.
Cody Bontecou (01:15:11.205)
Sure, yeah, I mean, I'm glad, because I read your article when you posted it the other day, and I saw this was experimental and there was a PR, but now that it's merged, it's definitely time to play.
Daniel Roe (01:15:23.246)
It's time to play. That's true.
Steve (01:15:25.366)
Alright, so we're going a little long here, so we'll start to wrap this up. Before we head to picks, Daniel, anything else you wanted to touch on real quick with regards to server components or the 3.7 release?
Daniel Roe (01:15:41.134)
Do you know? I think we've dived into quite a lot of stuff, so nothing more for me.
Steve (01:15:44.274)
have. Right. Alrighty. So with that, we'll move to picks. Picks are the part of the show where we get to talk about anything we want to talk about. Could be tech related, non-tech related, you name it. Cody, you got anything to talk to us about today that has piqued your interest?
Cody Bontecou (01:16:02.885)
Oh yeah, I've been going on a deep dive into the programming language Elixir this last week, which I just highly recommend people check out. It's a functional programming language built on top of Erlang. And honestly, my favorite aspect is it has something called pattern matching, an incredible pattern matching API that is really neat. You don't write conditionals. You never write an if statement. You just redefine
the function with kind of that conditional logic within the function parameters, and the language is smart enough to just decide which of the functions to call. So for example, Fibonacci would just be three separate functions. One takes the parameter of zero, one takes parameter one, and then one takes like, what is it, n minus one, something of that nature. But it was just amazing to see
this compared to the JavaScript way of switch statements or if statements. So yeah, it's fun, highly recommend it. Supposedly it's the go-to in terms of highly scalable real-time systems. It's what WhatsApp and Facebook Messenger are built in. And so that was just kind of a rabbit hole I've been in, still in, actually.
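(For comparison, the three-case Fibonacci Cody describes looks like this in TypeScript, where all the cases live in one function body; in Elixir each early return would instead be its own function clause, and the runtime picks whichever clause matches the argument:)

```typescript
// Each early return corresponds to one Elixir function clause:
// fib(0), fib(1), and fib(n) for larger n.
function fib(n: number): number {
  if (n === 0) return 0
  if (n === 1) return 1
  return fib(n - 1) + fib(n - 2)
}
```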
Steve (01:17:27.238)
Alrighty, any other picks, or is that it? Okay, alright, I will move on to me. I don't have anything other than the dad jokes of the week, which are the highlight of any podcast episode that I do. So first off, I was in an interview one time and the interviewer said, I wanna ask you a question and your answer has to be quick. I said, okay.
Cody Bontecou (01:17:30.953)
Oh, that's it. Yep.
Steve (01:17:55.818)
She said, what's 12 plus 37? I said, quick. All right.
So, side note, I remember reading this excerpt from a book long time ago called, was it Disorder in the Court? I think, and it was stories from a gal who had been a court reporter for years and years and just different interactions. And there was one case she talked about where the attorney tells the witness that's on the stand, okay, now all of your answers need to be verbal. In other words, you can't move your hands. And so every question he asked, the guy said, verbal, verbal.
You know, I've noticed that when I drink alcohol, everyone says I'm an alcoholic. But when I drink Fanta, nobody calls me fantastic.
Cody Bontecou (01:18:33.525)
Oh no.
Steve (01:18:47.288)
And then finally, here's a question. What do you call soft tissue between a shark's teeth? A slow swimmer.
Cody Bontecou (01:18:58.414)
Steve (01:18:59.326)
Right? Right. Thank you. So those are the dad jokes of the week. Daniel, what do you have for us for picks?
Daniel Roe (01:19:07.61)
Okay, a couple. So one Nuxt-related. Check out Nuxtr, which is a VS Code extension for Nuxt. If you haven't come across it, it's really, really cool. It's built by a member of the Nuxt community, and it's super nice. I would check it out. The project is open source now, and you can see how the
VS Code extension was created. There's lots and lots of things there, from snippets to little rich bits inside your Nuxt config. You'll be able to click things to create a plugin or install a module. Anyway, check it out. It's pretty neat. On a different topic, I've been diving into a lot about TypeScript performance and constructing elaborate and complex and magical types as a library author
that are performant for end users. And actually I have this fun project, which I am working on, which I am not yet ready to announce. So I won't mention that, but I do want to highlight the fact that Microsoft has got some incredible tools for analyzing performance in TypeScript. And one of them is called TypeScript Analyze Trace. So you can analyze your TypeScript, if you're a library author, I see you glazing over, Steve.
If you're a library author, you can run traces on your TypeScript, on the TypeScript compiler, and then run this TypeScript analyze trace tool on it. And it will basically try and sift through all of the chaff of the trace. So you don't have to look through the flame graph and just tell you, check this file, check this file. This thing is installed in your project more than once, maybe it's causing a problem. And that is a very interesting thing. If you ever encountered something where like, it feels like it's a bit slow.
Steve (01:20:30.099)
Daniel Roe (01:20:58.358)
Maybe you're a bit worried there's an issue with your types. Check out TypeScript Analyze Trace. It's a pretty cool tool.
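(The workflow Daniel describes, roughly. The compiler flag and package name below are as I understand them; check the tool's README for the current invocation:)

```shell
# Generate a trace from the TypeScript compiler, then let
# analyze-trace sift through it and point at likely hot spots
npx tsc --generateTrace trace-dir
npx @typescript/analyze-trace trace-dir
```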
Steve (01:21:05.47)
Yeah, I haven't quite dived headfirst into the TypeScript world. I know what it is and I have to interact with it once in a while and I get the concept, but just I've never fully dived into utilizing it in everything I do. So, then maybe that's why I looked glazed over.
Daniel Roe (01:21:24.242)
Well, no, maybe it's just that there's still the ringing in my ears from the dad jokes, you know. The applause. And, okay, here's another pick. And this is a book called A Deadly Education, and I actually haven't read it yet. So I'm recommending this just based on the fact that someone else recommended it to me strongly, and the first line of the book. This is the first line of the book: I decided that Orion needed to die
Steve (01:21:30.392)
From the applause? The applause? Is that what you meant? Yes.
Daniel Roe (01:21:53.674)
after the second time he saved my life. That's very, very strong. So I've started reading the book and it's already extremely promising. It only gets better from the first line. So there you go. Totally, totally random third pick.
Steve (01:22:09.95)
Sounds good. Alrighty, oh, and then real quick, we'll put links in the show notes, but for the Nuxtr VS Code extension, it looks like it was nuxtrs, plural, dot nuxt... dot org... dot com. I'm already forgetting. But.
Daniel Roe (01:22:26.653)
Cody Bontecou (01:22:26.909)
Oh, sorry, go ahead, Daniel.
Daniel Roe (01:22:29.898)
That is an entirely different site, which I should have mentioned as one of my picks actually, because Nuxtr is the extension. You can go to nuxters.nuxt.com and you can...
Steve (01:22:33.915)
Oh, sorry, I guess I was wrong.
Daniel Roe (01:22:49.394)
authorize it with GitHub and see your score as a Nuxter. So how many pull requests, how many helpful issues and helpful comments you've made in the Nuxt ecosystem. And if you are a Nuxter, if you've got a pull request or a helpful issue or a helpful comment, then you can unlock a special badge on the Nuxt Discord by clicking another button. It's a super fun project. So yeah, check that out.
Steve (01:22:55.001)
Steve (01:23:19.37)
Alrighty, well we are way long here, so I'd like to say thank you to Daniel for coming on again. He's been on multiple times, and it's good to have him let us know what's going on in the Nuxt world and all the juicy little goodies that are available with every release. So thank you for coming, Daniel.
Daniel Roe (01:23:37.334)
It's always a pleasure, Steve.
Steve (01:23:39.378)
So if people want to follow you and read your wisdom and give you money and so on, where are the best places to do that?
Daniel Roe (01:23:48.086)
Well, I don't really know where to start, but you can find my website at roe.dev, and that has links to most things. I'm on Twitter as well, danielcroe, and most other places it's just Daniel Roe, and if you have any questions, if anyone wants to chat, I'm really happy to get a DM. That's always a nice thing in my day. So yeah, be in contact.
Cody Bontecou (01:23:51.151)
Steve (01:24:14.022)
And there's also the Nuxt Discord. That's where I usually ping Daniel, so I know he hangs out in there as well if you've got questions or just want to help other people.
Daniel Roe (01:24:26.219)
That's a great place to come.
Steve (01:24:27.482)
Right. Alrighty, and thank you, Cody, for joining me. Always good to have a partner in crime on here.
Cody Bontecou (01:24:33.797)
Yes sir, happy to be here. This was a great talk.
Steve (01:24:37.594)
All righty, with that we'll wrap it up. So thanks everybody for joining us on Views on View, and we'll talk at you next time.