Gal Schlezinger and Edge Functions - JSJ 536
Today we talk with Gal Schlezinger, who has been working on developer tooling for the last decade. Previously at Wix, and now at Vercel, he created fnm, a fast open-source Node.js version manager written in Rust. We talk about Vercel's Edge Functions, which let users implement routing strategies in user code without taking a performance hit.
Special Guests:
Gal Schlezinger
Show Notes
Today we talk with Gal Schlezinger, who has been working on developer tooling for the last decade. Previously at Wix, and now at Vercel, he created fnm, a fast open-source Node.js version manager written in Rust. We talk about Vercel's Edge Functions, which let users implement routing strategies in user code without taking a performance hit.
On YouTube
Sponsors
Links
- Develop. Preview. Ship. For the best frontend teams - Vercel
- Edge Functions - Vercel
- Bun - fast JavaScript & CSS bundler
- fnm
- solving puzzles using TypeScript types
- Gal Schlezinger
- Twitter: @galstar
Picks
- AJ - None Dare Call It Conspiracy
- AJ - WHO KILLED BITCOIN? - Documentary
- Dan - How To Use Google CrUX To Analyze And Compare The Performance Of JS Frameworks
- Dan - A deep dive into optimizing LCP
- Dan - War in Ukraine
- Gal - Raycast
- Gal - Working with smarter people
- Steve - Podcast from syntax.fm
- Steve - Dad Jokes
Transcript
STEVE_EDWARDS: Hello everybody and welcome to yet another exciting episode of JavaScript Jabber. I am Steve Edwards, the host with the face for radio and the voice for being a mime, but I am still your host. With me today on our panel, we have AJ O'Neill.
AJ_O’NEILL: Yo, yo, yo. Coming at you live from the dungeon. Mike is a little hot, Steve.
STEVE_EDWARDS: Little hot still. So the dungeon being your new digs and your new home right there in lovely Utah. That is correct. And also we have Dan Shappir.
DAN_SHAPPIR: Hey, coming at you live from lovely Tel Aviv.
STEVE_EDWARDS: Awesome. And our very special guest today, all of our guests are very special. Today's very special guest as always is Gal Slezhinger. I think I said that right. Coming also from near Tel Aviv. How you doing Gal?
GAL_SCHLEZINGER: Yeah, I'm good. It was almost correct. It's Schlezinger, but that's fine. Schlezinger,
STEVE_EDWARDS: dang it, I thought I had it. Damn, messed it up. The pressure of being on the recording does that to me. Okay, so before I get started, Gal, why don't you give us a little background on yourself, who you are, why you're famous, what you do, who you work for, anything else you think's important.
GAL_SCHLEZINGER: Yeah, okay, so my name is Gal Schlezinger and I've been working on developer tooling for the last decade, I would say. The most impactful open source project I have released is called fnm, which is a very fast Node.js version manager, written in Rust, actually. And for the last couple of months, I've been working at Vercel, which is very exciting for me. And yeah, so specifically I'm working on edge functions, which we might talk about. So yeah, that's it.
DAN_SHAPPIR: I also want to mention that prior to Gal working at Vercel, and prior to me joining Next Insurance, both of us worked together at Wix, which is where I met Gal. And I have to say that Gal was one of the smarter people that I got to work with there, which says a lot.
GAL_SCHLEZINGER: Thank you.
Hey folks, this is Charles Max Wood from Top End Devs. And lately I've been working on actually building out Top End Devs. If you're interested, you can go to topendevs.com slash podcast, and you can actually hear a little bit more about my story, about why I'm doing what I'm doing with Top End Devs, why I changed it from devchat.tv to Top End Devs but what I really want to get into is that I have decided that I'm going to build the platform that I always wished I had with devchat.tv and I renamed it to Top End Devs because I want to give you the resources that are going to help you to build the career that you want. Right? So whether you want to be an influencer in tech, whether you want to go and just max out your salary and then go live a lifestyle with your family, your friends, or just traveling the world or whatever. I want to give you the resources that are going to help you do that. We're going to have career and leadership resources in there. And we're going to be giving you content on a regular basis to help you level up and max out your career. So go check it out at topendevs.com. If you sign up before my birthday, that's December 14th. If you sign up before my birthday, you can get 50% off the lifetime of your subscription. Once again, that's topendevs.com.
STEVE_EDWARDS: Okay, so as Gal mentioned, he works on edge functions at Vercel. So as a starting topic, I wonder, Gal, if you could tell us a little bit about Vercel itself, what it is, what it does, and then we can get into the edge functionality. In my experience, I've always heard of Vercel in conjunction with Next.js, a React app. But looking at the docs, you can obviously use it with many other types of frameworks, whether it's Nuxt, which is my world, Svelte, and so on. So can you give us a little background on Vercel and what it does and why it's so excellent?
GAL_SCHLEZINGER: Yeah, sure. So first of all, I would say that I'm not a dev rel, so everything I say is just as a Vercel user and now as an employee; there are probably better ways to describe Vercel. So Vercel is a deployment platform, and we also provide many open source tools, like Next.js, but we also support a variety of front-end frameworks. Let's see how I can describe it. Essentially, we're trying to make the web faster by using many primitives from cloud providers and making sure that developers can use them easily, because if some of you have tried to use Lambdas, it's not as easy to use them as it is in Vercel, for instance. And if you want to mix and match the best tools to deploy your app right now, you will have a lot of boilerplate you will need to play with. And Vercel provides, in my opinion, the best developer experience for developers to deploy their app and make sure that it's fast, that it's running globally, and just better. And yeah, I'm not sure how to describe it otherwise,
DAN_SHAPPIR: I think in this context, and we spoke about it in a previous episode with Tejas, it's this democratization of the web. Previously, if you wanted to have a web application that had good performance and high availability and robustness and uptime and security and whatnot, you really needed to have experts on your team to ensure that you used all the infrastructure appropriately and in an optimal way. And I think that tools like Vercel, and also others like Netlify, are striving to make it easier for everybody to get all these features and capabilities, and to just have an optimal experience regardless of their level of expertise. Would you agree with that, Gal?
GAL_SCHLEZINGER: Yeah, so I was a very big Heroku fan, and when I tried Vercel for the first time, Vercel has a cool feature called preview deployments. So on every PR, you get a fully working deployment that works just like production, but for your own PR. So once I tried Vercel for the first time, I couldn't go back to using Heroku. Even though I really like Ruby on Rails, I just started writing everything in JavaScript so it would work on Vercel. And from what I see, the fact that Vercel provides these primitives and these features, and in recent years we see a lot of providers doing that as well, like preview deployments, you can see that the web is moving forward very fast.
DAN_SHAPPIR: So can you elaborate on some of those features and capabilities that you get with Vercel? I think you mentioned deployments from PRs. I think you get CDN caching. What else?
GAL_SCHLEZINGER: Yeah. So we get very good caching, and the preview deployments. I think that preview deployments are the most amazing thing that happened to the web. If any of you have tried to develop native apps, whether it's on Windows, on Mac, or on iPhone, you can see how bad it is, in my opinion, compared to developing something on the web with web technologies. And I think that Vercel provides the same thing for deployment. Preview deployments are like hot module reloading: once you try that, you must have it. And the caching is obviously very good. Vercel also has something called ISR, incremental static regeneration, which is basically having server-side rendered content that gets cached on the CDN. And while there are ways to do that using cache-control headers, ISR is more like storing the output, like the HTML or JSON data, on Redis, which is replicated across the globe. So it's like a smarter cache.
DAN_SHAPPIR: It's also smarter, I guess, in the sense that you don't need to wait for a cache duration, but you can actively flush it and replace it with new content, correct?
GAL_SCHLEZINGER: Yeah. So on very recent versions of Next.js, if I recall correctly, you can also invalidate the cache, or revalidate the cache, manually. So it's not just revalidating based on time. You can also have a webhook that resets or re-triggers the build of a specific page.
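The webhook Gal describes might be sketched as a Next.js API route. This is a hedged sketch, not Vercel's official example: `res.revalidate` is the Next.js on-demand revalidation helper (older versions named it `unstable_revalidate`), and the `REVALIDATE_SECRET` variable and `/posts/` path scheme are made up for illustration.

```javascript
// Hypothetical Next.js API route (e.g. pages/api/revalidate.js) that a CMS
// webhook could call to rebuild a single page on demand.
// `res.revalidate` is provided by Next.js; the secret and path are made up.
async function handler(req, res) {
  // Reject callers that don't know the shared secret.
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // Rebuild just this one page; the CDN copy is replaced with fresh HTML.
    await res.revalidate(`/posts/${req.query.slug}`);
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).json({ message: 'Error revalidating' });
  }
}
```

In a real project this function would be the file's default export, and Next.js would supply the `req`/`res` objects.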
STEVE_EDWARDS: So is this, when you're talking about this incremental rebuild, is this in the context of an app with an active server running that's being queried for your content, or does that also include a statically generated site, you know, where you built all your HTML ahead of time and deployed it, and now you want to be able to update specific pages without having to rebuild the entire site?
GAL_SCHLEZINGER: So let's take Next.js, for example. You can build multiple pages. There is a function called getStaticPaths, if I recall correctly, and you can tell it all the paths that you want to build on your CI. But then, if there's a new path, you can just build that on the fly, and you can make sure that the webhook will build that for you. So users won't have to wait; they will just see a version cached in the CDN, and it will all work fast.
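The flow Gal describes might look roughly like this. The slug list is a made-up stand-in for whatever CMS or database a real app would query, and a real `getStaticPaths` is usually `async` and exported from the page module; it is a plain function here so the idea stands on its own.

```javascript
// Hypothetical data source; a real app would fetch these from a CMS or DB.
const KNOWN_SLUGS = ['hello-world', 'edge-functions'];

// Sketch of a Next.js getStaticPaths: pre-render the known paths at build
// time on CI, and let unknown paths be rendered on first request
// ("fallback: 'blocking'"), after which they are cached on the CDN too.
function getStaticPaths() {
  return {
    paths: KNOWN_SLUGS.map((slug) => ({ params: { slug } })),
    fallback: 'blocking', // new paths get built on the fly, not 404ed
  };
}
```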
STEVE_EDWARDS: So I'm still a little confused, Gal. So we're saying that all of this, I understand the caching and the replacing of specific items in a cache. They say the two hardest things in computer science are naming things, cache invalidation, and off-by-one errors. So caching is definitely in there. But again, we're still talking in the context of an app that isn't statically generated. It's something that's got a server on the back end, and that server is being queried whenever the cache is invalid or the cache can't fulfill the request. Is that correct?
GAL_SCHLEZINGER: So unlike Heroku and stuff like that, Vercel works on serverless deployments. So instead of having a server that's waiting for requests, it actually uses serverless functions like AWS Lambda, or stuff like AWS Lambda. So you don't have a server that's listening. You scale to zero if you don't use your app.
DAN_SHAPPIR: So basically when a request comes in, a load balancer picks it up and then either assigns it to a waiting function or spins one up and then assigns the request to it, something like that?
GAL_SCHLEZINGER: Yeah, something like that. So you can imagine that we have a deployment for every commit that you push to your repo, and you can actually go back one month and just see your website as it was one month ago. So we wouldn't run thousands of servers waiting. It's not good for the environment and it makes less sense. So once you query it, if it's warm, you will just see it live. And if it's not, we will start the serverless function for you. And I think this is the most powerful thing in Vercel. If you want to do that on your own (at Wix we tried to do stuff like that), it's very hard to nail it. And Vercel does it for its customers. I was a very happy customer before I joined Vercel. This is why I joined.
DAN_SHAPPIR: And in terms of coding, I mean, I'm familiar with writing code on Next.js where you basically just assume that it's running as a node server that's just up and running and accepting incoming requests. Now, obviously, Next.js does the routing for you, so I guess it might not be something you notice, but... Do you need to change anything in the way that you use Next.js when you work serverless like that?
GAL_SCHLEZINGER: Not really. You don't need to change anything, because we do all the magic after you build. If you take a look, there are frameworks like Nuxt or SvelteKit, or even SolidStart for Solid.js, and they all have their own builders, or a way to build stuff for Vercel, to make sure that they leverage all the primitives that Vercel provides. And you don't need to change anything. Next.js just works. If you deploy Next.js to Vercel, it's a really good match. It works the same way as when you built it on your own machine; you deploy it to production and it chooses all the best practices automatically. You don't need to think about anything, whether you're going to split it into a new function or which caching strategy you will use. It will just work with Next.js and its own building blocks.
DAN_SHAPPIR: And I'm guessing that all the static assets like images or the JavaScript files that need to be downloaded to the browser are just pushed automatically into a CDN.
GAL_SCHLEZINGER: Yeah. And if there are things that need post-processing, there's an image component in Next.js, and I'm not doing front end that much, but I know a bit about the features of Next.js because my blog is built with Next.js. So there's an image component which, when it's deployed to Vercel, actually uses the image optimizations of Vercel. So this is something that happens after the fact, not if you run Next.js locally or self-host it
DAN_SHAPPIR: I was familiar with the image optimizations that exist in Wix, like automatically converting to modern image formats and automatically resizing images so that for best fit. Are we talking about similar types of optimizations in this case?
GAL_SCHLEZINGER: Yeah, yeah. Works the same way. We have a service that does that and every Next.js deployment that goes to production just uses that if it uses the image component and this way we lower the LCP, if I remember correctly.
DAN_SHAPPIR: Yeah, although I have to say, and interestingly I'll touch on it in my picks, it was confirmed by a talk given by Phil Walton at the recent Google I/O that in most cases, LCP is less about the size of the image and more about when the download starts, relative to the flow of the page load, the waterfall of the page load. But anyway, obviously it's a good thing when you can reduce the image size. I've seen really bad scenarios where images were ridiculously, ridiculously huge and adversely impacted the overall load time, and LCP especially. And the topic that we're going to focus on today, unless you have more to ask in this context, Steve, is edge functions. Can we move on to that?
STEVE_EDWARDS: Yeah, go for it.
DAN_SHAPPIR: So, yeah. So I think my first question really is: what are edge functions, or what is edge computing, and how do edge functions differ from just, you know, plain old serverless? You know, it's funny to think about serverless as something that's kind of old hat, but I guess that in web years it is. So how does edge functionality differ? What's new about it?
GAL_SCHLEZINGER: Yeah. So when I talk about edge functions, I will talk about Vercel Edge Functions, like capital-E Edge Functions, and not edge functions in general. Maybe other companies have their own version of edge functions that what I'm going to say does not apply to. But Edge Functions are essentially pieces of JavaScript or Wasm code that are deployed next to your CDN and enable what we call dynamic at the speed of static. So if you want something like A/B testing or feature flags or authorization for some routes, and you want a piece of your routing to happen in code, this is what you will go for in Vercel. It's fairly new, it's in beta right now, and we're working very hard to make sure that we release a stable version of it. But basically, with Next.js middleware running on Edge Functions, it allows you to insert routing strategies using user code without having any performance hit. This is why Edge Functions are deployed very close to the CDN: users, when they navigate to your website, have very low cold boot time, so they will be able to do dynamic stuff without sacrificing CDN speeds. This is how I see it.
STEVE_EDWARDS: So let me see if I can offer some clarification here for people not as smart as you, like me. Now, when we're talking about the edge and CDNs, and correct me if I'm wrong, Gal and Dan, basically these are data centers that are located in different places around the world and give people faster speeds relative to where they're at. So for instance, I'm here in the United States, and if I'm hosting a website and it's only hosted here, and somebody in Asia or Europe or Africa is accessing my site, and it's something that's not using caching, a real basic site, they've got to query all the way back here in the States, and that's going to have some latency issues merely because of the distance. So what's defined as the edge is basically data centers in different places. You might have one in Asia, you might have one in Africa, you might have one in Europe, etc. So if I'm in Asia, it's going to hit the data center in Asia right there, my CDN, which is going to have a copy of my site. So far, am I correct?
GAL_SCHLEZINGER: Yep. Yeah, you're correct.
STEVE_EDWARDS: Okay. So then, if I understand edge functions, and Gal, maybe you can talk about how this is similar to something like Cloudflare Workers, that's something I'm a little more familiar with, at least from hearing about it. It's code that runs in that data center. So, using the Asia example, if I have some code that's specific to my Asia data center, somebody from there accesses my site, and the first thing it's going to do is hit my function, and from there I can do any number of things. Maybe it's a redirect to a specific Asian version of my site, a CJK language, for instance, or I want to display a different image, or do any number of things, and it's right there instead of having to come back all the way to the data center in the United States. And the same would be true of an edge function in any other remotely located data center. Is that accurate?
GAL_SCHLEZINGER: Yeah. So I can explain. You asked about Cloudflare Workers, and we're currently built on top of Cloudflare Workers. We think they provide some excellent infrastructure and primitives, and they are fast, secure, extensible. We evaluated a couple of other providers, but we chose Cloudflare because they are better right now. And this is what Vercel does: we just choose the right product for the current time. And yeah, this is what I meant before when I said that if you wanted to mix and match a couple of providers yourself, that would be harder than just using Vercel, which tries to use the best of all worlds.
DAN_SHAPPIR: So I have a question about this, or actually I have several, but I'll start with one. Historically, on the web, we've had servers and we've had clients, you know, the browser. And since the late 90s, we could run code on both ends. And in recent years, we've had fairly effective, let's call it, strategies for utilizing both these ends effectively and efficiently to render sophisticated content to users. You know, the JAMstack comes to mind, or these days it's just called Jamstack. It's this idea where you generate the HTML that reflects the common, slow-changing content on the backend, and then augment it with personalized or fast-changing data on the front end. If I understand correctly, edge functions kind of come in the middle between these two extremes, probably closer to the front end, but essentially somewhere in the middle. So my question really is: what's the upside? What do I get from using edge functions that I couldn't get by just running code on the server and in the browser?
GAL_SCHLEZINGER: Yeah, that's a good question. So we have both extremes. If we go to the server and run it on a serverless function like Lambda, Lambdas have fairly slow cold boot. If they're not warm, it will take a second, or a couple of hundred milliseconds, for them to boot. So if you want a very fast and responsive website whose routing does A/B testing, feature flags, or authorization, you wouldn't be able to do that without sacrificing the CDN speed, because you will go and wait for that Lambda function to boot. And if you want to do authorization, you wouldn't do that on the client. If you want to do A/B testing, you can do that on the client, but you will see spinners and stuff. And if you want to use A/B testing for pricing, you wouldn't want it to be on the client. You would want it to be on the server.
DAN_SHAPPIR: Cool. And so really what you're saying is that edge functions are about the speed of the access to remote compute, remote from the perspective of the browser. It's faster because it's fewer hops to get there, and it's faster because cold boot is much faster. I think it's measured in tens of milliseconds or less, instead of hundreds of milliseconds or more, correct?
GAL_SCHLEZINGER: Yes, and you can also see that as an environmental, I'm not sure how to pronounce it in English, but an environmental thing, because if you have compute for hundreds of milliseconds compared to just a couple of milliseconds, that also makes the world greener, right? You don't use more compute than needed. And because edge functions in Vercel are just JavaScript, they don't have any C layer or whatever; they don't have any native libraries you can use other than Wasm. You basically have faster boots because of that, and because of our infrastructure of choice right now.
STEVE_EDWARDS: So a quick question then, that was something I wanted to ask and you just mentioned it, Gal. So Edge Functions can only be written in JavaScript, is that what you said, as compared to being able to be written in other languages as well?
GAL_SCHLEZINGER: So right now we support JavaScript and we support Wasm in beta. Both are in beta right now. So you can call Wasm code from your JavaScript. So if your language can compile to Wasm, you can run almost everything. But JavaScript is, because we're using, right now we're using Cloudflare, we're running on top of V8 directly. So you can use JavaScript to do that.
STEVE_EDWARDS: And Wasm is WebAssembly, correct? Yeah. For the uninitiated.
DAN_SHAPPIR: Okay. Yeah. So basically, like I said, anything that can compile to WebAssembly, which today is really almost anything, can effectively run on the edge, which is pretty awesome and makes it a very powerful platform. But that kind of brings me back to another question that I have, which is: if we can get all these benefits at the edge, why don't we also do it in the data center? I mean, obviously it's really, really beneficial to have really fast cold starts. So why don't Lambda functions have them?
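As a tiny illustration of the JavaScript-calls-Wasm flow Gal and Dan describe, here is a hand-assembled WebAssembly module exporting a single `add` function. In a real edge function you would import a `.wasm` file compiled from Rust, Go, etc., rather than inline bytes; this sketch just shows the mechanics.

```javascript
// A minimal WebAssembly module, written out byte by byte, that exports
// one function: add(a, b) over 32-bit integers.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The synchronous API is fine for a module this small; larger modules
// would use WebAssembly.instantiate (async) instead.
const wasmModule = new WebAssembly.Module(wasmBytes);
const { add } = new WebAssembly.Instance(wasmModule).exports;
```

From JavaScript's point of view, `add` is now an ordinary function, which is what lets any Wasm-targeting language plug into an edge function.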
GAL_SCHLEZINGER: So I think that the infrastructure is just different. And just yesterday, I think (not yesterday as of the time of the recording), I pushed a PR for SvelteKit to run on the edge. So all SvelteKit projects can just run their entire app on the edge. I had some changes from something that Rich Harris did, so right now it will just work. You can just deploy a SvelteKit app and it will run on the edge. So 100% of the app will be served from the edge, and no serverless functions at all. So we're getting there, I think, but you don't have all the primitives. If you want to use something like a TCP connection, if you have a Postgres database or MySQL, you won't be able to use that, because the only way to connect to something is using fetch, so HTTP only.
DAN_SHAPPIR: Yeah, so I'm going to ask about that as well in a second, but before that, I just wanted to mention, you mentioned Rich Harris and Svelte and SvelteKit. Rich Harris has also joined Vercel, I think, to work on Svelte full-time, I believe, correct?
GAL_SCHLEZINGER: Yeah, I think so, yeah. I can talk to him on Slack, so I think that, yeah.
DAN_SHAPPIR: So that's one of the benefits of working at Vercel, that you can chat with Rich Harris. Cool. So that brings me to another question. It's really nice and useful that you can access compute really quickly and really close, and not on the endpoint, with the security and trust implications that that might have. But what about the data? It's all fine and good, but what's the value of compute if you don't have the data? You mentioned that unlike the server side, where I can pretty much access any database that I want, when I'm running in edge functions, I'm kind of restricted in my data access choices. So what do I have? What do I get?
GAL_SCHLEZINGER: Yeah. So I don't have any really good answers here, but I played with stuff like Upstash, which is a serverless Redis that you can access with HTTP calls. So it's also distributed all around the globe, and you can just fetch and get data and store data, and it works. But I think that this is the next place for innovation on the web. Most websites are US-only, and people talk as if the internet is very fast, but when you run something from Tel Aviv, it's very slow. So if deployments are so easy right now for your content and your functions, the next step would be to also provide the data. That is much harder, but I think this is the next step for startups and dev tooling.
DAN_SHAPPIR: What I recall from Cloudflare Workers, I believe, is that they have two main data access options at the edge, so that you can be either eventually consistent, something like that, in which case it's really fast access, or you can be actually consistent, in which case it's much slower, and then I kind of question the value of actually doing it on the edge.
GAL_SCHLEZINGER: Yeah. So we do not have any way to integrate with these interfaces right now, because our first mission is to make sure that people can use middleware, which is a routing idea and not a serverless functions idea. So you can rewrite stuff and you can redirect users, but having API functions on the edge is not even in beta. It's just something that works, and I made it work for SvelteKit. Yeah.
DAN_SHAPPIR: So, going back to your example, say I want to do some sort of A/B test. Obviously I can do random distribution, and I don't need any special data for that. Or I might generate some sort of a cookie on the client and then use that for the duration of the session as the means to do A/B testing or whatever. But if I want to associate the A/B test with a particular user identity, so that, for example, the same user gets the same test every time they log into the system, for that I do need some sort of data access, don't I?
GAL_SCHLEZINGER: Yep, and we might have a solution for that in the future. Not that I know of right now, but for simple cases, like if you want to A/B test based on country or something like geolocation, everything that's considered geolocation is supported. And if you want to have a session thing, that will work, but we don't offer any storage right now.
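For the sticky A/B case Dan raises, one common storage-free approach is deterministic bucketing: hash a stable user id so the same user always lands in the same variant, with no database round trip. This is a hedged sketch of that idea, not a Vercel API; the FNV-1a hash and the 50/50 split are illustrative choices.

```javascript
// FNV-1a: a small, fast, deterministic 32-bit string hash.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

// Bucket a user into one of two variants. Because the hash is deterministic,
// the same userId always yields the same variant across every edge location,
// with no session storage needed.
function abVariant(userId) {
  return fnv1a(userId) % 2 === 0 ? 'control' : 'experiment';
}
```

An edge middleware could call `abVariant` on a user id read from a cookie and rewrite the request to the matching page variant.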
Hi, this is Charles Max Wood from Top End Devs. And lately I've been coaching some people on starting some podcasts and in some cases just taking their career to the next level, you know, whether you're beginner going to intermediate and intermediate going to advanced, whether you're trying to get noticed in the community or go freelance, I've been helping these folks figure out how to get in front of people, how to build relationships and how to build their careers and max out and just go to the next level. So if you're interested in talking to me and having me help you go to the next level, go to topendevs.com slash coaching. I will give you a one hour free session where we can figure out what you're trying to do, where you're trying to go and figure out what the next steps are. And then from there we can figure out how to get you to the place you want to go. So once again, that's topendevs.com slash.
DAN_SHAPPIR: So when I write code using Next.js, for example, and I want to leverage the edge functionality that Vercel provides. Let's say I'm coding in Next.js, I've decided to deploy my application using Vercel, and I've decided to use edge functions. So the first question is: is that something that I need to think about? Do I need to write my code in a certain way, configure my project in a certain way, or do both, in order to be able to leverage this edge functionality?
GAL_SCHLEZINGER: So in order to create a middleware in Next.js, and middleware is the only supported way to use edge functions right now in Next.js, you need to create a _middleware.js or .ts file in your pages directory, and the request will just come there. You need to default-export a function that receives a request and returns a response. And that's it, basically.
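The shape Gal describes (a function from Request to Response) can be sketched as a plain function. In a real Next.js project it would be the default export of `pages/_middleware.js`, and you would typically return `NextResponse.redirect` or `NextResponse.next()` rather than a bare `Response`; the `/dashboard` route and the `session` cookie name here are made up for illustration.

```javascript
// Hedged sketch of an edge middleware: receives a standard Request,
// returns a standard Response, runs before the page is served.
function middleware(request) {
  const url = new URL(request.url);

  // Gate a route at the edge: unauthenticated users never reach the origin.
  if (url.pathname.startsWith('/dashboard')) {
    const cookie = request.headers.get('cookie') || '';
    if (!cookie.includes('session=')) {
      return Response.redirect(new URL('/login', url), 307);
    }
  }

  // Fall through: an empty 200 here stands in for "continue to the page".
  return new Response(null, { status: 200 });
}
```

Because this is the same Request/Response pair the browser's fetch API uses, the same code shape ports across edge runtimes, which connects to the WinterCG discussion later in the episode.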
DAN_SHAPPIR: It's kind of amusing because it seems to be kind of similar to how a service worker might work.
GAL_SCHLEZINGER: Yeah, yeah, yeah. We try to make sure that, like, when we started, and I say we, I wasn't in Vercel at that point. But when Next.js started, all we had for writing web servers in Node.js was the Node.js API for IncomingMessage and ServerResponse. And now we have the web APIs, which will also be implemented in Node.js thanks to the WinterCG effort. So now we have better interfaces that we can reuse on the front end and the backend. So we're trying to make sure that new stuff will match whatever the standard is right now, and we don't piggyback on all the APIs of Node.js, for instance.
DAN_SHAPPIR: Well, now that you've brought it up, you have to mention what WinterCG, or wintercg.org, actually is.
GAL_SCHLEZINGER: Yeah. So I will talk about it briefly, because I was only in one meeting. So WinterCG, the "winter" is Web-interoperable Runtimes, I think. So the idea is that we have the web APIs in the browser, and we have JavaScript runtimes that run on the server, like Node.js, there's Deno, and a couple more. We have Cloudflare Workers, for instance, which does not use any of them, does not use Deno and does not use Node.js. So we wanted to have the bare minimum APIs for server-side code to work across all of them. So if we have the web APIs that all the browsers need to implement, we will have the same APIs implemented in Deno, in Node.js, and in Cloudflare Workers, and maybe more, maybe even QuickJS, for instance. And some of them, like if you have a request
AJ_O’NEILL: QuickJS?
GAL_SCHLEZINGER: Yeah, QuickJS is another JavaScript runtime. We also have Bun right now, the new JavaScript runtime called Bun. So there are a couple of JavaScript runtimes. Bun started as just a bundler, and it uses JavaScriptCore, I believe, under the hood, because it can also run JavaScript code now.
AJ_O’NEILL: So wait, what are these things? These are all new things.
STEVE_EDWARDS: Yeah, so for what it's worth, I was just listening to a podcast episode this morning about this exact thing on Syntax, and they were talking about it. One of the things they mentioned was that there are approximately 10 different JavaScript environments, and the whole idea, as Gal said, is that they're trying to standardize everything, so that if I want to port JavaScript code from Node to...
AJ_O’NEILL: What is it, 2010? Well, we don't have 10 different JavaScript environments.
STEVE_EDWARDS: Well, you got to listen to it.
AJ_O’NEILL: You said it...
STEVE_EDWARDS: Yeah, and there's things like, for instance, Cloudflare Workers, which has sort of a pared-down JavaScript API. You've got things like Bloomberg terminals, that's one I remember, which sort of have their own environment. There's a number of them that are fairly minor compared to Node or JavaScript in the browser. But the whole idea, as Gal said, is that you're trying to standardize APIs across every environment so that code can be portable from one JavaScript environment to another, instead of having tweaks that are different here and there. Just sort of-
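As a rough illustration of what that portability buys you: a handler written only against the web-standard globals (Request, Response, URL) contains nothing runtime-specific, so in principle the same code could run on Deno, Cloudflare Workers, or Node.js 18+. The handler below is a made-up example, not taken from any particular runtime's docs.

```javascript
// A request handler that touches only web-standard globals (Request,
// Response, URL): no Node `http` module, no `process`, no file system.
// Roughly the minimal API surface WinterCG aims to make universal.
async function handle(request) {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'world';
  return new Response(JSON.stringify({ greeting: `hello, ${name}` }), {
    headers: { 'content-type': 'application/json' },
  });
}
```

Because nothing here is runtime-specific, porting it is a matter of wiring it to each platform's entry point rather than rewriting the logic.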
DAN_SHAPPIR: Yeah, I'll put it in kind of a reverse way. If we look back, let's say three years ago, we basically had two real JavaScript environments that mattered. We had a few Internet of Things environments, but they were really niches in and of themselves. So we had, obviously, the browser, and we had Node. And then along came some additional environments: Deno came along, and then the various edge functions came along. Even some of the serverless environments are not exactly Node. And when you aggregate all of these together, what we're starting to see is a fragmentation of the JavaScript runtimes again. What we used to have with different browsers is now potentially starting to happen between the various JavaScript environments.
AJ_O’NEILL: So I mean, we've just come full circle, because we had Ringo and Rhino, and then Deno got introduced and all that.
DAN_SHAPPIR: Yeah, so we have come full circle, only recently. We've had these environments, and then it seemed that Node was kind of eating the world on the backend, or everywhere else that wasn't the browser, let's say. And then it turns out that in some cases it isn't, because people started taking V8 and customizing it and tweaking it for their particular needs. Like, Gal spoke about the need to have really fast cold starts. That's not something that you can do with Node. So they needed to tweak that environment to get that to properly work. With Node, you usually have Node just up and running all the time; you don't need to fast-start Node. You can do some tricks with, you know, taking a memory image dump and stuff like that, but it's still not sufficiently fast in many ways. So if you really want, I don't know, a five-millisecond cold start, that's a problem to do with Node. And so they started coming up with all these different types of environments, and I think the Cloudflare Workers were an especially important example of this. Now again, you might not care about it. You might say, I just care about browsers and Node and that's it. But these other players that want to expand the reach of their offerings are seeing the ability to standardize APIs, at least a certain subset of APIs, as a means to achieve that.
STEVE_EDWARDS: Gal, what do you think?
GAL_SCHLEZINGER: And one thing that we also did at Vercel: we released, or are about to release, depending on when we publish this episode, a package called edge-runtime, which basically implements WinterCG for Node.js in a way that matches the Vercel Edge Functions API. So people can just use that in their own frameworks. Next.js is embedding this edge-runtime package, so everything that you can run on Vercel production, you can run locally. And you can see how you could implement that in SvelteKit or Nuxt or whatever. They don't need to run their own copy of Cloudflare Workers or Deno; they can just embed it into their own Node.js application.
STEVE_EDWARDS: So Gal, are you involved in this group? You'd mentioned something about only having been in one meeting. Are you involved in the group itself, or are you just talking about Vercel's involvement?
GAL_SCHLEZINGER: Yeah, so Vercel has been involved for a long time now, but I was only in one meeting, just saying hi. Maybe more in the future.
STEVE_EDWARDS: Okay.
DAN_SHAPPIR: So again, in this context, if I look at the APIs that are available in this type of environment, what kind of APIs am I getting compared to what I would experience in Node? In Node, obviously, I can npm install essentially anything and everything. I assume that's not something I can do in these types of environments; I assume they have a fixed API, more or less. Where are the limits? What are the boundaries?
GAL_SCHLEZINGER: Yeah, so the boundaries are: in Vercel Edge Functions, you can use streams, you can make an HTTP call, but you don't have any file system and you don't have TCP connections. Raw TCP connections do not make sense for databases here at all, so there's no reason to add them. So you don't have any TCP stream, and you don't have UDP.
AJ_O’NEILL: Wait, hold on. Why does TCP not make sense for databases? Because you're using sockets?
GAL_SCHLEZINGER: Yes. So if an edge function just starts and then closes itself after a couple of milliseconds, just connecting to the database would take a long time. And you can have millions of requests, because this is happening before the CDN. So if for every request you start a new database session and then close it, most old-school databases don't work well with that. MySQL and PostgreSQL will be problematic with these connection patterns.
AJ_O’NEILL: Yeah. So, I mean, it sounds like you're describing a use case for which this is not a good solution. If you're expecting lots of connections, it sounds like you're just introducing a lot of overhead to solve a problem that would be better solved by a traditional server that already handles that problem well. So if you need a database, it sounds like this is not it.
DAN_SHAPPIR: Yeah, but AJ, that's the point that I was trying to get at before. You might not need a full database, but you do need data, because if you don't need data, then you could probably just handle it in the browser itself, for example. The fact that you're doing it on the server side means that the server side is going to access some data that you're not exposing to the browser for some reason. And if you do that, then you need to be able to read, and potentially also write back, to something. Like I said, it might not be a full-fledged database, but it is something.
GAL_SCHLEZINGER: Yeah. And you can see that there are some startups that are wrapping databases with HTTP servers. You can see Supabase having an HTTP API, which is part of their offering. So if you use Supabase-
AJ_O’NEILL: But that's TCP. Oh no, yeah.
GAL_SCHLEZINGER: Yeah. So you can't create a raw TCP socket, but you can create an HTTP connection.
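To make the Supabase-style pattern concrete, here is a sketch of querying a database through its HTTP API from edge code, using fetch, the only network primitive available. The URL, table name, header shape, and the injectable fetchImpl parameter are all hypothetical; consult your provider's documentation for the real API.

```javascript
// Sketch: reading rows through a database's HTTP API (Supabase-style REST)
// instead of a raw TCP driver, since fetch() is the only network primitive
// an edge function exposes. The URL and apikey below are hypothetical.
async function getTodos(fetchImpl = fetch) {
  const res = await fetchImpl('https://your-project.example.com/rest/v1/todos', {
    headers: { apikey: 'YOUR-ANON-KEY' }, // auth header shape varies by provider
  });
  if (!res.ok) throw new Error(`query failed: ${res.status}`);
  return res.json(); // resolves to an array of row objects
}
```

Passing fetch in as a parameter is just for testability here; in an edge function you would call the global fetch directly.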
AJ_O’NEILL: But HTTP is slower than TCP.
DAN_SHAPPIR: Oh, well, just think about it as the transport medium. It could be that an entire edge server is, theoretically, reusing a single TCP connection for the HTTP requests coming in from the various functions independently. So you're just decoupling the function from the underlying communication protocol.
AJ_O’NEILL: So are you saying you basically have some sort of TCP service that runs somewhere, and you connect to a local socket, so that TCP connection stays warm, but there's some sort of daemon program mapping between the two?
DAN_SHAPPIR: Yeah, I don't know. That's something that maybe Gal knows. But like Gal said, if you're going to do a TCP handshake and a TLS handshake, that's just not going to work with really fast startup times.
AJ_O’NEILL: I get that, but then to say, we're not going to do TCP, so we're going to do HTTP, that's not any faster.
DAN_SHAPPIR: No, it means that the edge function itself doesn't deal with TCP, or more precisely, doesn't expose a TCP API. So you don't have a socket-type object within that edge function. In Node, you can just create a socket; it's totally up to you, the APIs are there. Here, they just don't provide that API.
AJ_O’NEILL: So there's this tool called s_client that's available in OpenSSL, there's also a standalone tool, and it will establish the TLS part of a connection. So you can basically access something on localhost without that overhead, because the middle piece has already established the connection. So you could connect many different, I might not be using the right word here, sockets to it without the overhead each time. So I think there are two issues I'm confused about. One, are we trying to optimize away the TCP connection to the database? It sounds like, yes, we want to optimize that away. And two, and I think this might be a separate discussion entirely, you allow HTTPS from the code itself. And that might be really slow, and it's a workaround for not being able to connect to the database. But that doesn't make a connection to the database that fast.
DAN_SHAPPIR: I'll phrase it differently. Think about it this way, and again, Gal, feel free to correct me if I'm wrong. You might have, or you will have, a serverless function instance in the edge running for every current user that you have. So if you were trying to open an independent TCP connection from each one of these to the database, you would potentially be opening a million different simultaneous connections to your database, or 10 million, or whatever, and that's potentially going to be a problem. So that's one aspect where I would think you'd want some sort of reuse between the independent instances of the serverless functions. But beyond that, TCP as an approach makes sense when you're long-lived. HTTP, if you look at it just as HTTP itself, logically, not the implementation detail, is kind of one-and-done: you send a request, you get a response, and that's it. The fact that it's implemented on top of TCP, and that HTTP 1.1 does the reuse, is an implementation detail. Just to finish: if you're looking at it from the perspective of a high-level API, all I think Gal is saying is that you can think about it as a protocol endpoint, an API endpoint, where you say, this is my request, and here's the response that I get.
AJ_O’NEILL: Yes, but that doesn't solve the problem. I mean, it doesn't actually increase performance to add another layer on top of something that you're trying to avoid because it's slow. That's where I'm confused.
DAN_SHAPPIR: No, that in itself is not increasing performance. It's enabling the performance that's made available through the appropriate architecture: the fact that you don't need to start up a new TCP connection for every serverless instance.
AJ_O’NEILL: So you just shim around it. You provide an HTTP module, but it's not the Node HTTP module; it's one that does this shimmy-do with virtual machine connections or container connections or some such.
DAN_SHAPPIR: Gal, I'm going to defer to you.
GAL_SCHLEZINGER: I feel like I awakened a monster. It's not about performance. It's just that old databases, and I'm not sure if I should say old, but RDBMSs, usually have stateful connections, which are not suitable for this kind of workload. And the fact that we have the fetch API, that the only way to fetch something is over HTTP, is because we have the web APIs and not a full Node.js environment, because it's not running on Node. So we don't have any access to sockets; we only have access to HTTP.
AJ_O’NEILL: But that's an oxymoron, because HTTP is a stateful connection on top of a socket. So unless you were doing something where you're writing a custom HTTP module that doesn't use a TCP stack, which, what was that one we had, where they were building everything in a browser environment? The whole IDE was built in the browser environment.
GAL_SCHLEZINGER: StackBlitz.
DAN_SHAPPIR: StackBlitz.
AJ_O’NEILL: StackBlitz, yeah. So if I understood correctly, they're shimming out a bunch of stuff. You're not actually using Node when you use them; you're using something that matches Node's API perfectly, but they shim out all of this stuff underneath in order to make the weird stuff happen. And that's what I'm asking: are you actually doing that? Because I get it if you don't want to allow TCP connections. There's a million reasons not to allow TCP connections, so I get that. What I don't get is if you're saying TCP is non-optimal, therefore we're going to add three other layers on top of TCP and the overhead will make it optimal. That's where I'm at a loss.
GAL_SCHLEZINGER: If you use HTTP, you use TCP. Yeah, that's obvious. But raw TCP sockets are not exposed to the clients, because the only way to operate is with the web APIs. So it's not about performance. But if someone wants to connect to a database, Postgres or MySQL, it wouldn't work, because they are not prepared for this kind of scale. This is why Supabase has an HTTP API that you can call from the browser and from the edge functions directly. That's what I was suggesting. I'm sorry for making a mess here.
AJ_O’NEILL: Well, no, no, no, this is good. I mean, I am interested in the technical details. I want to know how something actually works and why. So this is not making a mess; this is doing the opposite. This is clear.
DAN_SHAPPIR: Yeah. But again, AJ, to your point, the whole point of these edge functions is that for architectural reasons, you can't run Node there. You take V8, you add some APIs, a lot more limited than Node's, and that's what you get to run. And that's what WinterCG is trying to standardize: a minimal set of common APIs that you can count on being available everywhere. They'll probably also be available in Node by installing an appropriate npm module. So if somebody like Vercel just uses these APIs, then tomorrow they'll be able to run their offering not just on Cloudflare but on other edge infrastructure as well. It's kind of like a Terraform sort of thing.
AJ_O’NEILL: Yeah. I'll be interested to see how that rolls out in practice, to see if it catches on as a standard. Because they tried to do this with Node in the past, right? Microsoft joined forces with the Node team, and Mozilla was in on it too. So you had Microsoft, Mozilla, and ostensibly at least the Node people, if not also Google people, all working on this problem of how do we create standard APIs in Node, and it fell flat at the interface layer. We never got the JägerMonkey Node. We never got the Edge Node, or I guess the Edge Node actually was real, but only ever in beta or alpha; I don't think it ever made it to a production release. The Mozilla one, I don't think, ever even made it to a beta release. And then after that we had, was it, N-API and then Node-API.
DAN_SHAPPIR: But if you think about it, these things never added a whole lot of value. I mean, it was nice that we weren't living in a V8-only world, I guess. But at the end of the day, if-
AJ_O’NEILL: So why is it nice that we not live in a Cloudflare-only world, then?
DAN_SHAPPIR: You know, because Node is a project that's going to be long-lived in any event. And you don't pay anybody in order to be allowed to use Node. You want to use it on DigitalOcean, you can. You want to use it on Amazon, you can. You can use it anywhere you want. If I'm tied to Cloudflare, and tomorrow Cloudflare decides to, I don't know, quadruple their price, then, you know, pardon my French: fuck.
AJ_O’NEILL: We'll edit that one out. But then why would it be to Cloudflare's advantage? Where's the... I mean, this almost sounds like a Trojan horse scenario.
DAN_SHAPPIR: Well, they're even open-sourcing what they're doing. Because from their perspective, I guess they've decided that to win, they want to be the Node of the edge computing world. Maybe they're thinking that they can win on functionality, that they have a sufficient lead in terms of their capabilities. I don't know. You'd have to ask their CEO, I guess.
GAL_SCHLEZINGER: I can't talk about Cloudflare, but I can talk about Vercel. And from what I know, Vercel wants the web to be open. This is why Vercel does so much for the open source community. So I think that Cloudflare just does it as well. They care about the web and the open web, and they want to make sure that it moves on.
DAN_SHAPPIR: My big, sorry, go on AJ.
AJ_O’NEILL: No, you, you go first.
DAN_SHAPPIR: Yeah, I was going to pull us back into a direction that we spoke about before. Because, Gal, you did give certain concrete examples of where edge computing can be really useful. But I'm kind of still on the fence here, because it seems that I'm potentially really complicating projects for gains that a lot of applications out there don't necessarily need. I mean, if you get it out of the box with Vercel and Next.js, then maybe why not? But I'm still wondering whether it's sufficient to have code running on the server and on the client, and cache the static stuff in regular plain old CDNs. Do I really get so much value from this capability to run in the edge?
GAL_SCHLEZINGER: Yeah, so that's a good question. Let's say you want to implement authorization or authentication: you want a password-protected website, or a password-protected section, and you want to keep the performance of a CDN, where everything is snappy and fast. How would you do that without a server side? You probably can't do it client-side, because it's authorization and authentication; you don't want to do that in the client. And you can't do it on a regular server, because if you want to block all the static assets that are password-protected, you need to make sure the authentication phase comes before the CDN, and it should be the fastest code evaluation mechanism that you can access, right?
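A minimal sketch of that idea, auth at the edge before anything is served: the protected path, cookie name, and redirect target below are hypothetical placeholders, and a real implementation would verify a signed session token rather than merely checking that a cookie exists.

```javascript
// Sketch of password-protecting a section at the edge, before the CDN
// responds. `/members`, the `session` cookie, and `/login` are placeholders.
export default function middleware(request) {
  const url = new URL(request.url);
  if (!url.pathname.startsWith('/members')) return; // public: fall through
  const cookie = request.headers.get('cookie') ?? '';
  // Real code would cryptographically verify a signed token here.
  const authed = /(?:^|;\s*)session=/.test(cookie);
  if (!authed) {
    // Bounce unauthenticated visitors to the login page.
    return Response.redirect(new URL('/login', request.url), 307);
  }
  // Authenticated: return nothing so the CDN/page responds as usual.
}
```

Because this runs before the cache, even static assets under the protected path are gated, which is exactly the case a client-side check or an origin-only check can't cover.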
DAN_SHAPPIR: Basically what you're saying is that at present, unless I'm really lucky to be living next to an Amazon data center, if I want to have an application that has authentication on access to certain parts of it, then basically I just can't have it as a fast application.
GAL_SCHLEZINGER: So you can deploy your functions to multiple regions on Vercel; on a Pro or Enterprise account, I think, you can deploy to multiple regions. But still, you will have the cold boots of Lambdas, and not the couple-of-milliseconds cold boot of an edge function. So basically, you will still have something that slows you down. And this is why edge functions work very well for the middleware idea, for the routing idea. It's just like when you had an Apache server back then with the mod_rewrite plugin. It's just like that, but you can write JavaScript to power it.
DAN_SHAPPIR: And talking about tooling, and I know that we are starting to run long, so we're probably nearing the end of this episode, but it's just so interesting. You mentioned the, what's it called, underscore middleware? That was the name of the file?
GAL_SCHLEZINGER: Yep.
DAN_SHAPPIR: So the underscore middleware, I guess, is the code that you know explicitly will run in the edge, and that's where you're limited in the APIs that you can use. So two questions there. One is, how does the development environment safeguard me from using inappropriate APIs? And number two, how do I debug this thing?
GAL_SCHLEZINGER: Yeah. First, in Next.js, when you create the underscore middleware file and you build your project or run it locally, so when you run in dev mode, you will get warnings for everything you try to import or access, like process.nextTick or process.version. And if you try to build it, Next.js will fail your build. Because most of Next.js is running in a Node.js context, we were trying to make sure that you know that this code can't run on the edge. So this is how it works.
DAN_SHAPPIR: So basically, sorry to interrupt, but you're basically just validating my imports, if I understand correctly. If I'm importing something that I'm allowed to import, then that's fine, but if I try to import anything else, then I get an error.
GAL_SCHLEZINGER: Yep. Next.js is using webpack right now for bundling the project, and we're not using the Node target in webpack for this. So it will fail loudly.
DAN_SHAPPIR: OK, so I understand that part. And how about debugging?
GAL_SCHLEZINGER: So, all the logs... I mean, you can't really debug in production, as far as I know, right now at least. But you can console.log, and source maps will work, so it will point to the correct line of your code. I had a blog post that I wrote about how to manipulate code and keep the source maps correct, and that was based on the work we did on Edge Functions. So whatever we do to the edge function source, you will get the correct line numbers in your logs, and everything just works at Vercel. And thanks to the edge-runtime package, you can run everything locally and test everything locally. There's also a Jest environment, the edge-runtime Jest environment, so you can run unit tests and not just end-to-end tests, to make sure that everything fits production, because we want to make sure that the developer experience when you build is the same as when you deploy.
DAN_SHAPPIR: So to clarify, if I understand correctly, you're basically simulating the edge environment in your local development environment, correct? And also in the local build or testing environment?
GAL_SCHLEZINGER: Yeah, sort of. I would say that the two pieces are connected: our production mimics the edge-runtime package, and the edge-runtime package mimics our production. We want to make sure, and we have tests that make sure, that there are no differences between the two.
DAN_SHAPPIR: Cool. And if I do logging or stuff like that in the edge functions, that's collected and made available to me?
GAL_SCHLEZINGER: Yep, on the Vercel dashboard. And any project can create a log drain, so you can access every log on your favorite platform.
STEVE_EDWARDS: Excellent. Well, we're getting short on time here, and I have a hard stop pretty soon. So first of all, thanks, Gal, for coming. This has been some great information. And with that, we will move on to picks.
Hey folks, if you love this podcast and would like to support the show or if you wish you could listen without the sponsorship messages, then you're in luck. We're setting up new premium podcast feeds where you can get all of the episodes released after Christmas 2020 without the ads. Signing up will help us pay for editing and production. And you can go sign up at devchat.tv slash premium.
STEVE_EDWARDS: Picks are the part of the show where we get to talk about other things, or maybe tech things: books, movies, food, kids being kids, whatever we want to talk about. So we will start out with Dan. Dan, what do you have for us today for picks?
DAN_SHAPPIR: So I actually have two technical picks and one usual non-technical pick. My first technical pick is going to be a shameless plug for myself. I actually published an article on Smashing Magazine. Smashing, I think, is one of the best resources for technical web development content available today. The quality and quantity of the stuff that they publish is just amazing, and I'm honored to have an article there. I've had one before, and here's another one. This time, what I did is actually use the performance information that Google collects, in the context of what is known as Google CrUX, to analyze and compare the performance of various JavaScript frameworks. So if you're wondering which is faster, in the context not of a simulated test but of the actual websites that have been built using these tools, and you wanted to compare the performance of, let's say, React and Vue and Svelte, or maybe Next and Nuxt and Gatsby and whatever, then this article is for you, because that's what I did. So if you're interested to see how well your favorite framework scored relative to the others, we'll put a link in the show notes. That would be my first pick. My second pick is that Google I/O happened a couple of days ago, as of the time of this recording. And what usually happens around Google I/O is that they release a whole bunch of really useful and educational videos on their YouTube channel. I want to shout out one in particular, and that's a deep dive into optimizing LCP. Everybody knows that I'm really into web performance. In this case, Philip Walton from Google, who's a real expert on web performance, shows simulated, but really close to real, use cases of how he improves the LCP of a web application. I think it's really an excellent video, so I'll post a link to that as well.
And my final pick is the same pick that I'm currently picking on every episode and that's unfortunately the war in Ukraine, which keeps on dragging on. And I'm kind of worried that, you know, we tend to eventually just, you know, not think about things like that anymore because it just keeps on happening and going on and on and on. And I think that we should because it's a real tragedy and it needs to end, but unfortunately it looks like it might be ongoing for a long time to come. And that really makes me sad. So that would be my third and final pick and that's it.
STEVE_EDWARDS: All right, AJ, what do you got for us?
AJ_O’NEILL: You know, I'm going to pick the book None Dare Call It Conspiracy. And I think that's really relevant for this whole war with Ukraine. It's a book that was published back, I think it was in the eighties, or maybe even the seventies. It's available as an audiobook. And essentially it talks about a series of events in history where, you know, you can flip a coin so many times, and every single time you flip a coin there's a 50% chance that it lands on heads, but after flipping a coin enough times, you've got to arrive at the conclusion that maybe this isn't a fair coin, or maybe it's not being tossed fairly. That's my way of summarizing what it's talking about. And I think it's particularly relevant to what's been happening with this war. Because I... anyway. And then next I'm going to pick a documentary called Who Killed Bitcoin? that I watched recently. It talks about how essentially some investors got in, replaced the core team, and then drove Bitcoin to be something that has high transaction fees and is completely unusable as a digital currency. And I thought that made a lot of sense and was really interesting, to just kind of see who the investors were and how basically different people were moved into positions in which they were able to take control of the community and, you know, centralize Bitcoin and make it so that it essentially does the opposite of the function that the Satoshi person had hoped it would fill. And then the last thing I'm going to pick as a tech pick, which I will have to dig up and tell you the title of later, is a Microsoft tech talk on UI design that is very, very inspirational.
If you're thinking critically about design and you're trying to approach things from an engineering perspective: the person who's speaking, I think his name was Jan, I'm not sure, created a system, I think he was a professor, over multiple semesters with what sounded like a group of students. Rather than trying to approach the UI toolkit problem from the perspective of, oh, we need a carousel, so let's make a carousel, and oh, we need a navigation, so let's make a navigation, he tries to break it down into the interactive pieces: we need something that can iterate between one and five, or one and ten, or whatever; we need something that can change a presentation layer in a particular way. And at the end of the presentation, he demonstrates how, with about 20 or 30 different primitive components, web components, somewhat similar to React components, he's able to put them in just slightly different configurations and have things that are vastly different, like the difference between an image carousel and a navigation bar, and have them actually behave correctly, so that when something is clicked on or touched or swiped, the correct and expected and consistent behavior takes place. And I just thought it was mind-blowing to see a toolkit that is focused around getting the function right, and that is also just really, really clean and beautiful.
STEVE_EDWARDS: Thank you for those picks, AJ. We know we can always count on them not to be short, but informative too. So moving on to my picks today. My first pick is going to be another podcast episode. I mentioned it when we were talking earlier about the web interoperability group. I'm going to spit it out: interoperability group. I think I got it. As of today, the day we're recording in May, this came out a couple of days ago at Syntax FM, and they're talking about the group, what it is, how it works, what it's trying to solve, and so on. So we'll throw a link for that in the show notes. And then, notwithstanding Gal's picks, the high point of every episode is my dad jokes. So, for those of us that have traveled quite a bit, we're familiar with travel insurance, trip insurance, where if something goes wrong with your trip, you can get refunded on various costs. Well, my daughter and I like to go camping, and so I got some camping insurance, but too late I found out that if someone steals my tent in the middle of the night, I'm no longer covered. Thank you. Thank you.
AJ_O’NEILL: I smiled. I smiled.
STEVE_EDWARDS: I could hear the smile, I think. And then, you know, I've told the history of some of my jobs, where I've been fired for various things, like working at the calendar factory and taking off too many shifts, et cetera. Well, I had an uncle who was arrested. This one isn't job related, actually, but he was arrested for feeding pigeons at the zoo. The problem was he was feeding them to the lions. And then finally, we're interviewing at GovExp, by the way; we're looking for a Vue and Laravel developer, and I've been interviewing different candidates. I was interviewing this one candidate, and he had a gap in his resume. And I said, can you explain these four years in your resume where you had no job? He says, oh yeah, that's when I went to Yale. I said, wow, impressive. You're hired. He said, thanks, I really need the job.
DAN_SHAPPIR: Okay. Jail. Yeah, yeah. Jail. Job. Jail.
STEVE_EDWARDS: Yeah, yeah. Okay.
DAN_SHAPPIR: That was nice. I liked it.
AJ_O’NEILL: It's a little bit more one you have to read than hear, I think, because if you do it with the right accent, it gives it away too early.
STEVE_EDWARDS: Yes, that's a good point. I'm not sure what accent I would use for that particular one.
AJ_O’NEILL: Well, somebody with a speech impediment, which could be considered insensitive.
STEVE_EDWARDS: That's why I didn't do that. Anyway. All right. With those done, we're on to saving the best for last.
DAN_SHAPPIR: We are forgetting Gal.
STEVE_EDWARDS: No, that's what I'm saying. We're saving the best for last, and I was getting to Gal.
DAN_SHAPPIR: I apologize, sorry.
STEVE_EDWARDS: You gotta have faith, man. Okay, Gal, what picks do you have for us?
GAL_SCHLEZINGER: Yeah, so I will pick an app called Raycast. I'm not sure if it was already picked.
STEVE_EDWARDS: Yes, I use it all the time, every day.
GAL_SCHLEZINGER: And specifically, it has extensions, and one of the extensions is Home Assistant. So you can control your home with fuzzy searching, and it is very nice. It's very easy to control your home this way. And Home Assistant is very good, so if you have a smart home, you can just make sure that it works. Yeah.
STEVE_EDWARDS: For those who don't know, Raycast is a Mac launcher tool. Alfred is probably one of the better-known ones, but it's proprietary. I use Raycast constantly throughout the day for opening the browser and running all kinds of tasks. You can customize it to do custom searches or to open particular applications, and you can create custom workflows to do certain things. It's a really flexible tool.
GAL_SCHLEZINGER: Yeah, I really liked it.
STEVE_EDWARDS: Did you ever use Alfred?
GAL_SCHLEZINGER: Yeah, I tried to use it, but it wasn't for me.
STEVE_EDWARDS: Yeah, one of the things I've found in using the two different tools is that Alfred has some really cool things built in that don't seem to work quite as well in Raycast, or maybe I haven't implemented them correctly. But Raycast does have quite a marketplace of open source plugins that you can download and install pretty easily. So yeah, it's a great tool.
GAL_SCHLEZINGER: If you want to build an extension, it uses some form of React, if I recall correctly. So you just write it in JavaScript and React, and that's your extension.
STEVE_EDWARDS: All right. Is that your only pick?
GAL_SCHLEZINGER: Yeah. So my last pick is not a tech thingy. How can I phrase it? I always try to work with people who are way smarter than me. So if you're working somewhere and you are the smartest person around, try to recruit someone else, or make someone else smarter and more empathetic, or try to find a new place, because I think you should be happy with your life and you should explore more. That's it. Basically, have fun with your life.
STEVE_EDWARDS: For me, that's a pretty easy thing to do. For someone like Dan, maybe not so much.
DAN_SHAPPIR: No, no, believe me, I've been very lucky in this regard. So as I mentioned, I worked with Gal at Wix. And you know, here's a great example of working with somebody who's smarter than me.
GAL_SCHLEZINGER: And there were others. As soon as Dan left, I just left.
DAN_SHAPPIR: Yeah, no, there are a lot of really smart people still working at Wix. Wix has been really great at hiring some really top-notch people, and they're still doing so. But the company that I've now joined also has some excellent people that I'm trying to bring on as guests as well. So yeah, I totally agree with what Gal said. You always need to be learning, and one of the best ways to do that is to simply work with really, really smart people who can open your eyes to new possibilities and technologies and whatnot. Definitely.
STEVE_EDWARDS: Excellent. So with that, we will call it a wrap on this episode.
DAN_SHAPPIR: Gal, yeah, again, me of little faith. Gal, how do people contact you?
STEVE_EDWARDS: I was getting there, man. You gotta have faith, man. You gotta have faith.
DAN_SHAPPIR: Well, it sounded like you were wrapping up, Steve.
STEVE_EDWARDS: I was, but that's part of my wrap-up.
DAN_SHAPPIR: Okay, sorry. I apologize for that again. So, yes. Gal, how do people get ahold of you? They want to talk to you, hear your wisdom.
GAL_SCHLEZINGER: So you can follow me, @galstar, on Twitter, and anything else I have is just there. So just go to Twitter.
STEVE_EDWARDS: All right. And we will put that link in the show notes. Dan, is there anything else I'm forgetting?
DAN_SHAPPIR: No, I don't think so. Just remember to rate the recording. Well, actually, it'll happen automatically. Sorry, I apologize for that, Steve.
STEVE_EDWARDS: All right, no problem. All right, with that, we'll wrap up this episode of JavaScript Jabber. Thank you, Gal, for coming on and enlightening us on Vercel and Edge functions. And we will talk to you next time on JavaScript Jabber.
DAN_SHAPPIR: Bye.
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit c-a-c-h-e-f-l-y.com to learn more.