CHARLES: Hey, welcome back to another episode of JavaScript Jabber. This week on our panel, we have Dan Shappir.
DAN: Hello from Tel Aviv at war.
CHARLES: Steve Edwards.
STEVE: Buenos dias from a cold and clear Portland area.
CHARLES: AJ O'Neil.
AJ: Yo, yo, yo, coming at you live. Yeah, that's it.
CHARLES: Anyway,
STEVE: Looks like a floating head. You can make it a floating talking head, right?
CHARLES: You can, you can't. It would blend a lot. Anyway, we have a special guest this week. That's SaltyAom. Salty? Aom? I'll just call you Aom. Do you want to introduce yourself?
SALTY: Yes. So, hello from Thailand. My name is Aom. It's kind of hard to pronounce, but that's how you pronounce it. So basically, I am an open source developer, a maintainer of Elysia, and I'm working at Dequeal. If you're familiar with GraphQL, we've built something like that; that's what the people from where I'm working created, or something like that. But yeah. So overall, that's it.
STEVE: I would like to point out for the record that it is approximately 1:40 in the morning his time. So I appreciate that.
SALTY: Thank you.
AJ: Oh, wow.
SALTY: Yeah.
Hey, folks, this is Charles Max Wood. I've been talking to a whole bunch of people that want to update their resume and find a better job. And I figured, well, why not just share my resume? So if you go to topendevs.com slash resume, enter your name and email address, and you'll get a copy of the resume that I use, that I've used through freelancing through most of my career, as I've kind of refined it and tweaked it to get me the jobs that I want. Like I said, topendevs.com slash resume will get you that, and you can just kind of use the formatting. It comes in Word and Pages formats, and you can just fill it in from there.
CHARLES: So I'm just gonna jump in here and say some stuff, mainly because Elysia, or Elysium, is from literature, actually. I remember reading the Divine Comedy, where they go on the Elysian Fields. Anyway, cool stuff. But that's what we're talking about: what is Elysia.js? Do you wanna just give us kind of the 10,000-foot view?
SALTY: Let's start with the name first. There's a word, Elysia, from some literature that I read when I was younger, maybe 10 or 11, somewhere around that. If I recall correctly, it's a very nice story, something about love or something like that, but I forgot how it all went. I just remember that I liked it a lot, so when I was working on this, I thought, oh, I remember I like this word a lot, and there's some story behind it. Maybe I forgot the story, but that's fine; I just remember that I like the word, so I put it on there. On the technical side, Elysia is a backend framework for front-end developers, which is kind of weird, because before I got into the backend, I was a front-end developer, and I found the backend kind of hard. In the front-end, we have a lot of tooling, like TypeScript, a bundling step, things like that, and on the backend we don't have as much tooling to improve the developer experience. So when I was starting with backend, I got very frustrated with Express, Fastify, and Nest.js. I found it quite hard to set things up. So I wanted to create something that is maybe a little bit easier, and borrow some ideas from front-end development that can put front-end developers at ease, or something like that. If you are familiar with tRPC, there's a server library and a client library that allow you to get the types from the server and use them on the client, which simplifies calling data over the network a lot. It's like calling just a function, the networking stuff happens in between, and the result just magically appears in the front end. That's how I started building this stuff. That's all.
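For listeners following along, here is a minimal sketch of the kind of server Aom is describing. The exact details may differ between Elysia versions, so treat this as illustrative rather than exact.

```ts
// A tiny Elysia server: handlers are plain functions whose return values
// become the HTTP response (a string becomes text, an object becomes JSON).
import { Elysia } from 'elysia'

const app = new Elysia()
  .get('/', () => 'Hello from Elysia')
  .get('/user/:id', ({ params }) => ({ id: params.id, name: 'Aom' }))
  .listen(3000)

console.log('listening on port 3000')
```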
DAN: So I have to just make a comment on the name as well. First of all, if anybody saw Gladiator, the movie, it's actually mentioned there, because the Elysian Fields were like the Roman Valhalla. It's where you went when you died, sort of a grassy heaven, the way it was portrayed in the movie. And the other benefit of picking that name is that you managed to score elysiajs.com, which is not obvious for a lot of open source projects. A lot of developer projects have to settle for .dev or .io or something like that.
AJ: So there's no settling for .dev. .dev is awesome.
DAN: But, but Ctrl-Enter...
CHARLES: on this one,
DAN: You're better. But Ctrl-Enter still goes to .com, so that rules. But second, on the technical part, you managed to say a lot of stuff in a really short time, because you mentioned the fact that you were looking to create a kind of a replacement for Express and Fastify, and you also mentioned tRPC and type safety. So there's a lot to unpack in what you said really quickly. So maybe we'll do it a little bit more slowly. The first thing that I want to start with is you mentioned that it's an alternative to Express and to... what's the other one? Is it Fastify? I'm always confused; I always forget what the name is exactly. AJ, you probably remember.
AJ: Fastify is the one that's far too complicated for the performance that it gives you when you could have used something else instead
DAN: Uh, okay,
AJ: But I was on team Fastify for a while.
DAN: Okay, so my first question is this: given that Express rules so much of web development on top of Node, what's the motivation for creating yet another alternative to it? Like, why isn't Express or Fastify or one of the other alternatives good enough? Why do we need another one?
SALTY: That's a good question. So currently, if you know Vercel or Netlify or Cloudflare Functions, they are using a different standard from how Node's HTTP works, which is what Express and Fastify are built on. So if you want to use Express on a Cloudflare Worker, you have to add some config or add an additional Node.js layer or something like that, like Vercel Edge Functions and Netlify Functions add some Node compatibility to make it work. And Deno is the same; they actually use a different API. They have a standard called WinterCG, where a lot of runtimes try to unite together and create one standard so that JavaScript frameworks behave the same everywhere for making HTTP requests. It's called the Web Standard API, and they use something like new Request and new Response. If you are familiar with Next.js, they have API routes, and in an API route you can create something called new Response; that Response is the same standard used in WinterCG. So when you create a framework based on this, it allows the framework to run on any runtime without the overhead of converting from Node HTTP into the web standard and back and forth. The standard is going to be implemented on many runtimes, and Express and Fastify are implemented on Node HTTP, not on this new standard. So there are some new frameworks, like Hono and Elysia, that try to implement the framework based on this standard. That's it.
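The WinterCG-style handler shape he is contrasting with Node's http module looks roughly like this. The export convention varies a bit by platform (Cloudflare Workers, Bun, Deno), so this is a hedged sketch rather than any one runtime's exact API.

```ts
// A fetch-style handler: the runtime hands you a standard Request and
// expects a standard Response back, with no Node-specific req/res objects.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)
    if (url.pathname === '/hello') {
      return new Response(JSON.stringify({ message: 'hi' }), {
        headers: { 'content-type': 'application/json' },
      })
    }
    return new Response('Not found', { status: 404 })
  },
}
```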
DAN: So rather than building on existing ad hoc node APIs, you're using the quote unquote, well, not really quote unquote, the standard APIs that are provided by the Winter CG. Is that what I'm understanding? And that makes it possible to run Elysia on any Winter CG compatible platform without the need for any sort of a compatibility layer that adds overhead and complexity, correct?
SALTY: Yes. Yes, that's correct.
DAN: In that case, that kind of leads me to my next question, which is: when I look at the Elysia website, elysiajs.com, I see that it very explicitly states that it's intended for Bun. So that kind of contradicts, it seems to me, the statement that it's potentially compatible with any WinterCG-compatible platform. So is it for Bun? Or is it recommended for Bun? Or is it only for Bun? Like, what's the attitude here?
SALTY: So actually, Elysia was designed to run on Bun at first, but then we found that Bun also uses the WinterCG standard, which allowed us to expand the API to other runtimes as well. We are still in very early stages, but technically, there are runtime differences for each piece of code. For example, Node.js uses V8, the V8 engine, to run JavaScript, but Bun uses JavaScriptCore. So there are areas where some code tends to run faster in JavaScriptCore and some code tends to run faster in V8. With Elysia, we tried a lot of code, benchmarked it in each runtime, and picked what is fastest in JavaScriptCore and put that into the framework. So basically we are optimized for running in JavaScriptCore, but the performance doesn't differ a lot on each runtime. Technically it's meant to run on JavaScriptCore, but the code also runs the same in other runtimes; it's just faster in JavaScriptCore because it's meant to run there. And being based on the WinterCG standard allows us to expand to the other runtimes I mentioned. Currently we have some plugins, like one for the file system for serving static files. The first-class plugins we are building are built especially for Bun, and then we provide what you'd call second-tier support for Node.js and Cloudflare Workers. But the main environment we are running and testing on is mostly Bun.
AJ: So my understanding is that V8 is for desktop. It was designed by Google for desktop, for long-running applications where you get a lot of benefit out of JIT. And JavaScriptCore was designed for mobile, where things need to run very quickly the first time, and JIT isn't as important as just, as soon as you get the code, run the code. So I understand why the two would be different. You're going to do a micro-benchmark on one line of code over here versus one line of code over there, and you're going to get better performance on one versus the other, because V8 is going to be slower up front but faster over time, and JavaScriptCore is going to be faster in a one-shot. And that's what these edge functions and edge platforms are geared towards: one-shots. So I know that WinterCG could technically apply to any runtime, but in practice, is WinterCG going to be JavaScriptCore, like Netlify or so, et cetera?
SALTY: Yes, exactly.
AJ: Okay. So it's not just Bun, and it's not just WinterCG. It's that there's an ecosystem here where they pretty much all are going to use JavaScriptCore and they're pretty much all going to use WinterCG. So you're playing to that benefit, and there's no chance that Node would ever do as well, because a Node application is just never going to be able to be tuned for WinterCG, because Node is running on V8.
SALTY: Yes, exactly.
DAN: Although I do have to ask myself when I'm thinking about the code that you're writing, maybe you can give a concrete example of where you see such a difference. I mean, which code do you write that does any sort of processing that a certain optimization gets it to run faster in core and a different type of optimization gets it to run faster in V8.
SALTY: Okay, let's start with my most favorite one. So if you're familiar with the spread operator in JavaScript, it allows us to spread the properties and create a new object based on an existing one, right? In V8, they really optimize for that spread operator. So basically, when you use the spread operator, it runs very fast, like a thousand million operations per second or something like that on my machine, which is an M1 Max. But when I do the same on JavaScriptCore, it runs just like 10 million. So there's like a hundred times difference on a very simple thing, like copying the object and then creating some object over it. When I first found this, I was actually very shocked, because I tend to use object spread all the time in my daily job, and I used the same pattern in my framework. So when I ran this on Bun, I had some questions about why it was very slow. So I have to do a lot of micro-optimization to test cases like this. And then a lot of weird code happens that looks something like this: it looks very simple, but it actually has very different performance on different JavaScript runtimes.
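A rough sketch of the kind of micro-benchmark he is describing: time the same spread-heavy loop under Node (V8) and under Bun (JavaScriptCore) and compare. The numbers are entirely machine- and version-dependent, which is exactly the caveat Dan raises right after this.

```ts
// Copy an object with spread in a hot loop and time it. Run the same file
// with `node` and with `bun` to compare the two engines.
const base = { method: 'GET', path: '/', headers: { accept: 'application/json' } }

let sink: unknown
const start = performance.now()
for (let i = 0; i < 1_000_000; i++) {
  sink = { ...base, index: i } // fresh object via spread on every iteration
}
console.log(`spread copy: ${(performance.now() - start).toFixed(1)} ms`, sink !== undefined)
```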
DAN: I have to say that one of my favorite quotes in the context of micro-benchmarks and micro-optimizations is "lies, damn lies, and micro-optimizations." Because it's really, really difficult to properly test micro-optimizations for JavaScript engines. Because, like AJ said, the engines optimize things over time, and they can make all sorts of sophisticated decisions, like identifying invariant code in loops, and do all sorts of things. And then you think that you're benchmarking one thing, but in fact you're benchmarking something that's totally different. But you are correct that when you're optimizing code that gets reused so many times in a server environment,
it can make a significant difference. Which brings me to my other point, where people forget that while all these JavaScript engines, be they V8 or JavaScriptCore or SpiderMonkey or whatever, all adhere to the same standard, they're totally different implementations. And the developers, like you said, optimize different things, or for different scenarios, or they place their focus in other places. So it's not really that surprising that code that might run really efficiently in one engine might run a lot less efficiently in another, or vice versa. So it's really interesting that you found that the spread operator is really not that optimized in JavaScriptCore. Obviously it might change in a future version; they might come out with a new version of JavaScriptCore where it's much better optimized. So yeah, these things do change over time. That's also important to remember. Interesting. So basically, what did you do? Did you just avoid it? Did you use the arguments object instead? Like, what did you do when you saw that you got such bad performance with the spread operator in that scenario?
SALTY: So there are two cases that I hit. The first one was creating a new object and then adding some properties on top of it. But sometimes it doesn't have to be a copy, so instead of doing a spread, I just assign directly onto the object if it doesn't have to be a new one, if there's no other reference to it at all. And the second thing, to handle the shallow copy of the object, is that I use Object.assign instead on JavaScriptCore. It is faster than spread by like five times on JavaScriptCore. So that's how.
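Sketched out, the two workarounds he lists look something like this. The helper names are hypothetical; the point is the pattern, not Elysia's actual code.

```ts
interface Ctx { method: string; path: string; [key: string]: unknown }

// 1) No copy actually needed: if nothing else holds a reference to the
//    object, just mutate it instead of spreading into a new one.
function attachQuery(ctx: Ctx, query: Record<string, string>): Ctx {
  ctx.query = query
  return ctx
}

// 2) A shallow copy is needed: Object.assign was measurably faster than
//    `{ ...ctx }` under JavaScriptCore in his tests.
function withQuery(ctx: Ctx, query: Record<string, string>): Ctx {
  return Object.assign({}, ctx, { query })
}
```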
DAN: Oh, so Object.assign instead of spread. Interesting, because you would think that they would do more or less the same thing, so it's really interesting that they don't. But it's also interesting because, AJ, unfortunately I didn't participate in that episode where you interviewed the creator of Bun, Jarred Sumner, right? That's his name. And it's interesting because it seems that there's a lot of commonality here, of really optimizing each and every line of code to try to squeeze out the most performance that you can get. So it's really interesting to see that this project has a similar philosophy to the Bun runtime itself.
CHARLES: I just want to clarify something real quick. Does this run on Node.js, or does it only run on Bun? It sounded like it's optimized for Bun, but...
SALTY: Yeah, we have a plugin for converting between the web standard and Node and back and forth, as I mentioned at the start. So we have a plugin for that, yes.
CHARLES: Okay. So you've mentioned WinterCG and some of the optimizations you've made. I have to say, I haven't done a ton with Express, but this looks a bit different from what I've seen of Express. So how did you decide what approach you wanted to take for things like, you know, routing, and returning HTML and JSON and whatever else you've got it able to do?
SALTY: So I want the backend framework to just return a value, and that return value gets sent to the user, to the client. The way Express works is that you get some context and then call something like res.send, and then you send something. If you want to send JSON, you have to call res.json and pass the JSON into there. But in Elysia, I actually want to just return the object like a normal function and have it passed to the client. And on the client, we have a custom client, like tRPC, to get the return value of the function. So you make the server behave just like a normal function, and the client calls the function and gets the return value, instead of doing res.json. Using the return keyword is very familiar to front-end developers, because when you say that it is a function, they don't expect the response to be something like res.json or res.send; they expect it to be a value. And then when a front-end developer sees the server code, they just know: okay, this function, this endpoint, returns a value. The second thing is that when you use res.json, TypeScript can't infer the return type from there. So on the client side, we have a custom library for calling fetch, something like tRPC, right? If you've ever been curious why tRPC doesn't use something like res.json or res.send, it's because when you use that syntax, you can't infer the type from the server. So you are forced to return the value as a return value instead of wrapping it in a response call. So the two key takeaways here are: first, TypeScript doesn't like response functions; TypeScript wants you to return a value as a value to get the inferred type. Second, I want to make it familiar to front-end developers, so when you want to return a response, you just return a value and it gets converted into a response.
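Here is the contrast he is drawing, in code. The Express line is the familiar res.json style; the Elysia version just returns the value, which is what lets TypeScript infer the handler's return type. This is a sketch, and details may vary by version.

```ts
import { Elysia } from 'elysia'

// Express-style: the handler calls a method on the response object, so the
// function itself effectively returns nothing and there is no type to infer.
//   app.get('/user', (req, res) => { res.json({ name: 'Aom', age: 25 }) })

// Elysia-style: return the value; the framework turns it into the response,
// and the return type ({ name: string; age: number }) is inferable.
new Elysia()
  .get('/user', () => ({ name: 'Aom', age: 25 }))
  .listen(3000)
```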
DAN: So again, I'd like to kind of decompose that into two parts. The first part is that you chose a simple API over, let's say, compatibility with Express. So your intention was not to create a library that is as compatible as possible with Express, but rather one that's kind of similar to Express but intentionally not compatible, in order to be easier to use and promote certain good programming practices. So that's part number one, correct?
SALTY: Yes.
DAN: OK. And the second part is, and it's really highlighted on the project's home page as well, is that you really wanted to focus on type safety, so that as much as possible, like you said, you can infer the types from the actual return value of the server-side function. And then on the client side, if you want, you can import those types and get type safety across the wire. Now, just to clarify, to make sure that I understand: you don't have to use this type safety, right? At the end of the day, it's just a standard HTTP response. So if I'm using just a regular old web client, like if I hit the URL in the browser address line, it'll just work. It will just get a JSON response with the appropriate values. So I don't have to use the type safety. It's a benefit and a feature that I can use if I want to, correct?
SALTY: Yes, exactly.
DAN: Now, this is kind of reminiscent of the conversation we had in an episode that just came out at the time of this recording, where we discussed the use of RPC in modern meta frameworks, like in Next.js version 14, or in Qwik, or in Solid, where you call functions across the wire and you get type safety provided by the meta framework that you're using. So for example, if you're using Next.js 14, you create a function, you put 'use server' in it, then you can call it from the client, and it behaves like a function call and it's type safe all the way, but it's not really a URL. You're literally calling a function; it gets translated into a URL behind the scenes. In your case, it's not that. It's similar to that, but it's not that. It has a URL. It has a standard HTTP return value. It's not masquerading as something else. But you still get the type safety, correct?
SALTY: Yes, exactly.
DAN: And the type safety on the client side is provided by a library that you're using, correct? Which library is it?
SALTY: Yeah, it is called Eden. It is a plugin; on the Elysia homepage, if you go to the plugins, it is listed as the first plugin, for this exact purpose.
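Roughly how Eden gets used, assuming a server that exports its own type. The import name and call shape have changed between Eden versions, so treat this as a sketch rather than the exact current API.

```ts
// server.ts
import { Elysia } from 'elysia'

const app = new Elysia()
  .get('/user', () => ({ name: 'Aom', age: 25 }))
  .listen(3000)

export type App = typeof app

// client.ts
import { edenTreaty } from '@elysiajs/eden'
import type { App } from './server'

const api = edenTreaty<App>('http://localhost:3000')
const { data } = await api.user.get() // data is typed from the server's return value
```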
DAN: Is it like a project that you chose, or is it something that you also worked on? Like, what's the relationship between Elysia and Eden, because both of them kind of reference each other. So I'm wondering if it's something that you also created.
SALTY: Yes, it is something that I also created.
DAN: Oh, cool. So you built Elysia on top of Eden in a lot of ways.
SALTY: So actually, I first created Elysia, and then I found that maybe it would be good if you could do something like tRPC for the server. So I created Eden for that.
This episode is sponsored by Miro. I've been working lately on a whole bunch of different projects for top end devs. Everything from meetups to courses to podcasts to the website. And a lot of the things that I've been working on have gotten really, really messy. And even though I have some software that helps me manage the projects, one thing that I figured out was that it really helps to have a tool that you can use to manage kind of the layout and the organization of your project not just whether or not tasks are being done. And so I picked up a program called Miro and Miro, what it does, I'll just give you an example, when I'm building a course, I sit down and I open up their mind map and I just put all of the information into the mind map. And I'll do that for probably anywhere from 15 minutes to an hour, just depending on what it is that I'm organizing. And then what I can do is from there, I can actually reorganize it and rejigger it so that it actually makes sense. And it really, really helps me get all of the ideas on paper or on the screen in this case, and get an idea around how these things come together. So if you're looking to put together a project or organize some ideas, you should definitely check out Miro. You can find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at Miro.com slash podcast. That's three free boards at Miro.com slash podcast.
DAN: I understand. It's kind of similar to Zod in a lot of ways though, isn't it?
SALTY: So Zod is for validating the API, right? So instead of using Zod, I actually use something called TypeBox, which is kind of like Zod, but it is a lot faster, like a hundred times faster or something like that.
DAN: So TypeBox is similar to Zod in that you can specify a schema in TypeScript, and then that schema becomes type safe. So it's a type-safe way to send data across the wire. And you built Eden on top of that to infer the types from the return values in Elysia. Okay, now I see what you're getting at. So you effectively constructed a sort of tRPC sort of thing out of all these moving parts.
SALTY: Yes.
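For comparison, here is the same tiny schema written with Zod and with TypeBox (the library Elysia re-exports as `t`). The speed difference Aom mentions comes largely from TypeBox producing JSON-Schema-style definitions that can be compiled into specialized validators; exact numbers will vary.

```ts
import { z } from 'zod'
import { Type, type Static } from '@sinclair/typebox'

// Zod: schema object with its own runtime validation
const UserZod = z.object({ name: z.string(), age: z.number() })
type UserFromZod = z.infer<typeof UserZod>

// TypeBox: schema is a plain JSON-Schema-like object, type derived statically
const UserBox = Type.Object({ name: Type.String(), age: Type.Number() })
type UserFromBox = Static<typeof UserBox>
```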
DAN: By the way, I have to mention that on your home page, at the bottom, you have a really nice playground. So anybody who's listening to this podcast and finding it difficult to follow along should just go to elysiajs.com, scroll to the bottom, and there's this kind of interactive playground that they can use.
AJ: I do not see an interactive playground. I see a "start in minutes" and an introduction and a cheat sheet. No...
DAN: there's a try
AJ: The "try it out"? Not at the very bottom. Okay. Okay.
DAN: Ah, so you went even lower. Yeah, exactly, it's almost at the very bottom. Yes.
AJ: Okay. Yeah, so I had a few questions here. One of them was, how do you deal with headers and streams? Because I think that's the main thing that is not... you can't return... well, I guess you could have some way where you return headers as part of your return object. But then streams are also... you know, if you're streaming some data rather than providing the object whole, what's the story for that?
SALTY: So for a stream, you can't actually return the stream directly, right? You have to create something like a readable stream or something like that in Node.js. But in order to make that work like a normal value, we created a plugin called Elysia Stream, which allows you to create a new class, and then when you want to return a stream, you call a send method. So that is the only case where you have to use send instead of returning the value. That's it.
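From memory of the plugin's docs, using the stream plugin he mentions looks roughly like this; the package and method names are close to, but not guaranteed to be, the current API.

```ts
import { Elysia } from 'elysia'
import { Stream } from '@elysiajs/stream'

new Elysia()
  // The one case where you call send() instead of just returning a value:
  // the handler returns a Stream, and chunks are pushed through send().
  .get('/events', () =>
    new Stream((stream) => {
      stream.send('first chunk')
      setTimeout(() => {
        stream.send('second chunk')
        stream.close()
      }, 1000)
    })
  )
  .listen(3000)
```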
AJ: Honestly, I can't think of a reason that you would ever use a stream in a web framework in production, because typically you're just sending a file, and if you're sending a file, then your web server would send the file. Oh, except when you've got to do generative stuff, like creating a CSV or a PDF or something like that on the fly. That's the case for...
DAN: And you kind of answered how I would do it in an ideal world. I don't know how difficult it would be to implement in a framework like this, especially given that I've never done it. But like you said, AJ, it's generative. So I would gravitate towards generators as a way to do it. Instead of returning a value, I would yield values.
AJ: Get thee hence. Get thee hence. Where's my holy water?
DAN: Yeah, but that's what it was. That's what it was created for, really, you know.
AJ: Oh, it was created to confuse people and sound cool.
DAN: It doesn't have to be difficult, really. It seems more scary than it is once you get used to it. But again, it seems to fit the case exactly: every time you have more content, you yield some more stuff.
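Here is a framework-agnostic sketch of the generator idea Dan is describing: the producer yields chunks as they become available, and a small adapter turns the async generator into a standard streaming Response. The names here are hypothetical.

```ts
// Produce rows one at a time instead of building the whole CSV in memory.
async function* generateCsv(rows: Array<{ name: string; city: string }>) {
  yield 'name,city\n'
  for (const row of rows) {
    yield `${row.name},${row.city}\n` // yielded as soon as each row is ready
  }
}

// Wrap any async generator of strings in a web-standard streaming Response.
function toStreamingResponse(gen: AsyncGenerator<string>): Response {
  const encoder = new TextEncoder()
  const body = new ReadableStream({
    async pull(controller) {
      const { value, done } = await gen.next()
      if (done) controller.close()
      else controller.enqueue(encoder.encode(value))
    },
  })
  return new Response(body, { headers: { 'content-type': 'text/csv' } })
}
```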
AJ: Well, there was another question that I.
DAN: But wait, before you switch over, is that something that you might consider, either doing some sort of generator, or maybe returning an async iterator, or something along these lines?
SALTY: Actually, I first considered using the yield keyword and generator functions, because in the client library, I actually prototyped some response handling for streams. Basically, when you call the fetch method in JavaScript, you would call .json() and then you get the result right away. But when the response is a stream, you can't do that, because if you do, it is going to block until the whole stream is over. So I implemented an async iterator over that. And I also want to implement that on the server as well, to sync the APIs, to make them behave the same. But I asked a lot of people, and many people said they are kind of afraid of generator functions and the yield keyword in general.
AJ: So good
SALTY: So for now, I am going to implement it the same way, and then I will implement the yield keyword later. So we are going to support two ways to do this.
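On the client side, the behaviour he is describing amounts to iterating the response body as it arrives instead of awaiting response.json(). A hand-rolled version, which is not Eden's actual implementation, looks something like this.

```ts
// Expose a fetch Response body as an async iterable of decoded text chunks.
async function* streamText(response: Response): AsyncGenerator<string> {
  const reader = response.body!.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const { value, done } = await reader.read()
    if (done) return
    yield decoder.decode(value, { stream: true })
  }
}

const res = await fetch('http://localhost:3000/events')
for await (const chunk of streamText(res)) {
  console.log('received:', chunk) // handled as it arrives, not after the stream ends
}
```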
DAN: I think, like I said, I totally understand why people are scared of it. But on the other hand, when done correctly, it's a lot simpler than people worry it is. It's almost magic how simple it can be. There are potentially some issues with it. Like, I don't know what the performance will be. So, you know, given your performance focus, it really depends how efficiently the engine actually implements all these mechanisms, and it can vary a lot. Another potential issue to look at is the push-pull problem, like how you deal with congestion. You basically want the client to be able to pull stuff, and you don't want to create congestion on the server side. It's an interesting question how you control the flow in these types of scenarios. Well,
STEVE: unfortunately, this is the time of year for congestion and colds and stuff. So hopefully there's some good code medicine for that.
AJ: So speaking of congestion, there was one thing I meant to ask earlier, but I realized I was on mute. We were talking about the performance, and we're talking about these micro-benchmarks, and Dan was talking about how there's, you know, lies, damn lies, and micro-benchmarks, or whatever it was. So you're showing here, and I'm assuming that these requests just return the string "hello world" or something, that Express can only do 15,000 a second, but Elysia can do a quarter million requests per second. But the two concerns I have are: one, this is being benchmarked on an Intel i7, which is nowhere close to what you're going to have in a deployment environment. You're going to be dealing with a 500-megahertz, single-thread... or I guess not single thread, but single active thread, no hyper-threading environment, which is going to be very different from a desktop that has eight cores available, or whatever, and full hyper-threading available to it. So there's that. And then there's also: I don't know how much of this is measuring overhead, and how important that overhead is. Because if I'm returning "hello world", then all I'm measuring is the functions and the overhead of the framework. But that may not matter in production, because if I'm returning an API response, the overhead of JSON.stringify on, you know, a kilobyte of JSON might be so much that if we were to run this benchmark in that type of environment, maybe Express and Elysia are only a hair's width apart once there's actually any of the work that you actually do in your application. And then I did also look up, there's a web frameworks benchmark, and on this one it has, surprisingly, a few JavaScript frameworks at the top, and then a bunch of Go frameworks, and then a few more JavaScript frameworks, and then a few more Go, and then down at the very, very bottom, the fourth one from the bottom, is Elysia. And it's showing that it's only getting a little over 10,000 requests per second, whereas it's showing that Express is getting almost two and a half times that. So what do these benchmarks actually say for an actual application, versus it just looks cool to put on the homepage and draws in some hype?
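To make AJ's point concrete, here is a hypothetical pair of routes: one returns a constant string, so a benchmark against it measures mostly framework overhead; the other does a few milliseconds of simulated database work, so framework overhead becomes a small fraction of each request.

```ts
import { Elysia } from 'elysia'

// Hypothetical stand-in for a real database query (~3 ms of latency).
const fakeDbQuery = () =>
  new Promise<Array<{ id: number; title: string }>>((resolve) =>
    setTimeout(() => resolve([{ id: 1, title: 'buy milk' }]), 3)
  )

new Elysia()
  .get('/plaintext', () => 'hello world') // dominated by framework overhead
  .get('/todos', () => fakeDbQuery())     // dominated by the simulated DB time
  .listen(3000)
```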
SALTY: Okay, sure. I get this question a lot about that web frameworks benchmark, right? They actually created an issue on Elysia's side, on our repo; you can go find the issue there. The maker of this benchmark asked me to do something like a code review of what they are doing for Elysia and Bun as well. Actually, if you go back to that benchmark from around a month ago, what is it called, the ninth month in English... around before October, there was a benchmark run where Elysia was performing around 100,000, which is around the level of Go and Rust. If you click at the top of the page, you can scroll back to the data they collected before. If you scroll back, you can see that most frameworks that run on Bun, like Elysia and Hono, were performing very, very well, like 100,000 requests per second. But at some point they changed some configuration of the Dockerfile. The way this web frameworks benchmark works is that they pack the runtime and the code into a Dockerfile and then execute the code. The problem is that, as came up when they created the issue on the Elysia repo, they are using PM2 without running it in multi-process mode. So what you are seeing right now on the benchmark website is actually running on a single thread. That is why Elysia is down at that level, because the other Node frameworks, like Express and Fastify, are using multiple processes to run the benchmark. And the Bun frameworks, like Elysia and Hono, for some reason have been behaving quite slowly recently, because they changed the Dockerfile or some configuration or some hardware that I am not aware of. But I actually created an issue on their repo to fix that, and I am taking a look into that issue. So I can assure you that previously, around a month or two months ago, they were performing a lot faster than they are now.
AJ: So I do see that, October 3rd. I look at the October 3rd benchmarks, and I see that there, Elysia is about five times faster than Express in whatever this benchmark is, which, you know, who knows what that is. But yeah, that was exactly my question: the benchmark here is multi-threaded, multi-core, versus an actual deployment environment. If you're measuring wide, it's different than if you're measuring... horizontal measuring is different than vertical measuring. Here you're doing vertical measuring. There, they're doing kind of a hybrid: they're doing horizontal measuring, but only measuring one instance of it, whereas you are doing vertical measuring. So yeah. Okay. That makes sense. But in practice, with an actual application, if you implement, let's say, a to-do app... well, I don't know if that's even really enough, but if it's a to-do app that connects to the database and has some items, and you have that application implemented in both Express and in Elysia, what do you think is the real difference in performance? And is that even as big of a deal? Because, again, you could just go with Go if you were really concerned about performance, or Rust, if you're hyper-concerned about it.
SALTY: So, about Elysia: if you do something like that to-do, a simple to-do list for this case, and you compare it to Node, like Express and Fastify, they are not going to have a huge difference, like 10 times. It is not that case, because if you do something like database calls, they are going to have some overhead over there too. So it can never get up to 10 times faster or 20 times faster. It never did that in the real world with database calls or something like that. But if you do something in memory, like maybe...
AJ: Which is your edge functions, your WinterCG stuff?
SALTY: Yes.
DAN: Or just using Redis.
SALTY: Yes, Redis as well. They are going to perform a lot faster, because, first, in the edge functions they are using JavaScriptCore, and the code that we are writing is very optimized for JavaScriptCore. Actually, Elysia has something like a fake compiler that takes your code and generates new code on the fly to remove any overhead that it could have, or that...
CHARLES: Wow.
SALTY: Or something like that and so, yeah,
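As a deliberately simplified illustration of the general technique behind a "fake compiler" — not Elysia's actual internals — the idea is to generate a specialized handler per route at startup, so the hot path contains only the code that route needs.

```ts
// Build the per-route handler source once, compile it with the Function
// constructor, and reuse the resulting function for every request.
type RouteConfig = {
  parseJsonBody: boolean
  handler: (ctx: { body?: unknown }) => unknown
}

function compileRoute(route: RouteConfig): (request: Request) => Promise<unknown> {
  const lines = [
    'return async function (request) {',
    route.parseJsonBody
      ? '  const body = await request.json()' // only emitted if this route needs it
      : '  const body = undefined',
    '  return handler({ body })',
    '}',
  ]
  return new Function('handler', lines.join('\n'))(route.handler)
}
```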
DAN: So before we get to that, and I will want to get to that, I just wanted to also respond to you, AJ, in this context. Because, first of all, as a person who deals a lot in performance, I can tell you that performance is a concern, but rarely the main concern. It's not like people say, oh, performance is the most important thing, so I'm going to choose Go over JavaScript. They choose Go or JavaScript based on a variety of concerns, of which performance might be one, probably should be one, but mostly they need performance to be good enough, rather than to be the most performant solution possible. So they might prefer to use JavaScript because, I don't know, maybe it means that it's easier for them to move developers between the front end and the backend, or because the person who did the initial implementation for the startup liked JavaScript, or maybe they liked the JVM, or maybe they liked Go. So they make whatever technical choices they make, but they just need to know that they're not going to dig themselves into a hole, let's say, performance-wise or scalability-wise.
AJ: And by the way, on the benchmark that I'm looking at from October, this actually outperforms the framework I use in Go, and a number of other popular frameworks in Go. So just to clarify: when I said Go might be faster, according to these benchmarks, most of the Go frameworks are actually slower than Elysia.
DAN: And the other thing is that if you're looking currently at the edge function world, that's almost by definition going to be Node... well, not Node-based, but
AJ: JavaScript
DAN: JavaScript based, or, you know, maybe WebAssembly, but probably JavaScript based. So that's certainly an interesting use case for that scenario. And, you know, especially on the edge, you probably want the fastest thing possible, because that's why you're using the edge in the first place.
AJ: I think you're using the edge in the first place because it's really cool right now.
DAN: Yeah, that's probably true. We had a whole episode where we had Gal Schlezinger from Vercel, and the whole episode I was trying to get him to say what the killer app for edge is, and I don't recall that I got an answer.
AJ: investor money, investor money is the killer app.
DAN: Yeah. Probably. Um, yeah. So what I'm getting at is that here is a really nice-to-use library that's an alternative to Express, that in a lot of ways has a much simpler API, is type safe across the wire, and also happens to be a lot faster. Now, you put faster at the top because it's sexy, but for me, it's more like the icing on the cake in a lot of ways.
SALTY: So basically, because a lot of developers say that Node is slow, I kind of want to destroy that perception. That's why I put the performance at the top, because it catches a lot of eyes more easily than saying I have something like, maybe, OpenAPI support or something like that.
AJ: Yeah.
CHARLES: But are your benchmarks running on Bun, or are they running on Node?
SALTY: Ah, the benchmark is running on Bun. But for Express, we also have a Bun version and a Node version as well.
DAN: So all the tests that you're showing are on Bun, correct?
SALTY: There are some other frameworks, like Hono, that actually work with a lot of runtimes, like Bun and Deno as well. So I run them on the other runtimes too, if the framework supports it. So there's Express on Bun and Express on Node, or Hono on Deno and Hono on Bun, and then I compare the performance difference.
DAN: I hope I don't get hate for this. We actually use Nest at Next Insurance, where I work, and I really don't get why people like it so much. I'm not a Nest person. Maybe it's, you know, I'm not big into decorators. Maybe it's that.
CHARLES: Maybe we need to get Kent on to talk about why he'll never use Nest. He wrote a blog post about that.
DAN: Well, he spoke about why you should never use Next, not why you should never use Nest.
CHARLES: No, right. Yeah, you are correct. I mixed them up.
DAN: Yeah.
STEVE: So are you saying that's just for the birds then?
DAN: Yeah, maybe. I don't get the appeal, but maybe it's just me. But again, going back to Elysia: so we said it's potentially like 18 times, let's say an order of magnitude, faster than Express, and let's say several times faster than Fastify. So if you're concerned about performance, you've got that covered. It's got a much simpler API, at least for the non-streaming scenario. It's type safe. Oh, and I see that it's also OpenAPI compliant.
AJ: And that's just the new name for Swagger, right?
SALTY: Yes. Actually, it is the standard that Swagger is built on, but mostly we say OpenAPI for Swagger.
AJ: The standard that they retroactively built it on after they turned it into the standard.
SALTY: Yes. But it is nice to have, because in all my previous work that I did, like freelancing, I was using Fastify, and it is very hard to make it work with Swagger and make it work with TypeScript. When I have to change the schema, I have to change it in TypeScript, change it in the Fastify schema, and then change it in the validation library. So I have to change a lot of things, and there are a lot of frequent changes. So I try to make it defined with one single schema, which is the TypeBox that I was using, and it supports the OpenAPI schema to generate the docs as well. That is really nice to have, to let the manager or a senior programmer see what you are working on, or something like that.
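A sketch of the single-schema workflow he is describing, using Elysia's TypeBox-based `t` and the swagger plugin. The plugin name and defaults are from memory of the docs, so treat the details as approximate.

```ts
import { Elysia, t } from 'elysia'
import { swagger } from '@elysiajs/swagger'

new Elysia()
  .use(swagger()) // serves generated OpenAPI/Swagger docs (e.g. at /swagger)
  .post(
    '/sign-up',
    ({ body }) => ({ ok: true, name: body.name }),
    {
      // This one schema drives runtime validation, the inferred TypeScript
      // type of `body`, and the generated OpenAPI documentation.
      body: t.Object({
        name: t.String(),
        age: t.Number(),
      }),
    }
  )
  .listen(3000)
```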
AJ: So how do you deal with headers? I don't think we actually rounded that question out.
SALTY: Ah, yes. So for headers, if you are familiar with Koa.js, when you get a request and response, everything is done through a context. So there are two cases. If you want to set a header, you can use context.set.headers to append the value to the headers. The second one is that you can explicitly return something called new Response, and that class is a standard built into the WinterCG standard, so you kind of get that for free. If you want to return a value that way, the second parameter of the constructor accepts something called headers, and you can add the headers to that as well. You can choose to return whichever one you like. But the good thing about Elysia is that if you're working with CORS or something like that, you don't have to explicitly set the headers, because the plugin already sets them there for you.
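In code, the two options he lists look roughly like this (a sketch; property names follow the docs as I remember them).

```ts
import { Elysia } from 'elysia'

new Elysia()
  // 1) Via the context: mutate set.headers and return a normal value.
  .get('/via-context', ({ set }) => {
    set.headers['cache-control'] = 'max-age=60'
    return { ok: true }
  })
  // 2) Via a plain web-standard Response with headers attached yourself.
  .get('/via-response', () =>
    new Response(JSON.stringify({ ok: true }), {
      headers: {
        'content-type': 'application/json',
        'cache-control': 'max-age=60',
      },
    })
  )
  .listen(3000)
```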
AJ: And so how does it know? Well, I mean, generally you're returning JSON; like, 99% of the time you're returning JSON. But if you did need to return text or HTML, does it do some sort of peeking at the content when it's non-object content, non-object, non-stream, to be able to tell what type it is? Or is it JSON by default? Like, if I just had a stream and you didn't know what was in the stream, what would the header be?
SALTY: Actually, in Elysia, when you return the value, it gets passed to a function called mapHandler to map that value into a web standard Response. What we did is try to find the most common cases that developers tend to return. If they return text that is HTML, with an HTML value, Elysia will automatically set the content type to HTML. If the value is JSON, it automatically sets it to JSON. And if it is a file, you get it like that. There are a lot of cases that we try to automate.
AJ: But are you saying that at runtime, it looks at the data, and whatever that function returns, it memorizes the type and then just returns that type? Is that what you're saying?
SALTY: Actually, we do both. As I said before, it actually has a fake compiler to detect the return type and return value. But in case it can't detect the value, it tries to interpret it on the fly.
DAN: And what if I want to just add, I don't know, let's say a cache response header, something that doesn't have to do specifically with the content type, but rather just extra information on top?
SALTY: If you want to add some manual header, you can set a header on the context, via context.set.headers, which is passed to every handler in the...
DAN: Ah, okay. So in addition to the return value, if I want to provide some additional metadata, I can do it via a context object that's an optional parameter to my handler, as it were. OK, that makes sense. Yeah. I assume you can also use that to do stuff like, I don't know, a 304 response instead of sending all the data down again, or something along these lines. And again, it all works really well with Bun's really fast system for, let's say, manipulating files and so forth. So I really like the combination of these things together. I know that it's not necessarily AJ's jam, but I really like the type safety across the wire. The fact that you can post JSON data and get a build-time error if you're passing in an improperly constructed object is really powerful.
AJ: I love type safety. I hate.
DAN: Yeah, that's the thing. It's TypeScript-based. You can't get away from it on this call.
AJ: I don't see any typescript here.
DAN: Yes. You well,
AJ: I'm looking at the cheat sheet. I don't see any typescript.
DAN: No, but again, if you go to the homepage and you search for type safety... actually, you go to "End-to-End Type Safety", "Introducing End-to-End Type Safety", you will see that it passes a string age instead of a numeric age, and you get the type error at build time.
AJ: Yes, but if you pull that up, there's no TypeScript there. That's using the... I forget what it was called, but
DAN: Yeah,
AJ: it's the type object, t.Object, t.String, t-dot-whatever.
DAN: Yeah, but again, it's a type object. But that's why I said that it's like Zod, because, yeah, it does type checking, you know, both at build time and at runtime.
AJ: No, I'm a hundred percent on board with this. This is JavaScript. It's easy to read. There's no extra syntax to learn. No, I'm a hundred percent on board with this.
DAN: Oh, you mean the fact that you write t.Number instead of doing the explicit typing. That's the thing that you like?
AJ: Well, no, it's not that I like it. I'm just on board with it. I don't dislike it. So I'm not saying I want all my code to look like this. I'm saying this doesn't bother me. This is straight JavaScript. There's no magic here. I like JSDoc; I use JSDoc to get my type safety. I use the TypeScript checker with JSDoc. I don't see any JSDoc here, I don't see any TypeScript here, but this type safety is sane. It's easy to read, it's easy to write, it's intuitive. I don't think that anybody would come across this, whatever their background is, and think, oh, this is difficult. This is intuitive. You would be productive with this in a matter of seconds.
DAN: But the nice thing is that because of the way that tools like VS Code work, even if it was like client.js instead of client.ts, you could still probably get the type checking to work.
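The end-to-end check Dan is pointing at on the homepage boils down to something like this: the body schema declared on the server flows through Eden to the client, so passing a string where a number is expected fails at build time, before any request is sent. This is a sketch; the exact client API depends on the Eden version.

```ts
// server.ts
import { Elysia, t } from 'elysia'

const app = new Elysia()
  .put('/profile', ({ body }) => body, {
    body: t.Object({ name: t.String(), age: t.Number() }),
  })
  .listen(3000)

export type App = typeof app

// client.ts
import { edenTreaty } from '@elysiajs/eden'
import type { App } from './server'

const api = edenTreaty<App>('http://localhost:3000')
await api.profile.put({ name: 'Aom', age: 25 })      // OK
// await api.profile.put({ name: 'Aom', age: '25' }) // build-time error: age must be a number
```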
AJ: So, I was going to call you Salty, because that's the first part of your handle there. But hold on, let me... it's just Aom, right?
SALTY: Yes, yes, it's Aom. Okay, Aom.
AJ: Okay. Do you have compatibility with JSDoc, or does the type checking work with just regular JavaScript and JSDoc? I do notice now that it shows this file ends in .ts, but I don't see any TypeScript here. This all looks like regular JavaScript to me.
SALTY: Actually, I'm not sure if it works with JSDoc, because if it is typed as TypeScript and then you add an explicit type, it is going to favor the TypeScript type over the JSDoc, if I remember correctly. So I'm not sure about that case. But if you type it in JavaScript, in a .js file, it should work as well. I don't actually recommend that, though, because Elysia does a lot of typing magic to make it work that way.
DAN: It should work, AJ. It just might not be as nice.
AJ: So it's preferred that you don't use JavaScript with this framework, but you use TypeScript.
DAN: If you want the type safety. You don't have to have the type safety.
SALTY: Not exactly. You can actually use Elysia with JavaScript and still get the type safety. But it will not be as good as with TypeScript, because Elysia has like a thousand lines of TypeScript to make that work, to make all the things be inferred so that you don't have to write types manually. But if you are using JSDoc and you document it line by line, it should work.
AJ: Do you have custom? This is the thing. Do you have custom tooling? Are you using TSC?
SALTY: We are using purely tsc. Actually, I wrote a lot of TypeScript types. I actually wrote a compiler for GraphQL at the type level too, or some tool like that, and then I used that experience to do the same thing with Elysia.
AJ: Okay, so if it works with tsc, it'll work in Elysia.
SALTY: Yes.
AJ: Okay. All right. Yeah. So then JSDoc will work. Okay. That's cool. I like that. Yeah. I hate TypeScript. I think it's just so much complexity. It's like, you've got to have a computer science degree to use TypeScript. You could be an idiot and use JavaScript, like... you know.
DAN: Yeah. Here's the thing, and it's a bit of an aside and I apologize for that, but still: TypeScript is kind of like herding cats, because, you know, JavaScript is like the poster child for dynamically typed languages, kind of like Ruby in that regard. It's really carefree and wild about how it uses types. And TypeScript really tries to wrangle that and make it possible to get all the JavaScript expressiveness with type safety. So you're going to get a lot of complexity in TypeScript in order to try to achieve something like that. Most TypeScript developers don't really need to deal with all this sophistication. It's mostly for library builders, people like Aom, who want to introduce type safety in this way. I'm guessing you did a lot of sophisticated TypeScript coding, which was really far from trivial, and kudos for that. But most, you know, run-of-the-mill TypeScript developers never need to deal with such sophisticated complexity.
CHARLES: All right, I'm going to push us to picks. I will also say that I like the wild west of the type system in Ruby. But yeah, let's jump in and do some picks. Before we do that though, how do people find you online?
SALTY: Oh, yeah, I do have a Twitter account, which has the same name as here, SaltyAom. You can find me on there, or otherwise you can go to Elysia and then to the Elysia GitHub; there are links to my GitHub profile as well. But for programming stuff, I'm usually most active on Twitter.
CHARLES: Awesome. All right. Well, let's go ahead and do our picks.
Hey, this is Charles Max Wood. I just wanted to talk really briefly about the Top End Devs membership and let you know what we've got coming up this month. So in February, we have a whole bunch of workshops that we're providing to members. You can go sign up at topendevs.com slash sign up. If you do, you're going to get access to our book club. We're reading Docker Deep Dive, and we're going to be going into Docker and how to use it and things like that. We also have workshops on the following topics, and I'm just going to dive in and talk about what they are real quick. First, it's how to negotiate a raise. I've talked to a lot of people who are not necessarily keen on leaving their job, but at the same time, they also want to make more money. And so we're going to talk about the different ways that you can approach talking to your boss or HR or whoever about getting that raise that you want and having it support the lifestyle you want. That one's going to be on February 7th. On February 9th, we're going to have a career freedom mastermind. Basically, you show up, you talk about what's holding you back, what you dream about doing in your career, all of that kind of stuff. And then we're going to actually brainstorm together, you and whoever else is there and I, all of us are going to brainstorm on how you can get ahead. The next week, on the 14th, we're going to talk about how to grow from junior developer to senior developer, the kinds of things you need to be doing, how to do them, that kind of a thing. On the 16th, we're going to do a Visual Studio, or VS Code, tips and tricks. On the 21st, we're going to talk about how to build a software course. And on the 23rd, we're going to talk about how to go freelance. And then finally, on February 28th, we're going to talk about how to set up a YouTube channel. So those are the meetups that we're going to have, along with the book club, and I hope to see you there. That's going to be at topendevs.com slash sign up.
CHARLES: Steve, do you want to start us off with a high point of the episode?
STEVE: Absolutely. All right. So, still missing my rim shots; we've got to get that figured out, Chuck. So, you know, in the past I've talked about all the different jobs I've had and how I got fired, you know, like when I used to work for the bank and got fired for pushing an old lady over when she asked me to check her balance. At one point I used to work for the circus, and I was a human cannonball until they fired me. And then recently I bought a chicken to make a sandwich. Turns out it just poops all over the floor and doesn't make sandwiches at all.
CHARLES: Right.
STEVE: And then, uh,
CHARLES: do you need your rim shots? I'm sorry.
STEVE: I do. I really need them. And then, um,
DAN: yeah, this is almost sad without the rim shots.
STEVE: Almost. And then, uh, I burned, you know, I'm working on losing a little weight and I burned 2000 calories yesterday. I left my food in the oven for too long.
DAN: Hmm. Yeah.
CHARLES: There we go. I feel like I'm doing this, right?
STEVE: I'll have to get a sound set up. Sound effect on my phone and hold it up or something,
CHARLES: right?
STEVE: Yeah. Those are my picks. All right. Dan, what are your picks?
DAN: Well, given the realities on the ground here, my main pick this week is going to be the anti-missile defense system that we've got going here in Israel. Just to put it in numbers: since this conflict began, Hamas has fired, and I'm excluding Hezbollah in the north, just Hamas has fired close to 10,000 rockets at Israel's population centers. So they're intentionally trying to hit our cities; Tel Aviv, or the more southern cities of Ashkelon and Ashdod, have suffered a lot of rocket attacks. And what makes it basically livable here in Israel is the fact that we've got an amazing rocket defense system. It's actually three-layered. We've got the Iron Dome as the layer that deals with the lower-flying rockets. You've got David's Sling, which deals with the mid-range systems. And then you've got the Arrow, which is for ballistic missiles, and which was actually used for the first time in a combat scenario, not an experiment or something like that, when the Houthis in Yemen fired a missile at Eilat; it was intercepted by an Arrow missile. Now, when I was in the army a long time ago, I was actually involved in the development of these systems, so I feel an extra level of gratitude and appreciation for what they've been able to achieve with them. It's pretty incredible. So that would be my pick. By the way, if there's one part of our economy that's benefiting from all this, it's the defense industries, because a lot of countries are now looking to buy these defense systems, because they've just proven themselves incredibly well in these times. So that would be my pick. Other than that, the war is still ongoing. And even after all this time, we still get more information and more videos and whatnot about what happened on October 7th. I actually watched a video that was released online; I saw that you also favorited it, Steve. I kind of wished I hadn't watched it, after watching it.
STEVE: Not the one with the girl?
DAN: Yes. Yeah.
STEVE: That's pretty... "amazing"? More like...
DAN: Yeah,
STEVE: ...there are better words.
DAN: It's pretty hard. It kind of scars your soul when you see stuff like that. It's really difficult.
STEVE: Yeah, that's what happens when you treat human life as disposable. I mean, as having no value.
DAN: Yeah. Anyway, it is what it is. So there's the war in Israel, and the ongoing war in Ukraine, which people tend to kind of ignore now. There's actually a lot of stuff that people are kind of ignoring because of the war here. Like, there's something on the verge of genocide in Darfur going on right now, and nobody's talking about it, because nobody cares about them, apparently. And whatnot. So yeah, sorry, those are my sad picks for today, and I apologize for that. Oh, how can I forget? Just when you think the whole world is doom and gloom, you've got some comic relief, like what's happening with OpenAI and Microsoft. Have you been following this? This is pretty nuts.
STEVE: It's head-spinning, just watching all the moves.
DAN: And a really fast recap: the board fired Sam Altman for no obvious or apparent reason, like overnight, as it were, for failing to disclose some unspecified information to the board. They literally informed Microsoft, which is the biggest shareholder in OpenAI, like a minute before the firing, so they literally didn't give them any time to respond to that. Microsoft stock then lost 4%, which is quite a lot given the size of that company. And then they appointed the CTO to be the interim CEO instead of Sam Altman, I forget her name, and the first thing she tried to do was to hire him back. So the board fired her as well. And then—
STEVE: Oh, that's why. I didn't realize that. I saw that she had been made CEO, but I didn't understand that was why she'd been let go.
DAN: Yeah, something along those lines. And then they brought in somebody else from the outside, and then, like on a Sunday night at midnight, Satya Nadella, the CEO of Microsoft, basically tweets that, first of all, we will work with the new management team at OpenAI, but, oh, we are hiring Sam Altman to be the CEO of our new AI division inside Microsoft, and he's bringing over everybody from OpenAI. So.
STEVE: I don't know if it's everybody, but.
DAN: Not all the good people.
CHARLES: That is the weirdest acquisition I've ever heard of.
DAN: Yeah, some people are now speculating that maybe Microsoft instigated this whole thing just to, you know, not be dependent on OpenAI as an external provider and just bring everything in-house. By the way, what is Microsoft stock doing today?
CHARLES: I would.
DAN: After all this? Well, it's up almost two and a half percent. So yeah, apparently the market likes that news. Anyway, it's the weirdest thing. Some people are actually speculating that maybe they're kind of grooming Sam Altman to replace Satya Nadella eventually. Like, I don't know. Anyway, it's funny times. So at least I'm finishing with an upbeat pick. So yeah.
CHARLES: Yeah, I just caught the tail end of that on Twitter.
DAN: Oh yeah, you've got to love Elon Musk's response to Satya Nadella's tweet about this, basically saying that now Sam Altman and his team will have to use Teams for their online conferencing.
STEVE: Yes. Oh, that was so good.
CHARLES: Poor souls. All right, AJ, what are your picks?
AJ: All right, so first and foremost, a metal shower head holder. I don't know how many people have this problem; it seems like anybody who buys a shower head with a plastic shower head holder would have it, because the plastic pieces break, right? So anyway, we bought a metal shower head holder, and the thing was almost the same price as a new shower head, but now we'll never have to buy another one ever again. And it even came with the Teflon tape that you need to make it go on smooth and not have any leaks and stuff. So yeah, metal shower head holder. I don't know why I didn't buy one of these months and months ago, because ours has been broken for a long time and we just kind of had it balancing in the shower just right. But the thing is, it's not a standard size, so I was really demotivated to go look for one, and of course the company we bought the shower head from doesn't have replacement parts, you know. But we got this one, I think it's like the only all-metal shower head holder available for purchase on Amazon. It's I think $27, but totally worth it. And no, it doesn't fit perfectly, because there are no standard sizes for these things, but it fits well enough. It's not going to fall out; it just doesn't have that perfect look of fitting all the way in. So that's one. Number two, which should be number one, is Super Mario RPG, because it's the best game ever! I mean, it rivals Link's Awakening. Maybe surpasses it. I don't know. That's a tough call. But...
DAN: I'm going to interrupt you for a second and just say that I have to drop off. So bye-bye to everybody, and Arm, thank you for coming on. It was super interesting, and it looks like a really great project, and all the best with it. So bye, everybody.
AJ: So Super Mario RPG, right? Just so, so good. Very faithful adaptation. I'm really upset that they did it in, I think it was Unreal, rather than a custom engine for the Switch, because the performance is terrible. There are tons of places, especially in the very beginning, where there are lots of, I don't know if cutscene is the right word, but kind of like cutscenes, where there are weird frame skips and stuff. That just hurts my heart, for a lot of reasons. But okay, that only affects you for like the first two minutes of the cutscenes, and then every once in a while it's noticeable. I just feel like a game this simple, where they literally just brought Super Mario RPG up to essentially GameCube-style graphics, I don't know why that can't run on the Switch perfectly at a measly 30 frames per second, but whatever. But it's a faithful adaptation. The music is so close that I really almost can't tell the difference when switching between classic music and modern music. Obviously the modern music is a little bit smoother, but the classic music was so doggone good. They really did a great job on the Super Nintendo to get that music to feel so full and robust. And it's just nice to be able to play it. I don't have to run an emulator on my GameCube to play Super Mario RPG anymore, which I wasn't really going to do anyway, because now that we have LCDs rather than CRTs, when you play those isometric Super Nintendo games, and Super Mario RPG is the only one that comes to mind, but trying to play an isometric game, is that what it's called, isometric? Anyway, the style of artwork that Super Mario RPG is, it's torturous to try to play on a modern TV through emulation on a GameCube or Xbox or whatever. So I'm glad they came out with it. Super excited to play it with my four-year-old, and she's having so much fun with it. You know, she put Mario up on a spinning flower and then just left him there to watch him spin around, and then he got dizzy and fell off, and I don't know if I knew that he would do that. It's just such a fun children's game. If you're a kid, like my four-year-old, and you do the things a four-year-old would do, you make all these interesting discoveries about the world. So yeah, I just love Super Mario RPG. It's a tie between that and Link's Awakening as to which is my favorite game of all time. Link's Awakening was super challenging with a great story; Super Mario RPG is just so fun and lighthearted, with great characters, just so enjoyable. And then, moving on, I'm also going to pick the Primeagen, because there was a JavaScript Jabber episode that made it into one of his reactions. I skipped a sentence when I said something, and then Lain, who was our guest that week, didn't understand what I had said. Anyway, there was this little bit of miscommunication about goroutines, and he ended up picking it up, and then a bunch of people referred me to that. And everything he said was absolutely correct, given the 30 seconds of the clip that he watched, because if you actually backed up, or knew that this was a JavaScript podcast and that it was goroutines compared to JavaScript, not goroutines compared to Python... But you know, goroutines.
Because it was goroutines compared to JavaScript async, not goroutines compared to Python's async. Anyway, whatever, his takes were great. And I started watching other videos of his, and I love just about everything he has to say. He looks like a younger guy, but I think he might be older than he looks, because at first I thought he was younger than me, but maybe he's even older than me. I don't know. But he has that wisdom of a seasoned developer who has gone through the hype and then come back around to simplicity, which is kind of what wins. So I just love his takes. I've watched several of his reaction videos now, and yeah, a hundred percent: my name is AJ O'Neil, and I endorse the Primeagen.
STEVE: Yeah. I just listened to him on another podcast, Whiskey Web and Whatnot. Interesting to hear him talk. He's pretty sharp. It's fun to listen to him.
AJ: Yeah, and he seems to have his hand in so many pots that sometimes I notice a thing he says that's not 100% technically accurate. But then again, I'm a person who skips an entire sentence while I'm thinking in my head and then moves on to the next sentence without actually saying the thing that connects the two, so I've got no room to speak on that. And then lastly, I started writing my first Zig program, and I found out that Zig is what Bun is written in. Zig is almost at the level of Go. So you get kind of the performance of Rust or, you know, if we're going to talk dirty words, C or C++, but you get a lot of ergonomics and safety. Not at the level of Rust, and it's not that it's meant to be more ergonomic than Rust, it just is more ergonomic than Rust. There are a few unique features to it; maybe I'll bring those up another time when I pick just Zig. But I think Zig is a language that, if you've used other languages that do some form of memory management, like Rust or C, or just generally modern systems languages, even Go if we want to put Go in that category, definitely if you've used Rust even to the point of getting slightly beyond hello world, or if you've used Go, then Zig is a language you can learn in a week. So Rust you can learn in a year, Zig you can learn in a week, Go you could learn in a weekend. That's kind of how I'd position them. JavaScript you can learn in a lifetime, if they ever stop changing the language, maybe. And I will say that I do believe I'm going to check out Elysia. I'm going to try not to get too excited about it like I did with Fastify, where I was telling everybody this is the bee's knees, and then I got a little bit into it and realized, oh, this is just far too complicated for the small gains you get. It's just not worth it. Elysia has the appearance of being pretty simple. So I am interested to try it out. I do have a project in mind that would be a good one to play with it on and see if it doesn't hurt me to use, which it looks like it won't; it looks like it might even help.
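[Editor's note: for listeners wondering what "pretty simple" looks like in practice, here is a minimal sketch of an Elysia server in TypeScript, based on how the framework is described in this episode. It assumes Bun and the elysia package are installed; the route path and port are illustrative choices, not anything prescribed by the framework.]

```ts
// Minimal Elysia server sketch (runs on Bun; install with `bun add elysia`).
// The "/" route and port 3000 are arbitrary, illustrative values.
import { Elysia } from 'elysia'

const app = new Elysia()
  // Returning a plain string from the handler becomes the response body.
  .get('/', () => 'Hello from Elysia')
  .listen(3000)

console.log('Elysia listening on http://localhost:3000')
```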
CHARLES: All right, I'm going to jump in with my picks here. I'm going to pick a board game first. Now, I've mentioned that I help teach games at TimpCon in Provo, and this is one of the games we taught. It was not my favorite game, but it was fun. I kind of put it on the level of: if somebody wanted to play it, I would say yes, but if somebody wanted me to pay for it, I would say no. It's called Astra. What it is is you have constellations and you're trying to complete them. They all have a starting star that you start on, and you trace out the constellation, and whoever finishes it gets the constellation in their hand. Then you can use the constellation's ability to do different things, and if you contribute to a constellation but you don't win it, you get a one-time boon for completing it. You can increase your capacity to cover more stars in a constellation, you can increase your capacity to hold more cards; anyway, there are different things. It's a relatively simple game. I think it took us about an hour to play. It has a BoardGameGeek weight of 2.19, and I went to try and find it on Amazon and didn't see it there. However, it is on Board Game Arena, so if you want to play it on the internet, you can do that. I do not pay for my Board Game Arena account, and I play games on there semi-frequently, so it looks like this one is free to play. So anyway, go ahead and check that out; I've really been digging it. I'm also going to throw in a couple more picks with some of the stuff I'm fiddling with. One of them: I switched my error tracking over to Honeybadger, honeybadger.io, and I've been pretty happy with that. I've also been getting some feedback on Top End Devs about stuff that isn't working quite right on the website, so if you're running into that, let me know. And then finally, we are going to be releasing the premium podcast and meetup memberships, so keep an eye out for that on Top End Devs. And that's what I've got. Arm, what are your picks?
SALTY: Okay, I have one. Around two days ago, Microsoft Thailand invited me to talk about GitHub. So I actually made an appointment with Microsoft to talk about it, right? I had to speak on stage, but apparently there was a little bit of a wrong mark in my calendar. On that day I had to speak at the Microsoft GitHub Universe after-party Thailand, but that same morning I had to go to another event; they were hosting a game event where, if you cosplay at their booth, they give you a lot of nice stuff in return for taking pictures with the people around there. So I had planned to cosplay at the game booth to get the stuff, right? And then I realized it was organized on the same day. So I had to ask Microsoft if I could maybe wear the cosplay costume to present at GitHub Universe Thailand, and apparently they allowed me to do it. The unique thing is that this GitHub Universe was organized in Siam. Siam is like the center of Bangkok, where every train line crosses, and we were talking in the biggest shopping mall there. So there were a lot of people walking around, and some people were like, oh, they have some cosplay on the stage, but they couldn't understand it at all, because we were talking about programming stuff. And apparently after that, a Microsoft employee actually DM'd me and said the crowd was really happy and participated more than usual. So they also invited me to another conference in January, and they asked me to cosplay again, to see if the crowd actually likes it or not. So that's just a kind of funny story that I have. Yes.
CHARLES: Awesome. That sounds like fun. I love cosplay, honestly. It's fun, especially with my kids. My kids really dig it. I'd love to give a talk in cosplay. Anyway. All right, well, this was fun. I'm gonna go ahead and wrap this up. Thanks for coming.