Hey folks, I'm a super busy guy and you probably are too. You probably have a lot going on with kids going back to school, maybe some new projects at work. You've got open source stuff you're doing or a blog or a podcast or who knows what else, right? But you've got stuff going on, and if you've got a lot of stuff going on, it's really hard to do the things that you need to do in order to stay healthy. And one of those things, at least for me, is eating healthy. So when I'm in the middle of a project or I just got off a call with a client or something like that, a lot of times I'm running downstairs, seeing what I can find that's easy to make in a minute or two, and then running back upstairs. And so sometimes that turns out to be popcorn or crackers or something little, or if not that, then something that at least isn't all that healthy for me to eat. The other issue I have is that I've been eating keto for my diabetes, and it really makes a major difference for me as far as my ability to feel good if I'm eating well versus eating stuff that I shouldn't eat. And so I was looking around to try and find something that would work out for me, and I found these Factor meals. Now Factor is great because, A, they're healthy. They actually had a keto line that I could get for my stuff, and that made a major difference for me, because all I had to do was pick it up, put it in the microwave for a couple of minutes, and it was done. They're fresh and never frozen. They do send it to you in a cold pack. It's awesome. They also have a gourmet plus option that's cooked by chefs, and it's got all the good stuff like broccolini, truffle butter, asparagus, so good. And you can get lunch, you can get dinner. They have options that are high calorie, low calorie, protein plus meals with 30 grams or more of protein. Anyway, they've got all kinds of options. 
So you can round that out, you can get snacks like apple cinnamon pancakes or butter and cheddar egg bites, potato bacon and egg, breakfast skillet. You know, obviously if I'm eating keto, I don't do all of that stuff. They have smoothies, they have shakes, they have juices. Anyway, they've got all kinds of stuff and it is all healthy and like I said, it's never frozen. So anyway, I ate them, I loved them, tasted great. And like I said, you can get them cooked. It says two minutes on the package. I found that it took it about three minutes for mine to cook, but three minutes is fast and easy and then I can get back to writing code. So if you want to go check out Factor, go check it out at factormeals. Head to factormeals.com slash JSJabber50 and use the code JSJabber50 to get 50% off. That's code JSJabber50 at factormeals.com slash JSJabber50 to get 50% off.
CHARLES MAX_WOOD: Hey everybody and welcome to another episode of JavaScript Jabber. This week on our panel we have Christopher Buechler.
CHRISTOPHER_BUECHELER: Hey, it's Chris from CloseBrace.com coming to you from Providence, Rhode Island.
CHARLES MAX_WOOD: AJ O'Neill.
AJ_O’NEAL: Yo, yo, yo, I'm coming at you live. Cug with my pants off.
CHARLES MAX_WOOD: Yikes. Podcasting with your pants off and your video off, thankfully.
AJ_O’NEAL: Actually, it's not true. I have pants on.
CHARLES MAX_WOOD: Okay. I'm Charles Max Wood from DevChat.tv. Starting up a new, another new thingy. You can go check it out at MaxCoders.io. I'll probably do an episode about it at some point. So this week, our special guest is Valeri Karpov.
VAL_KARPOV: Hi, everyone. My name is Val. I work on Mongoose. I'm coming to you live here from San Mateo, California.
CHARLES MAX_WOOD: Nice. Do you want to just remind people who you are, why you're famous?
VAL_KARPOV: Yeah, sure. This is probably my fourth or fifth appearance on JavaScript Jabber. I'm the maintainer of Mongoose, the most popular ODM for Node.js and MongoDB. I've started a few companies, most notably LevelUp, which got acquired by Grubhub last year. Right now I work for a tech company here in the Bay Area called Booster Fuels. We deliver gas to people while they work, here in the Bay Area and in Dallas, Texas.
CHARLES MAX_WOOD: Nice.
One of the things that I find that we talk a lot about at the different conferences and the different things that I'm working on is open source software. And a lot of people have a lot of ideas around open source software, but we don't often think about the people who are building it and trying to maintain it. I had a friend, John, who came to me. He's been a guest on JavaScript Jabber a couple of times. He came and he actually said, Hey, Chuck, I wish there was a show about sustaining open source. That really hit me where I live and I have a few other friends who are working on projects related to this. So we all got together and we put together a show called Sustain Our Software. You can find it at sustainoursoftwarepodcast.com. It's a place where several people who are passionate about open source come together and have conversations about how it can be sustained and how it can be maintained and what we can do to help these maintainers continue to deliver us value that we build our software on. Most of the software we're building is based on open source, and so it's important to us to have that maintained and have it taken care of. Come check it out. It's been really interesting to listen to the conversations that they're having from people who are working in it all the time and just hear what they have to say about it. Once again, that's at sustainoursoftwarepodcast.com.
CHARLES MAX_WOOD: It sounds like you're doing a lot of interesting stuff. We brought you on today to talk about debugging with Async Await.
VAL_KARPOV: Yeah. It's a challenging topic, and I think a lot of frameworks right now don't really have good support for async functions. A classic example is that React error boundaries don't really work if you throw an error in an async function.
CHARLES MAX_WOOD: Gotcha.
AJ_O’NEAL: Do they work well with promises?
VAL_KARPOV: Async functions are mostly indistinguishable from a function that returns a promise. So no, they don't work well with promises either, as far as I know.
AJ_O’NEAL: Okay, that's what I thought. That's why I was confused because I was like, wait, well, how is this different than just using a normal promise?
VAL_KARPOV: I think Vue actions actually do work well with async functions, but Vue support for async/await in my experience has been a little spotty, and with React it has been largely non-existent. And then, of course, Express. Express 4 famously just hangs forever if you throw an error in an async function. Express 5, which is still not released as of this recording, does promise to support it, I think, but I haven't actually tried it yet.
AJ_O’NEAL: I have never had these problems.
VAL_KARPOV: You've never had an Express route handler just hang on you for no reason?
AJ_O’NEAL: Well, no, because I always have a catch and then I just return an error. And in fact, what I do in a lot of cases is I just have a little wrapper that I pass in, kind of like middleware, almost. Well, I guess not almost, but yeah. And then any time that something throws or returns a rejected promise, I just handle it with an error code.
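The wrapper AJ describes might look roughly like this (a hedged sketch with hypothetical names, not AJ's actual module):

```javascript
// Hypothetical sketch of the wrapper AJ describes: it adapts an async
// Express-style handler so that anything it throws, or any promise it
// rejects, becomes an error response instead of a hung request.
function wrapAsync(handler) {
  return function (req, res, next) {
    // Promise.resolve() also catches synchronous throws from the handler
    Promise.resolve()
      .then(() => handler(req, res, next))
      .catch(() => {
        // A real app might call next(err) or log the error; this sketch
        // just returns a generic error code.
        res.statusCode = 500;
        res.end('Internal Server Error');
      });
  };
}

// Usage (assuming an Express app and a loadThings() helper):
// app.get('/things', wrapAsync(async (req, res) => {
//   res.end(await loadThings()); // a rejection here is handled above
// }));
```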
VAL_KARPOV: Oh, so you just do try catch or do you use dot catch?
AJ_O’NEAL: I use .catch. If I use try/catch... well, I don't know if the way async/await converts to promises is different from a normal promise, but when I use try/catch, that works as well. Because if something throws inside a promise's executor, the promise will just wrap it as a rejection. So if you create a promise with a resolve, and then something throws in what's being resolved, using .catch is the same as if you did a try/catch; you just don't have to write as much code.
VAL_KARPOV: Yeah, yeah, I know. I like catch better than try catch because I find it to be more composable.
AJ_O’NEAL: Yes, exactly, exactly.
VAL_KARPOV: And also more robust. There are two issues that I find with try/catch and async/await, and I see a lot more people these days using try/catch with async/await, and I don't really like it. What they end up doing is they have an async function where the first line is a try, the entire body of the function is in the try block, and then they have a catch block. I'm not the world's biggest fan of that for two reasons. Number one, if you return a promise that rejects within a try/catch, the catch block won't execute. That one is a nasty gotcha that's easy to get bitten by. And the other issue is that you end up with an unhandled promise rejection if your catch handler throws. If you use .catch as opposed to try/catch, it's easy to just chain at the very end of all of your function calls: just add a catch handler at the end that throws an error and kills the process if there's an unhandled error.
CHARLES MAX_WOOD: So I don't know if I'm completely following. What's the difference between a try/catch and a .catch? Because I mean, I've put try/catch around stuff and it does what it's supposed to most of the time. But yeah, with asynchronous stuff, sometimes there's funky things that happen. So, you know, pardon the newbie question, I guess.
VAL_KARPOV: So broadly speaking, there are two ways to handle all errors in an async function without leading to an unhandled promise rejection. One is you wrap the entire body of the async function in a try/catch. That approach has a couple of limitations. The other approach relies on the fact that calling an async function always returns a promise, right? So you can just call .catch on that promise to handle any errors that occur in the function body. And those can be synchronous errors, asynchronous errors, whatever; as long as you are awaiting on all your async operations, .catch can handle any error, synchronous or asynchronous, that occurs within the function body. One of the key differences, though, is that if you return a promise within an async function and that return is wrapped in a try/catch, the catch block won't get called if that promise is rejected. Whereas if you call .catch on the promise that the function returns, you'll actually catch that error.
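A small sketch of the gotcha Val is describing (failsLater is a hypothetical stand-in for any operation that returns a rejecting promise):

```javascript
// A stand-in for any operation that returns a rejecting promise.
function failsLater() {
  return Promise.reject(new Error('Oops'));
}

async function withTryCatch() {
  try {
    // No await: the rejecting promise is returned as-is, so the catch
    // block below never runs and the rejection escapes to the caller.
    return failsLater();
  } catch (err) {
    return 'caught inside';
  }
}

async function withReturnAwait() {
  try {
    // With `return await`, the rejection surfaces inside the try block,
    // so the catch does run.
    return await failsLater();
  } catch (err) {
    return 'caught inside';
  }
}

// The .catch alternative Val prefers: chain once on the promise the
// async function returns, e.g. withTryCatch().catch(handleError).
```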
AJ_O’NEAL: So the one place I found this to be tricky and unintuitive is in the rare case where you actually have to call new promise and have resolve and reject. You can get unexpected behavior because if you return from another promise inside of that, it doesn't bubble up. So I try to avoid that and only scope those things to like really, really tiny things. Like the classical example being set timeout. You need a promiseable timeout. So I just create a function that only uses new promise within the very, very, very small scope where it's very, very, very well defined, like for example, around the set timeout and call my resolve or reject, but then everything else I have chained in such a way that there's never a possibility for it to, to enter into one of those exceptional cases where you have the constructor style promise that behaves differently from the normal promise. That's one gotcha I've been bit by and I try to always avoid.
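The promisified setTimeout AJ mentions might look like this. It's a sketch; the point is that new Promise stays confined to a tiny, well-defined wrapper:

```javascript
// Keep `new Promise` confined to the smallest possible scope: a
// promise-returning setTimeout. Everything else chains on the result.
function delay(ms, value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), ms);
  });
}

// All further logic uses plain .then/await, never the executor:
// await delay(100, 'done');
```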
VAL_KARPOV: I think the only difference between the constructor promise and a promise that's returned by a library, though, is that the constructor promise is entirely under your control, so it's easy to make some mistakes. For instance, let's see here. My favorite JavaScript interview question right now, which I'm in the process of retiring. Are you familiar with Node streams?
AJ_O’NEAL: Streams one, two, or three?
VAL_KARPOV: It doesn't really matter that much, but call it streams three.
AJ_O’NEAL: It does matter, but go on.
VAL_KARPOV: But for the purposes of this exercise, it doesn't really matter that much, because you're not worrying about backpressure or anything like that. The question is basically: implement a function called streamToPromise that, given a stream, returns a promise that resolves to the concatenation of all the data chunks emitted by the stream, or rejects if the stream emits an error event.
AJ_O’NEAL: Okay. One more time. That was a lot of words.
VAL_KARPOV: Yeah, it is a lot of words. It's easier to see in code. So at a very high level, a Node stream is an event emitter that can emit three events: 'data', 'end', and 'error'. 'Data' means there's a new chunk of data available; say you're reading something off the file system and it's read the next line in a huge text file. 'End' means the stream is done; there are no more 'data' events going to be emitted. And 'error' means some error occurred. The exercise is: given a stream, return a promise that resolves to the concatenation of all the 'data' events. So if I have a stream that emits 'data' events with values A, B, and C, and then emits an 'end' event, the promise resolves with the value ABC.
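One possible answer to the exercise as stated (a sketch assuming string chunks, not necessarily Val's reference solution):

```javascript
// Resolve with the concatenation of all 'data' chunks, reject on 'error'.
function streamToPromise(stream) {
  return new Promise((resolve, reject) => {
    let result = '';
    stream.on('data', (chunk) => { result += chunk; });
    stream.on('end', () => resolve(result));
    stream.on('error', (err) => reject(err));
  });
}
```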
AJ_O’NEAL: Okay, two things here. Typically, you would not ever want to resolve a promise with the result of the data. You'd want to resolve the promise once it's been handled already, and have some sort of pipe inside, right? Because otherwise you're building up all that stuff in RAM, and your server is going to crash because you're getting a gigabyte file in RAM and you've got a 512 megabyte instance or...
VAL_KARPOV: So, you'd be surprised. There are several Node modules that only expose stream APIs as opposed to promise APIs. And if you're just looking at a small file, it's often convenient to just convert it into a promise as opposed to just using it as a stream. I've used this several times. I remember there's a CSV library that I use a lot, where I have a relatively small CSV file, but it's got double quotes and escaping and stuff, and I just don't want to deal with that, so I'm just going to use the CSV library. The CSV library only supported streams, and the file's too small to justify streaming. So it's actually an exercise that I've done myself more than a few times in my day-to-day.
AJ_O’NEAL: Yeah. I just want to clarify, because that's one of those things where it's really simple to get it right and really simple to get it wrong, and the difference can be catastrophic when you actually go to deploy. And just for our listeners that don't know: don't ever use the data event except in the cases Val's talking about. Don't ever use the data event. Only use the readable event. If you use the data event, you will crash your server, you will cause network latency, things will go bad for you. Never use the data event. Only use the readable event.
CHRISTOPHER_BUECHELER: Oh, interesting. I don't even actually know what the readable event does.
AJ_O’NEAL: So data always pushes data no matter what. You can't stop it. Well, you can, but you have to implement complex logic with pause and resume. So technically you can stop it. But with readable, when you get a readable event, it is up to you to call read() inside of the event handler. You get back a chunk of data, and you call read() until read() returns null, meaning that it's empty, which actually can happen on the very first read if it's EOF. That's what pipe is going to be using under the hood. Pipe is efficient, and it makes sure that it manages backpressure and you're not just loading stuff into memory. But when you use data, you're just loading stuff into memory without any regard, and it doesn't matter how fast or slow the thing is on the other end; you're just piling it up inside of RAM. But when you use readable, you're completely in control. And then the write function will return true or false to let you know whether the buffer is full. So you can very simply put data in when there's data to be read, you have capacity for it, and the thing on the other side is ready to handle it. And you can just loop and wait when that's not the case. So you'll never have a memory problem, because you're never stampeding. Basically, data creates a memory stampede.
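The 'readable' pattern AJ describes might be sketched like this (the helper name consumeReadable is hypothetical):

```javascript
// Pull-based consumption: on each 'readable' event, call read() until it
// returns null, instead of letting 'data' push chunks at you.
function consumeReadable(stream, onChunk) {
  return new Promise((resolve, reject) => {
    stream.on('readable', () => {
      let chunk;
      // read() returns null once the internal buffer is drained
      while ((chunk = stream.read()) !== null) {
        onChunk(chunk);
      }
    });
    stream.on('end', resolve);
    stream.on('error', reject);
  });
}
```

A real consumer would also check the destination's write() return value before pulling more, which is the backpressure handling AJ mentions.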
VAL_KARPOV: Yeah, I guess that's a case where you can actually mix a more imperative style into dealing with a stream, right? So instead of just having a push stream that's pushing data to you, you can actually kind of say, hold up, don't push the data to me, I'll pull it.
AJ_O’NEAL: Yeah, and there's another module called pull streams, but I digress. A lot of people prefer that. A lot of people prefer that to the Node stream.
VAL_KARPOV: Yeah, honestly, I never really liked streams as a data model for reading from the hard drive anyway, because I always thought that reading from the hard drive should be more of a pull operation than a push. You should be able to explicitly say, await the next chunk, as opposed to having a data handler that just reads chunks as they come in. Because in theory, you have control over the hard drive as a programmer. You shouldn't be reacting to what the hard drive does; the hard drive should be reacting to what you tell it to do. So, imperative style as opposed to reactive style.
AJ_O’NEAL: Agreed.
VAL_KARPOV: Yeah. It's actually something I've been writing about a fair bit. I've been working on kind of a new JavaScript tutorial site, masteringjs.io. And I think one of the recent emails I sent out was about reactive programming versus imperative programming, and kind of why I see JavaScript going more imperative these days, because async/await is just fundamentally a very imperative pattern. And imperative programming is, let's say, easier for less experienced programmers to adopt, and easier for people who aren't JavaScript experts to contribute to. I think one of the biggest concrete benefits that we've gotten from using async/await here at Booster is that developers who aren't JavaScript developers, like say our iOS dev who works on our iPhone app, can contribute to the JavaScript code without having to worry too much about, oh, what's an observable, how do I do this whole callback thing? He just kind of does for loops and if statements like he normally would in Objective-C or Swift.
AJ_O’NEAL: So one thing I actually haven't tried, because it just seemed weird and wrong: in a normal for loop, does await work in a strange way where that actually works? Or does it work like a promise, where that does not work? Like when you do for x and y, does await actually stop the loop? Or is it just transpiling like a promise and it doesn't stop the loop?
VAL_KARPOV: No, it suspends execution of the function until the promise is resolved. So if you await on a timeout within a for loop, the loop won't continue to execute until the timeout is done.
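A quick sketch of what Val means: each await inside a plain for loop really does pause the loop until the promise settles.

```javascript
// A promise-returning timeout, used to show that await pauses the loop.
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function runSequentially() {
  const order = [];
  for (let i = 0; i < 3; i++) {
    await delay(5); // the loop body suspends here every iteration
    order.push(i);
  }
  return order; // always [0, 1, 2], strictly in order
}
```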
AJ_O’NEAL: That is trippy. I've just done Promise.all in those cases, because intuitively, to me, it makes perfect sense, and I know what it's doing without having to think, is the compiler going to work this way or that way. I know when I do await Promise.all that it's doing exactly what I think it should, so that's how I've done that.
VAL_KARPOV: Yeah, it's funny. I see a lot of people saying that they use await Promise.all a lot. I find myself doing the opposite: just doing a for loop and then awaiting one by one. I think it ends up being more because I work with MongoDB a lot. One of the odd quirks of MongoDB, or really a lot of databases, is that they can only execute so many operations in parallel. So if I just send 100 update requests in parallel, I'm going to choke up the database. And I'm willing to sacrifice a little bit of performance, or a little bit of responsiveness on an update, for making sure there's consistent throughput and I'm not choking off the database for somebody else.
AJ_O’NEAL: So, granted, that makes perfect sense for that. I wrote a module called batch async that is very, very light. It's like 15 lines long or something. Anyway, it's super light. But it sounds like with await in a traditional old-school C-style for loop, not a more modern JavaScript-style forEach loop, it sounds like that works, but you're limited to just one at a time. So I wrote this module, batch async, to be kind of like not as greedy as Promise.all, but not as constrained as that scenario, where you specify, run up to 10 of these at a time, or whatever.
VAL_KARPOV: Yeah.
AJ_O’NEAL: If anybody has that scenario and you want to find the happy medium between the two, where you're not causing problems in either direction: you're welcome.
VAL_KARPOV: That's pretty great. It's kind of similar to the async library, which has a parallelLimit function that does something similar. You basically give it callbacks and a number of these functions that it should execute in parallel at any given time. So let's say you set parallelLimit to two and you have 10 functions: it executes the first two, and once one of them is done and calls its callback, it kicks off the next one in the list. Does that sound about right?
AJ_O’NEAL: Yeah. I'll actually link in the show notes, for people that are interested in how that works and just want to learn because they enjoy learning, to a blog post I've got that breaks down how that works, and why, and what the hangups are if you try to implement it yourself, and that kind of thing.
VAL_KARPOV: Yeah.
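The run-at-most-N-at-a-time idea they're describing can be sketched with promises (similar in spirit to async's parallelLimit and AJ's batch async, but not either one's actual code; names are hypothetical):

```javascript
// Run async task factories with at most `limit` in flight at once.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  // Each worker pulls the next unstarted task whenever it finishes one.
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // safe: this line runs synchronously between awaits
      results[i] = await tasks[i]();
    }
  }
  const workers = [];
  for (let w = 0; w < Math.min(limit, tasks.length); w++) {
    workers.push(worker());
  }
  await Promise.all(workers);
  return results;
}
```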
CHRISTOPHER_BUECHELER: Have you tried async iterators yet, AJ?
AJ_O’NEAL: So to be honest, I didn't try async/await until this past week, because to me it just is so unintuitive from the way that JavaScript works. And it was slower. That was a great excuse, because any time you can say something is slower, everybody just listens to you; it doesn't matter whether you're right or wrong. Saying faster or slower is like a trump card; it doesn't matter whether it's practical, doesn't matter anything. But our last guest told us that they actually finally made async/await faster than promises for most use cases, for the typical way that people write async/await.
CHRISTOPHER_BUECHELER: Oh, was your previous guest on the V8 team?
AJ_O’NEAL: No, no, but she had just done research about Node 12, and basically she said they made it two microticks faster, whatever that means. Which I'm assuming means that because you don't have to declare the variable, you don't have to go through the branching statements when you generate the code, down in the VM compiler. If you're using async/await, there are slightly fewer things that you have to do than if you were to do a promise, so you can skip those steps. Like, this kind of makes sense: just having to read whether or not you put a function name on the anonymous function that's in the promise is going to take some small, insignificant amount of time when it's parsing and compiling. Because if the name is there, then it has one branch; if the name isn't there, it has another branch; and if the name is there, it needs to look for that name reference somewhere inside the function. There are things like that that are insignificant and don't matter. And I'm guessing that a microtick is somewhere similar to like five clock cycles, and billions of those happen in a second. So I don't think it's actually practically faster, but somebody used the trump-card word, faster, and so I was like, all right, war's over, I guess I'll have to use this. But I found out that a lot of browsers still don't, the feature phone browsers in particular, they still don't support async/await, and so I have to go use Babel. And I've been really reluctant about Babel, because I just like writing JavaScript. I like the old days. I'm a curmudgeon. But anyway.
CHRISTOPHER_BUECHELER: I mean, would you believe I'm a developer who hasn't used Babel, or at least not for anything other than occasional JSX, since, like...
VAL_KARPOV: I haven't used it since 2016 for anything else. I don't really use Babel for transpiling anything, because I haven't really done much that needs it. Most of my browser-side apps are just internal tools that are meant to run in Chrome, and if you're not using a recent Chrome, well then, please upgrade. Usually it's for people who are internal to the company, so I can tell them to upgrade. And then for consumer-facing stuff, I often end up doing just static sites, just HTML.
AJ_O’NEAL: That's interesting to hear, because I just was kind of under the assumption that everybody else in the world other than me and Christopher were running Babel every day.
VAL_KARPOV: At Booster, we used to use Babel when I first started, for transpiling our Node.js server code. But we kind of stopped around 2016, once we upgraded to Node 6. Oh no, I think we actually stopped transpiling before Node 6. The only reason why we used Babel was just for destructuring assignments in Node 4, back in the day. Then we just decided, okay, we're not going to use destructuring assignments until they're actually supported in Node, because it's just not worth the headache of maintaining Babel just for that.
CHRISTOPHER_BUECHELER: Yeah, I think I used to have to run transpilers all the time. And I mean, I still technically do, because I work in React all the time, so like you said, with JSX transpiling, you kind of need it. But in terms of running them for supporting ES6 features, it's just not an issue I run into very frequently anymore. The newer versions of Node support virtually all of them. Most modern browsers support virtually all of them. So unless you're using some pretty obscure stuff... I haven't had to work with Babel a lot lately.
AJ_O’NEAL: So the reason I say that is the baseline Android phones, like the one that you go get at the Cricket store, they run what's called UC Browser, or the Android browser, which doesn't support it. The Android browser is pretty much dead; I don't think it's going to get any more updates. I think they're just going to let all those feature phones die out over the next few years. And the phones that people use in India and China also don't support it. Like, people that don't have the Samsung Galaxy or the iPhone, the $700 phones; people that have the $50 phones are still stuck with browsers that would require a Babel compile in order to use it. Which is a significant chunk of the population of the world, if you're not concerned with just the United States and the richer parts of Europe.
CHARLES MAX_WOOD: Certainly true.
VAL_KARPOV: Yeah, makes a lot of sense. If you're building a React app that targets a very broad market, you're definitely going to need Babel. That's just not what I'm working on these days. These days I'm actually doing a lot of Vue, and I'm working on primarily just internal tools. One of the great benefits of Vue: you don't need a transpiler, you don't need a bundler. We end up using Webpack just so we can actually use require, but in theory you can build a rudimentary Vue app without Webpack or anything else.
CHRISTOPHER_BUECHELER: So on the topic of frameworks, I'm curious. You said earlier, and in some of your documentation for the episode, that frameworks in general don't handle async/await particularly well, and specifically React, which, like I said, I work in a lot, handles it very poorly. What are some ways to mitigate that? How do we get around that?
VAL_KARPOV: I mean, it's a hard question for the React team. Have you worked with React Suspense at all?
CHRISTOPHER_BUECHELER: Still new to it. Haven't really touched it yet.
VAL_KARPOV: Yeah, I haven't really touched it yet either. I've read a little bit about it, and that kind of promises to be how React supports async/await: I think you throw a promise, and that kind of cancels the render until the promise is resolved or rejected, which is a little strange. But on the other hand, it would be pretty great if you could just have an async componentDidMount function and React figures that out for you. Because there are error boundaries; componentDidCatch will handle uncaught synchronous errors within render, right? But if your render function is async, or your componentDidMount is async, the error boundary and componentDidCatch won't help you.
CHRISTOPHER_BUECHELER: I have had to jump through hoops numerous times to deal with trying to do stuff on component mount that's asynchronous, because of that. You end up writing sub-functions to run just so that you can use async/await in them, because React will complain, or at least ESLint will complain, if you try to use async/await in your componentDidMount.
VAL_KARPOV: Yeah. I always think it wouldn't be that difficult to make it so that an async error at least bubbles up to an error boundary if your componentDidMount function is async. But on the other hand, I don't work on React, so I can't really say how difficult it is. Vue ends up doing a pretty decent job of that, in that if an async Vue method throws an error that's not caught, it at least bubbles up as an exception as opposed to just being an unhandled promise rejection. So it's doable, I think. I'm just not sure what's blocking React from doing it. What really is a head-scratcher for me is why it's taken Express so long. It's been four years and change since promises were introduced in ES6, and they really don't do automatic promise handling for you. And it's something that they could easily implement. I did write a tutorial a few years back entitled Write Your Own Express from Scratch. It was relatively short, because the fundamental ideas of Express are relatively simple. It's all just middleware. But the fact that they don't handle promises returned from middleware functions is a bit of a head-scratcher.
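The Express gap Val mentions is commonly papered over with a one-line wrapper that forwards promise rejections to next(). A sketch, simulated here without Express itself:

```javascript
// A tiny wrapper that forwards promise rejections from async middleware to
// next(), which is where Express's error handling picks them up.
const wrapAsync = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// In real Express code this would be used like:
//   app.get('/user', wrapAsync(async (req, res) => { ... }));
// Here we just build a wrapped handler so it can be exercised directly.
const handler = wrapAsync(async (req, res) => {
  throw new Error('boom'); // becomes a rejection Express would never see on its own
});
```

Calling `handler({}, {}, next)` delivers the 'boom' error to `next` instead of leaving an unhandled rejection.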
AJ_O’NEAL: It's the divisiveness of the JavaScript community, a word I can never say right. They're so strong about their opinions. They want to make sure that people who have a different opinion have a bad time. And I agree.
VAL_KARPOV: Uh, I guess we can talk about canceling promises and whether that's a, uh, whether that's actually valuable. That seems to be a very common talking point on Twitter. Everyone likes to complain that, oh, promises are unusable because they're not cancelable.
AJ_O’NEAL: I totally don't understand this argument. I have not seen this stuff happen on Twitter, but you're the second or third person that's mentioned something about this. And again, I have never experienced this problem. I do not know what people are talking about.
VAL_KARPOV: Yeah, to be honest, I have never actually been like, I want to cancel a promise. It has never happened to me. And the problem is that even if you're using RxJS, there is no way for you to actually cancel an async operation in the general case. Let's say you're using Angular 2, or Angular 8, whatever it is now, and you are using RxJS, which has the ability to cancel an async request, and you send an HTTP request that's wrapped in RxJS, right? If the HTTP request has already been physically sent onto the wire, as in it's already on the network, the way that RxJS cancellation works is that it just unlinks the handler for the request. The request is still out there. It's still going to the server. The server will still get it. The server will still send a response. You just won't be listening for the response. So it's cancellation in that you're adding a special function called cancel that just lets you ignore the response. Ditto if you're working with MongoDB, right? I suppose it's possible to cancel an operation on the MongoDB side, but with the MongoDB driver, for instance, if you were to wrap it in RxJS, you'd really have a hard time coming up with a way to cancel an arbitrary operation, just because once the operation is in progress, like if you've already sent an update operation that's hit the MongoDB server, you can't really undo that unless you have a transaction in place.
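The "unlink the handler" style of cancellation Val describes can be sketched around a plain promise; the cancel method doesn't stop the underlying work, it only drops the result:

```javascript
// "Cancellation" as unsubscription: the underlying operation keeps running;
// cancel() only detaches your handler so the eventual result is ignored.
function ignorable(promise) {
  let ignored = false;
  const wrapped = promise.then((value) => {
    if (ignored) return undefined; // response arrived, but nobody is listening
    return value;
  });
  // This doesn't stop the request/query; it just drops the result on the floor.
  wrapped.cancel = () => {
    ignored = true;
  };
  return wrapped;
}
```

This is essentially what unsubscribing from an in-flight HTTP observable does: the server still does all the work.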
AJ_O’NEAL: So what is that? That doesn't have anything to do with promises in particular. That's just implementations of libraries. People generally don't implement a way to cancel something midstream.
VAL_KARPOV: Yeah, exactly. And canceling something in general is a very complex problem. It's something that like...Let's say you're executing a GET request against an API, right? Like what does it mean to cancel that? Like if it already got to the server and the server's already doing some work to execute the GET request, like if you sent a cancel, well then should it stop doing it? But then if you're already getting a response back from the server and that's in flight and you try to send the cancel, what should the server do?
This episode is sponsored by Sentry.io. Recently, I came across a great tool for tracking and monitoring problems in my apps. Then I asked them if they wanted to sponsor the show and allow me to share my experience with you. Sentry provides a terrific interface for keeping track of what's going on with my app. It also tracks releases so I can tell if what I deployed makes things better or worse. They give you full stack traces and as much information as possible about the situation when the error occurred to help you track down the errors. Plus one thing I love: you can customize the context provided by Sentry. So if you're looking for specific information about the request, you can provide it. It automatically scrubs passwords and secure information, and you can customize the scrubbing as well. Finally, it has a user feedback system built in that you can use to get information from your users. Oh, and I also love that they support open source to the point where they actually open source Sentry if you want to self-host it. Use the code devchat at sentry.io to get two months free on Sentry's small plan. That's code devchat at sentry.io.
AJ_O’NEAL: So two things. You want to optimize for the happy path, not the sad path. Very rarely in your application logic do you want to optimize for the sad path. Now, people talk about optimizing the sad path for the user experience, so that, you know, they get good error messages, and I totally agree. But in terms of code optimizations and resource optimizations, you want to optimize toward the happy path, not the sad path, because otherwise you're just going to write a bunch of code for stuff that rarely ever happens. So the first thing is, what optimization are you actually trying to achieve with this idea of canceling? Because the idea of just ignoring the response, to me, that's good enough for the user experience, right? You just want to say, hey, that thing that you did, we're not going to show you the data. And probably half the time you could still cache the data or something in case they go back to do that operation again. The second thing is why? What are you really trying to solve for in the cancellation process? So, you know, why optimize for the sad path? To me, this sounds more like a theoretical problem than a real problem. And that might be marginalizing some people, but I'm a marginalizer.
VAL_KARPOV: I mean, I have had one practical or one case in working on Mongoose where I have-
AJ_O’NEAL: In how many years?
VAL_KARPOV: Oh, in over five years, five and a half at this point.
AJ_O’NEAL: Five and a half years, you have one practical case.
AJ_O’NEAL: Okay, go on.
VAL_KARPOV: And it's even a stretch in terms of a practical case. I got a bug report recently where, so Mongoose has, what do you call it, change detection on a single document. So you type document.a equals five, and then call document.save, and then we send an update setting document property a to five to the database, right? But then what happens if you do doc.a equals five, doc.save, and then in the same tick you modify another property and call .save on the document again, within the same tick of the event loop, like synchronously? In Mongoose, it ends up being that the first save succeeds, the second save throws an error that says you're trying to save the same document multiple times in parallel, but both updates end up going to the database. And I'm not quite sure how I can work around that without actually canceling a promise, or doing some sort of snapshotting, to make that behavior kind of make sense.
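A rough sketch of the behavior Val describes, using a hypothetical document class rather than Mongoose itself: the first save flushes on the next tick, so it picks up both changes, while the second save in the same tick errors out:

```javascript
// Hypothetical document class (not Mongoose) reproducing the reported quirk:
// save() flushes the changed data on the next tick, so a second property set
// in the same tick rides along with the first save, while the second save()
// call is rejected as a "parallel save".
class Doc {
  constructor() {
    this.data = {};
    this._saving = null;
  }

  set(key, value) {
    this.data[key] = value;
  }

  save() {
    if (this._saving) {
      return Promise.reject(new Error('Document already being saved'));
    }
    this._saving = new Promise((resolve) => {
      setImmediate(() => {
        this._saving = null;
        // Snapshot taken at flush time, so it includes *all* changes so far.
        resolve({ ...this.data });
      });
    });
    return this._saving;
  }
}
```

Because the snapshot happens at flush time, the first save "succeeds" with both updates even though the second save threw, which is exactly the confusing outcome described.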
AJ_O’NEAL: What about debouncing?
VAL_KARPOV: Debouncing is one thing, but then it becomes, should both save operations succeed? Should the first save operation throw an error? It seems counterintuitive that the second save operation throws an error despite the fact that the updates that happened between the first save and the second save succeed.
AJ_O’NEAL: Win or wins, that's what I say.
VAL_KARPOV: Win or wins, that's a good one.
AJ_O’NEAL: Truth in data.
VAL_KARPOV: But yeah, that was one case where I thought, okay, maybe if I just quote unquote cancel the first save and just take the second save, it could make sense. But on the other hand, it's also something that we just generally don't want people to do. I don't think there's ever actually a case for you wanting to call save twice in the same tick of the event loop, just because, well, why call it twice when you can call it once, right?
AJ_O’NEAL: Yeah. So, I mean, what I'm hearing is you had a specific bug that required non-generic engineering to solve. This sounds like the kind of problem that there is not a way that you could say, well, we would cancel this event this way. This is something that requires a knowledge of, well, in your case specifically, you're saying you don't even know because you don't know what the user's expectation is, what cancel is supposed to mean in a generic sense in that case, because it's not generic, it's very specific. Is that right?
VAL_KARPOV: Yeah, it's a very specific case where cancellation might be helpful but I still haven't been able to wrap my head around what should the right behavior be for that particular use case. Intuitively, I kind of want it to be both save operations succeed, but on the other hand, what the implications of that are for the rest of the code base is always the tricky question.
AJ_O’NEAL: Okay, this is kind of what I think as well. When I've come across cancellation issues, it's very specific to that particular use case. And maybe there are some general cancellation semantics that could be implemented, but yeah, it's a pain, and it's not something that happens so often that I can't just be like, well, for this specific case, instead of returning a promise, I'll return an object that has a cancel method and a promise, and if you want to use the promise, then use the promise. Yeah, it's not beautiful. It's not beautiful, wonderful functional code that's hyper-composable. No, you have to make a special case for that one instance and you have to handle it differently. And there is the potential problem that, you know, you've got your red functions and your blue functions, and now you've introduced a green function, and you have to go change a bunch of other functions for this thing to be able to propagate. And that's, I think, where it's the most painful, and where I imagine people would have the argument for we need a generic way of canceling. But again, in my experience, it's so rare that I really need to focus on that sad path and optimize it that, you know, whatever. A bad thing happens and some human has to make a decision and deal with it most of the time anyway. It's not generic. I'm curious if anybody else has had experience with these times where you need to cancel something. Christopher? Chuck?
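AJ's ad-hoc approach, returning an object with both a promise and a cancel method, might look like this sketch, where setTimeout stands in for the real long-running work:

```javascript
// Return both the promise and a cancel handle, only for the one operation
// that needs it. setTimeout stands in for the real work, so here cancel
// really does stop the work (unlike the in-flight HTTP case).
function startJob(ms) {
  let timer;
  let settle;
  const promise = new Promise((resolve) => {
    settle = resolve;
    timer = setTimeout(() => resolve('done'), ms);
  });
  return {
    promise,
    cancel() {
      clearTimeout(timer); // stop the pending work
      settle('cancelled'); // let callers observe the cancellation
    },
  };
}
```

Callers that don't care about cancellation just await `job.promise`; the one caller that does gets `job.cancel()` without any generic cancellation machinery.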
CHRISTOPHER_BUECHELER:So for me, I think what you were saying about handling it really more on the front end is my general approach to that kind of thing. If you send out a request and then you don't want to do anything with the request, then just don't do anything with it. The data comes back and you no longer need it. There are relatively straightforward ways to just be like, okay, thanks, but I'm not going to render this or I'm not going to...It just feels to me like it's not something I deal with frequently in general. If I'm sending out async requests, a lot of it is XHR, you know, that kind of stuff, and I want whatever's coming back. So it's certainly not an issue I've run into frequently.
VAL_KARPOV: Have you played with async generator functions yet, AJ?
AJ_O’NEAL: Oh, so no, no, I'm the wrong person to ask. I don't go in for those generator and iterator things and all that stuff, but ask Christopher, he will.
VAL_KARPOV: Yeah.How about you Christopher? Have you tried async generator functions yet?
CHRISTOPHER_BUECHELER:So you'd think that, but actually no. I was going to ask about them as my next question. I'm curious what their benefit is.
VAL_KARPOV: So my favorite use case, or my favorite motivation, for async generator functions is kind of how I got started using them. The big project that I'm working on right now here at Booster involves basically a routing problem solver, a solver for a generalized traveling salesman problem. It runs for a very long time, and I need to be able to report on progress for this request that goes out over a WebSocket to actually solve this particular problem. So I want to be able to report progress and say, oh, it's 40% done; oh, I've actually loaded the distance matrix, and now I've sent the problem to the solver. Just progress reporting. So underneath the hood, the actual solve function that structures the data and sends the request over to the core solver is structured as an async generator function. It makes asynchronous requests to the database, makes asynchronous HTTP requests to gather data and send the request to the solver, but it also yields to report on its progress. And then every time it yields, I have kind of a framework around it that says, oh, okay, the async generator function yielded this thing that says I'm at this stage of the function, and it sends that back up over the WebSocket to the front end. So that's the motivation. But what makes it interesting and related to cancellation is that when an async generator function yields, you explicitly need to call next in order for it to actually pick up again. So you can kind of cancel an async generator function by not calling next based on what it yielded, and then just let the function get garbage collected. So if you say, like, yield a cancel token or something like that, you can have a framework around it that looks for, oh, did the function yield the cancel token?
If it did, I'm not going to resume it.
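The pattern Val describes can be sketched like this: an async generator yields progress messages, and a small driver "cancels" simply by not resuming after a particular yield. The CANCEL token and the stages here are hypothetical:

```javascript
// The work is an async generator that yields progress messages; a driver
// pumps it with next() and can "cancel" by not resuming after a given yield.
const CANCEL = Symbol('cancel');

async function* solve() {
  yield { progress: 0.2, stage: 'loading distance matrix' };
  yield { progress: 0.6, stage: 'sending problem to solver' };
  yield CANCEL; // pretend something decided we should stop here
  yield { progress: 1.0, stage: 'done' }; // never reached if the driver stops
}

async function run(gen, onProgress) {
  for await (const msg of gen) {
    if (msg === CANCEL) return 'cancelled'; // stop pumping; the rest is GC'd
    onProgress(msg); // e.g. send over a WebSocket in the real system
  }
  return 'finished';
}
```

Returning out of the `for await` loop closes the generator, so the final stage never runs, which is the "cancel by not calling next" trick.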
CHRISTOPHER_BUECHELER:So in this particular instance, your yields throughout your generator function are behaving sort of in the way that, you know, if you're running a console.log and trying to figure out where in your function something's breaking, and you put 10 console.logs in, the yields are kind of the same way, kind of the same setup, right? Where you're yielding at a specific point to say I've made this progress and now I'm moving on to this next part of the function. The difference being that what's listening to the generator function has to specifically call next to say, okay, gotcha, I know where you are and now move on to the next part.
VAL_KARPOV: Yeah, exactly. And then the framework around the async generator function can actually listen for what you're yielding. So you can think of it as, instead of putting in a console.log of a message, you build a message, and then you can have a framework around that which decides: oh, if the environment is development, I'm going to log that to the console; if the environment is production, I'm going to send that message out over the appropriate WebSocket.
CHRISTOPHER_BUECHELER:Nice. It's like building kind of your own framework for reporting on progress. That's very cool. It's almost like being able to return values in the middle of the function.
VAL_KARPOV: Yeah, that's exactly what a generator is. It's a function that lets you re-enter the function later.
CHRISTOPHER_BUECHELER:Gotcha. So like when you yield, that's like a return that still retains the internal state of the function. And then the function can be resumed later.
AJ_O’NEAL: So this is something that makes perfect sense for very large datasets or very long-running operations, like what you have with a cursor in a database where you want to iterate over a million records, something that is going to be really long-running. But for the love of all that is sacred and holy in this world, please, listeners: when you have an item that's five lines long and it's a bounded set that's never going to be more than five items, don't complicate your code this way. Please.
VAL_KARPOV: Yeah, that's fair. I guess it is tempting to use the latest and greatest, fanciest thing, but it can also be overkill. I mean, even in a case where I'm like, a basic generator function makes all the sense in the whole world, yeah, it can still be a little bit of overkill.
AJ_O’NEAL: It's an awesome tool in the toolbox. The reason I bring that up is because of the WHATWG specs. Basically everything WHATWG is doing new now uses iterators and really fancy crap. It makes me angry inside when you do new URL. Because it's like, okay, web browser, it should be able to parse the URL, and we haven't had that for 20 years. Oh, sweet, someone added a URL parser to the spec. Amazing. Now we have a URL object in the DOM and also in Node, right? This is great. So now we have a standard URL parser. So you parse the URL and console.log it. What do you get? An empty object, pretty much. And the query parameters, like, how many query parameters are you going to have? Two, three, like 12 on a really complicated bad day? And they made it into an iterator. So you have to write all this funky-do code around it just to console.log an object that shows you the two or three things that are inside of it. That's where my caution is. I absolutely believe they are an awesome tool in the toolbox for certain use cases. But gosh, when I see people implement them, it's for cool factor, or for, like, we're functional purists, we're writing better code, for all three items in our list. And that just drives me insane. Well, it doesn't drive me insane unless I'm the one that has to deal with it. So I guess don't publish your code if you're gonna do that type of thing. I don't ever wanna see it.
VAL_KARPOV: Yeah. To be honest, I do find it kind of annoying that with the Map class, the built-in JavaScript Map, when you call .keys or .values or .entries on the map, you get back an iterator, not an array. It seems a little frivolous, a little like, oh, hey, look, we have this new iterator thing, let's just use it for a lot of stuff. It seems silly just because the map is already in memory, right? So why do you need an iterator as opposed to an array?
AJ_O’NEAL: And for those of you that are stuck in the future and don't know about the boring old simple ways to do things, because I know that's how some people learn to program, with all the new stuff: Object.keys and Object.values, they're your friends.
VAL_KARPOV: Yeah, but then map.keys and map.values and map.entries return iterators, not arrays, whereas Object.keys, Object.values, and Object.entries return an array. It is a little annoying sometimes, but well, in that case, Array.from is your friend. Just wrap everything in Array.from and the problem goes away. Which is, yeah, I find myself using Array.from way too often for my taste.
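A quick illustration of the asymmetry, and the Array.from fix:

```javascript
// Map's keys()/values()/entries() hand back iterators, while Object.keys()
// and friends return arrays; Array.from bridges the gap.
const m = new Map([['a', 1], ['b', 2]]);

const keysIter = m.keys();         // a MapIterator, not an array
const keys = Array.from(m.keys()); // ['a', 'b']

// Same trick for URLSearchParams, which is also iterator-based:
const params = new URLSearchParams('x=1&y=2');
const pairs = Array.from(params.entries()); // [['x', '1'], ['y', '2']]
```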
AJ_O’NEAL: Oh, I did not know that. That's a pro tip right there. I didn't know you could pass an iterator into Array.from. Of course, I try to avoid iterators at all costs, but next time I have to deal with the URL object and getting query parameters, I will certainly use Array.from instead of writing like five lines of stupid iterator code.
VAL_KARPOV: Oh yeah. But then you'd better watch out: if you get an async iterator, Array.from is not your friend.
AJ_O’NEAL: I don't believe in those.
VAL_KARPOV: Oh yeah, it gets a little confusing when you've got, okay, we have async iterators, and now we have async generator functions that return an async iterator, and it just gets a little confusing after a while. Yeah, it would be nice if JavaScript helped unify async/await a little, or made async/await a little bit less of an edge case than it is right now. Another interesting feature request I've gotten a few times for Mongoose is support for async toJSON functions, specifically making JSON.stringify asynchronous, basically, and making it so that custom transformations in JSON.stringify can actually support asynchronous functions. But they don't right now. JSON.stringify has to be synchronous.
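This is why an async toJSON can't work with today's JSON.stringify: if toJSON returns a promise, stringify just serializes the promise object itself, which has no enumerable properties:

```javascript
// JSON.stringify is synchronous: if a custom toJSON returns a promise, the
// promise object itself gets serialized, and it has no enumerable properties.
const doc = {
  value: {
    toJSON: async () => 'resolved later', // returns a Promise, not a value
  },
};

const out = JSON.stringify(doc); // '{"value":{}}'
```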
AJ_O’NEAL: A little bit of a side note, but that is super, super, super, super important. JSON.stringify will kill you because it runs on the main thread, so there is no way of getting away from it. Because even when you're transferring messages between, like, Node cluster processes or worker threads in the browser, you're doing it through JSON, right? Like you don't get a memory reference to an object from another thread, unless there's some new improvement that I don't know about yet. But anyway, JSON objects that are really small, that have five keys, which is what you're passing around 90% of the time, no problem. But you JSON.stringify a returned list of 1,000 items, 10,000 items, and the processing time grows much faster than you'd expect. It is meant for small things. One time I was doing premature optimization, like almost all optimization is premature, but I was like, okay, I'm going to get some performance increase out of this. I'm going to run Node cluster, I'm going to do this over here and this over here. But the data that I needed to transfer between the processes I was JSON.stringify-ing, and the JSON.stringify, I think, decreased the performance of my app. And console.log is like that too. A lot of people don't know console.log is synchronous. Console.log will kill your app. So unless you're wrapping it with, like, an if-debug console.log, all those console.logs are on the main thread and they are slowing down your server. They're limiting the number of requests that you can take per second. Which again, for, like, console.log of a debug value, true/false, not a problem. Console.log of this object I returned from the database with 600 items, very much a problem.
CHRISTOPHER_BUECHELER:That also seems like if you're doing it on the server, it's a really good way to bloat your log files very quickly.
VAL_KARPOV: There was actually an issue that I ran into a couple of months ago on the routing project that I'm working on, where, yeah, the issue was the server was hanging. Why? Because I was console.logging an Axios response, and the, what do you call it, the request body and the response body were huge, multiple megabytes. So that console.log was grinding the whole server to a halt.
AJ_O’NEAL: Well, I mean, if you just console.log the node request object, it is huge because it has so many properties on it and goes so many levels deep.
CHRISTOPHER_BUECHELER:Yeah, it's gigantic.
VAL_KARPOV: Yeah, exactly. Because the response also, well, at least with Axios, I forget whether it does with node. The response object includes the request object as well. So if you have a huge request body and then a huge response, well, then that gets really bad real quick.
AJ_O’NEAL: And then it's accessible from like four different locations like dot underscore connection, dot underscore stream, dot underscore request, dot underscore response. So not only do you have it, but you have it like, you know, four times for each, you know, API shim that they have in there where it's referencing what it used to be called in the previous version of node or a shortcut way that if you're in this object and you need to get to it from the this inside, da da da. Yeah, it's, it's terrible.
CHARLES MAX_WOOD: Nice. Anything else that we should attack on this before we go to picks? Sounds like we've covered a lot of this. Val, if people want to follow you online, where do they go?
VAL_KARPOV: Um, you can find me on GitHub, vkarpov15. You can find me on Twitter at code underscore barbarian.
CHARLES MAX_WOOD: Cool. One of the biggest pain points that I find as I talk to people about software is deployment. It's really interesting to have the conversations with people where it's, I don't want to deal with Docker, I don't want to deal with Kubernetes, I don't want to deal with setting up servers, all of these different things. And in a lot of ways, DevOps has gotten a lot easier. And in a lot of ways, DevOps has also kind of embraced a certain amount of culture around applications, the way we build them, the way we deploy them. I've really felt for a long time that developers need to have the conversations with DevOps, or adopt some form of DevOps, so that they can take control of what they're doing and really understand, when things go to production, what's going on, so that they can help debug the issues and fix the issues and find the issues when they go wrong, and help streamline things and make things better and slicker and easier so that they'll more generally go right. So we started a podcast called Adventures in DevOps. I pulled in one of the hosts from one of my favorite DevOps shows, Nell Shamrell-Harrington from the Food Fight show, and we got things rolling there. And so this is more or less a continuation of the Food Fight show, where we're talking about the things that go into DevOps. So if you're struggling with any of these operational type things, then definitely check out Adventures in DevOps. And you can find it at adventuresindevopspodcast.com. Well, let's go ahead and do some picks. AJ, do you want to start us off with picks today?
AJ_O’NEAL: Oh gosh darn it, do I ever, except I don't remember what I had. Oh no, I've got one, I've got one for sure. Okay, so. Chuck, you're gonna have to help me with his name, because maybe you know how to say his last name. I just call him Ethan, but his last name is Garofalo. Do you know him?
CHARLES MAX_WOOD: I think I've heard the name.
AJ_O’NEAL: Ethan Garofalo, man. He's gonna, well, he's not actually gonna be that upset, because he probably knows that even though I've known him for years, I don't say his last name ever. But anyway, he is pretty much the world expert on microservices. And he came to mind while we were talking about canceling, because one of the things he recommends, and I think this is perfectly valid advice, is that anytime you have a request-response cycle that doesn't execute in constant time, meaning that depending on what the request is, the response may require a variable amount of processing, or there are actions that can't be guaranteed to return something basically instantly, basically if something's not in memory, more or less, you use this event pattern. And I think it totally makes sense. You do a request to fire off an event, and the response you get back is just "request made." That's the response you get back. And then you do either some sort of polling or WebSocket or something like that to check for whatever the system needed to do with that event, to check for its completion. And this also gives you the opportunity, if you have a particularly long-running event and you needed to add a cancel API on the server side, to do that pretty darn easily: you send a message and say, okay, that event I requested, cancel it. And then potentially, and this is not something that he recommends, the canceling part, this is something I'm inserting, you could say, okay, if this thing has multiple pipelines, I can just insert a message so that when it finishes its current stage in the pipeline, it knows not to continue on to the next stage. But the basic idea in general stands regardless; the canceling part has nothing to do with it, it's just something I thought of.
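The request-acknowledge-poll pattern AJ describes might be sketched like this; the function names and the in-memory job store are made up for the sketch, and setImmediate stands in for a real work queue:

```javascript
// The request just enqueues the work and immediately acknowledges it; the
// client polls for completion, and a cancel hook slots in naturally.
const jobs = new Map();
let nextId = 0;

function requestWork(input) {
  const id = String(++nextId);
  jobs.set(id, { status: 'pending', result: null });
  setImmediate(() => {
    const job = jobs.get(id);
    if (job.status === 'cancelled') return; // honor a cancel sent in the meantime
    job.status = 'done';
    job.result = input * 2; // stand-in for the real long-running computation
  });
  return { status: 'request made', id }; // immediate acknowledgement
}

function pollWork(id) {
  return jobs.get(id); // client checks this until status is 'done'
}

function cancelWork(id) {
  const job = jobs.get(id);
  if (job && job.status === 'pending') job.status = 'cancelled';
}
```

In a real system the three functions would be HTTP endpoints (or WebSocket messages) and the job store would be durable, but the shape of the protocol is the same.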
And he has a number of talks available free that he's done at user groups, or that when he's done them at companies, he's allowed them to publish. So he's got some stuff on YouTube. I'll link to a playlist of things. I'm not going to say that one is better than the other, I'm not ordering them. At this point, I'm just going to put it together so I can put it in the show notes. But I highly recommend watching his talks, and it might take two or three of them before you kind of get it, not because he's not good at explaining it, but because many of us are not actually familiar with what a microservice really is. We hear the buzzword thrown around the office, but what people are really doing is creating monoliths that are more complicated, because instead of running in one server, they run in many servers. And the way that he talks about it is true microservices, where you are actually separating concerns, you're actually making sure that servers can run independently, that if an error happens with one, the rest of the system can continue to work. So he really is just that expert consultant on microservices and related event design patterns. So I'm picking him and his talks. And then I'd pick something else, but I'll just pick something next time because I don't remember what else I had.
CHARLES MAX_WOOD: All right, Christopher, what are your picks?
CHRISTOPHER_BUECHELER:So my pick this week is actually a book by a guest that we just had on this show. It's Functional Design Patterns for Express.js by Jonathan Lee Martin. We interviewed him a couple shows ago and I was really impressed with what he had to say. I went out and bought the book. I'm working my way through it. I'm nowhere near done at this point, but it's a really well put together book. I'm really enjoying the pacing of it. And I think that in general, it's one of the more readable code books I've found. He talked on the show about putting together various publication paths so that he could show diffs easily in code and that kind of thing. And it really, really helps when you're going through all the code examples. So if you work with Express.js and you're at all interested in working in a more functional manner, then I recommend it. Again, it's Functional Design Patterns for Express.js by Jonathan Lee Martin.
VAL_KARPOV: Yeah, if you can drop a link to that, I'll buy that book like immediately.
CHRISTOPHER_BUECHELER: Absolutely will do.
VAL_KARPOV: Yeah, that sounds like a really good read.
CHARLES MAX_WOOD: Yeah, like on the show, it was funny because I think two of the hosts bought it during the show.
CHRISTOPHER_BUECHELER:Yeah, both of the panel members, I did and I think it was Jason also did.
CHARLES MAX_WOOD: Yeah, good stuff. I'm gonna jump in here with a couple of picks of my own. First of all, I just got off a call this morning with the folks putting on Microsoft Ignite, which is a conference done by Microsoft. They usually talk about a lot of Microsoft technologies, but they're always doing interesting things with Microsoft Azure and stuff like that. And so if you're using any of their services at all, we're gonna be doing a couple of shows at the conference there, so definitely check that out. My friend Richard Campbell is involved in organizing some of that stuff, and I'm really looking forward to catching up with some folks that I've seen at some of the other events as well. So yeah, keep an eye out for that. Those episodes will probably be coming out sometime in October, or sorry, in late November, because the conference is the beginning of November. So looking forward to that. I'll probably also wind up at KubeCon in San Diego, and I'm going to be in San Francisco mid-October, so I'm going to try and pull together some meetups. So if you want to come out and hang out and grab dinner and stuff like that, if you go to devchat.tv, there should be an events tab at the top, so just click on that and we'll see if we can line up a time for you and I and whoever else to get together. So yeah, anyway, I love connecting with people and that's just kind of an opportunity that I have there. And then finally, I'm in the process of pulling together a membership for listeners. It's funny, because people keep telling me, well, just do an Open Collective or just do Patreon. And the issue is that Patreon is kind of a pain for giving people extras for supporting the shows, and I really do want to provide extra value for people who are providing extra value to me. And Open Collective is another one where, again, it's made more for people to donate than it is for people to get value back.
The idea behind this membership site, and I really want to create a movement behind it, is called Max Coders. The idea is, well, Max is part of my name. It was also my dad's name. And I'll explain the whole thing; I'll probably do an episode on it. We're going to do monthly Q&As, so if people have questions about their careers or technology or anything, we can answer them. I'll probably wind up bringing on some of my cohosts for those, or expert guests you can ask questions. And then I'm also going to be doing more of a webinar-style thing and bring in experts for that kind of thing too. And then I'm kind of toying with creating basically add-on memberships for each of the topics that we have shows on, so JavaScript, Angular, React, Vue, Ruby, Elixir, React Native, iOS development, freelancing, etc., and giving people kind of an opportunity to max out. And that's kind of the tagline for the whole thing: maxing out your skills, maxing out your life. The general material that you're gonna get as part of that is just gonna be videos from me explaining how you can level up and learn more and stay current and all that good stuff. We're gonna be focused on providing you other ways to max out while we're going and keep you current that way. So anyway, if you go to maxcoders.io, it should be up by the time this goes live, and we'll be ready to roll. I'm probably also going to do kind of a launch sequence where, like, the first 20 people get it for a low price and then the next 20 people get it for a slightly less low price. We'll just kind of move up from there and see where we wind up as people are filling spots. So the best way to stay on top of that is to get on the mailing list. If you go to devchat.tv, there are a lot of different places on there where you can put your email address in. That'll get you on the list. Then we can let you know when it's coming out, and we'll roll with it that way.
I kind of had this idea a while ago with, like, EverywhereJS and EverywhereRB, and I just didn't get enough people signing up for that. So anyway, that's kind of where we ended up. But yeah, maxcoders.io is where we're heading with that. I'm actually rebranding the Get a Coder Job book, so it's The Max Coders Guide to Finding Your Dream Developer Job. And I'm hoping to write more books as well and get those out there, so, you know, we'll have The Max Coders Guide to Staying Current with Technology and The Max Coders Guide to whatever else. So we'll see how that goes too. But that's the direction that I'm heading. In fact, I'm kind of playing with the idea of putting together a behind-the-scenes podcast for devchat.tv and just what we're working on there. Anyway, I have rambled for long enough. Val, do you have some picks?
VAL_KARPOV: Yeah, sure. For the last few months, I've been working on that JavaScript tutorial site, masteringjs.io. The content is all 100% free: kind of short, bite-sized articles about full-stack JavaScript, focusing on Vue and Express. So check it out, sign up for our mailing list, get some really good quality content, and level up your career. And for another, more fun pick for late summer reading, I've been reading Jurassic Park, the Michael Crichton novel that the movie was based on. It's a really fun read, and it's really fun to geek out about. I've never seen a book where it actually has illustrations of graphs, and those graphs are actually important to the plot. So it's actually pretty fun. I'm having a blast. It's a little different from the movie, but I think it stands on its own. Even if you've seen the movie and you loved it, check out the Jurassic Park novel.
CHARLES MAX_WOOD: Yeah. The book was really good.
VAL_KARPOV: I really liked that whole, um, there was that normal distribution plot where Dr. Malcolm was just like, "Oh, now I know that dinosaurs have escaped." And you're like, "Wait, what? How do you know that? It's just a normal distribution. What's wrong here?" And then they explain it a few pages later, and you're like, "Oh my God, that's so cool." They really thought that out well.
CHARLES MAX_WOOD: Nice. All right. Well folks, go check out, what was it, JSmastery.io?
VAL_KARPOV: Masteringjs.io.
CHARLES MAX_WOOD: Masteringjs.io. I knew I'd goof that up. So thank you. Yeah, masteringjs.io, and follow Val on Twitter as well. We'll wrap this up, and we'll have more JavaScript coming at you later this week or next week. Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. To deliver your content fast with CacheFly, visit c-a-c-h-e-f-l-y.com to learn more.