JSJ 398: Node 12 with Paige Niedringhaus
Special Guests:
Paige Niedringhaus
Show Notes
Guest Paige Niedringhaus has been a full-time developer for 3 years, and today she is here to talk about Node 12. One of the things she is most excited about is the ES6 module support that is now available, so syntax that used to be the province of front-end frameworks like React, Angular, or Vue can now be used in Node. The require function will no longer be necessary in Node 12. AJ is worried about some of these changes and expresses his concerns. Paige assures him that in the beginning you won't have to switch things to imports. You may have to change file extensions so Node can pick up which module type it's supposed to be using. They are also keeping it compatible with CommonJS.
Node 12 also boasts an improved startup time. The panel discusses what specifically this means. They talk about the code cache and how Node caches the built in libraries that it comes prepackaged with. The V8 engine is also getting many performance enhancements.
Paige talks about the shift from promises to async/await. In Node 12, async functions will actually be faster than promises. They discuss some of the difficulties they've had in the past with async/await, and especially with callbacks.
Another feature of Node 12 is better security. Transport Layer Security (TLS), which is how Node handles encrypted streams of communication, is upgrading to 1.3. The protocol is simpler to implement, quicker to negotiate sessions between applications, provides increased end-user privacy, and reduces request time. Overall, this means less latency for everybody. 1.3 also gets rid of the edge cases that caused TLS to be far slower than it needed to be.
The conversation turns to properly configured default heap limits that prevent 'out of memory' errors. Configuring heap limits used to be necessary when constructing an incredibly large object or array of objects. Node 12 also offers formatted diagnostic summaries, which can include information on total memory, used memory, memory limits, and environment variables. It can report on uncaught exceptions and fatal errors. Overall, Node 12 is trying to help with the debugging process. They talk about the different parsers available and how issues with keypair generation in Node have been solved.
Paige talks about using worker threads in Node 12. Worker threads are really beneficial for CPU-intensive JavaScript operations. They exist for the tasks that eat up all of your memory: they can alleviate the load and keep your program running efficiently while doing their own operations on the sideline, returning to the main thread once they've finished their job. None of the panelists have really used worker threads, so they discuss why that is and how they might use worker threads in Node 12.
In addition, Node 12 is making native module creation and support easier, along with support for all the different binaries a Node developer would want to ship. Paige makes it a point to mention the new compiler and minimum platform standards. They are as follows:
- GCC minimum 6
- glibc minimum 2.17 on platforms other than Mac and Windows (i.e. Linux)
- Mac users need at least Xcode 8 and macOS 10.10
- If you've been running Node 11 builds on Windows, you're up to speed
- Linux binaries supported are Enterprise Linux 7, Debian 8, and Ubuntu 14.04
- If you have different requirements, go to the Node website
Panelists
- JC Hiatt
- Steve Edwards
- AJ O’Neal
With special guest: Paige Niedringhaus
Sponsors
- Tidelift
- Sentry: use the code "devchat" for 2 months free on Sentry's small plan
- Sustain Our Software
Links
- Async
- CommonJS
- njs
- Promise
- Node
- Event Stream
- llhttp
- llparse
- LLVM
- Papa Parse
- JSON.stringify
- JSON.parse
- Optimizing Web Performance TLS 1.3
- Overclocking SSL
- Generate Keypair
Picks
JC Hiatt:
- AWS Amplify framework
- 12 Rules for Life: An Antidote to Chaos by Jordan Peterson
- React and Gatsby workshops
Steve Edwards:
- The Far Side comic coming back?
AJ O’Neal:
Paige Niedringhaus:
Transcript
Hey folks, I'm a super busy guy and you probably are too. You probably have a lot going on with kids going back to school, maybe some new projects at work. You've got open source stuff you're doing or a blog or a podcast or who knows what else, right? But you've got stuff going on and if you've got a lot of stuff going on, it's really hard to do the things that you need to do in order to stay healthy. And one of those things, at least for me, is eating healthy. So when I'm in the middle of a project or I just got off a call with a client or something like that, a lot of times I'm running downstairs, seeing what I can find that's easy to make in a minute or two, and then running back upstairs. And so sometimes that turns out to be popcorn or crackers or something little. Or if not that, then something that at least isn't all that healthy for me to eat. Uh, the other issue I have is that I've been eating keto for my diabetes and it really makes a major difference for me as far as my ability to feel good if I'm eating well versus eating stuff that I shouldn't eat. And so I was looking around to try and find something that would work out for me and I found these Factor meals. Now Factor is great because A, they're healthy. They actually had a keto line that I could get for my stuff and that made a major difference for me because all I had to do was pick it up, put it in the microwave for a couple of minutes and it was done. They're fresh and never frozen. They do send it to you in a cold pack. It's awesome. They also have a gourmet plus option that's cooked by chefs and it's got all the good stuff like broccolini, truffle butter, asparagus, so good. And, uh, you know, you can get lunch, you can get dinner. Uh, they have options that are high calorie, low calorie, um, protein plus meals with 30 grams or more of protein. Anyway, they've got all kinds of options. So you can round that out, you can get snacks like apple cinnamon pancakes or butter and cheddar egg bites, potato, bacon and egg, breakfast skillet. You know, obviously if I'm eating keto, I don't do all of that stuff. They have smoothies, they have shakes, they have juices. Anyway, they've got all kinds of stuff and it is all healthy and like I said, it's never frozen. So anyway, I ate them, I loved them, tasted great. And like I said, you can get them cooked. It says two minutes on the package. I found that it took it about three minutes for mine to cook, but three minutes is fast and easy and then I can get back to writing code. So if you want to go check out Factor, go check it out at factormeals. Head to factormeals.com slash JSJabber50 and use the code JSJabber50 to get 50% off. That's code JSJabber50 at factormeals.com slash JSJabber50 to get 50% off.
JC_HIATT: Hello, welcome to JavaScript Jabber. I am JC Hiatt, and I'm hosting today's episode. On today's panel, we have Steve Edwards.
STEVE_EDWARDS: Hola from the Portland, Oregon area.
JC_HIATT: We have AJ O'Neal.
AJ_O’NEAL: Yo, yo, yo, comin' at you live from sunny Provo.
JC_HIATT: And today's guest is Paige Niedringhaus. Want to say hello?
PAIGE_NIEDRINGHAUS: Hey everybody, Paige from Atlanta, Georgia.
JC_HIATT: Tell us a little bit about your background and how you got here.
PAIGE_NIEDRINGHAUS: Sure. I've been a developer now for about three years, full-time. Before that, actually, I was working in digital marketing right after I graduated from college. So I made the switch, joined a coding boot camp, and I was hired right after that to work at Home Depot. And I've been here ever since, first as a software engineer and now as a senior, just in the last couple of months. And it's been an incredible journey. I'm so, so glad that I made that switch.
JC_HIATT: Yeah, nice. And congrats on the promotion.
PAIGE_NIEDRINGHAUS: Thank you.
This episode is sponsored by Tidelift, the enterprise-ready, managed-for-you open source software solution. Tidelift provides commercial support and maintenance for the open source dependencies you use to build your applications, backed by the project maintainers. Save time, reduce risk, and improve code health. The Tidelift subscription is managed open source for application development teams. It covers millions of open source projects across JavaScript, Python, Java, PHP, Ruby, .NET, and more. Your subscription includes security updates from Tidelift's security response team, which coordinates patches for new breaking security vulnerabilities and alerts you immediately through a private channel, so your software supply chain is always secure. Tidelift also verifies license information to enable easy policy enforcement and adds intellectual property indemnification to cover creators and users in case something goes wrong. You always have a 100% up-to-date bill of materials for your dependencies to share with your legal team, customers, and partners. Tidelift ensures the software you rely on keeps working as long as you need it to work. Your managed dependencies are actively maintained, and additional maintainers are recruited when required. Tidelift helps you choose the best open source packages from the start and then guides you through updates to stay on the best releases as new issues arise. Take a seat at the table with the creators behind the software you use. Tidelift's participating maintainers earn more income as their software is used by more subscribers, so they're interested in knowing what you need. Tidelift supports GitHub, GitLab, Bitbucket, and more. They support every cloud platform and other development targets too. The bottom line is you get all the capabilities you expect and require from commercial software, but now for the key open source software you depend on. Check them out at devchat.tv slash tidelift.
JC_HIATT: So I know we've got a few kind of bullet points of discussion here, but just as a high level, what are you most excited about from this new release?
PAIGE_NIEDRINGHAUS: Well, with Node 12, the thing that I am most excited about, and it's actually still behind one of the experimental flags, is the ES6 support that they have. Module support is almost here, and it makes things that we take for granted on the front end of JavaScript just super easy. It's things like relative URLs, package names, default imports and exports, all those things that we just do in React, in Vue, in Angular without even thinking about it, finally coming to Node. So you won't have to write require, you won't have to write module.exports. We're going to be able to use the same syntax across the board from front end to back end, and that's probably the thing that I personally am most excited about.
AJ_O’NEAL: That scares me so much. There's going to be so... Oh... So I... I'm well known for liking JavaScript and not liking ECMAScript and TypeScript. And I just think require is so simple and straightforward and easy and these other things, they require so much tooling and so much like hoopla and they never work right. I mean like basically either you webpack or you just don't develop. And I, I'm from the old school of like, you hit F5 and it works. Hallelujah. So I'm actually, I'm actually scared about that change because I think it's just, there's, oh, how is it going to work between the two? Cause one of the problems I hear is that like, I've got modules that are meant to work both in the browser and in node. And that worked fine for a long time. And then you started doing this import stuff. And then I started getting bug reports about how my modules are suddenly breaking. And somehow this is my fault, you know? So what's going to happen there? Are we going to have to all switch over? Is it going to, am I going to be able to require one of these other things or am I going to have to switch all my stuff to imports or, oh, what's going to happen?
PAIGE_NIEDRINGHAUS: Well, at least in the beginning, you won't have to switch everything to import. You may have to change some of your file endings though. They're going to have different file extensions. So modules will end with a .mjs extension and CommonJS files will be .cjs. So with CommonJS, you'll still be able to use require, module.exports, all that syntax that you're used to. You just may need to change some of the file types so that Node can correctly pick up which type it's supposed to be using.
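As a rough sketch of how those extensions behave (the file names here are illustrative, and in Node 12 ES modules still sit behind the --experimental-modules flag):

```js
// math.mjs — the .mjs extension tells Node to treat this file as an ES module
export function add(a, b) {
  return a + b;
}

// app.mjs — uses import syntax; in Node 12, run with:
//   node --experimental-modules app.mjs
import { add } from './math.mjs';
console.log(add(2, 3)); // 5

// legacy.cjs — a CommonJS file keeps using require/module.exports as before:
//   const { add } = require('./legacy-math.js');
//   module.exports = { add };
```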
AJ_O’NEAL: But what about like when I've got, you know, something I'm writing and somebody wants to use it, are they not gonna be able to use it? Cause it's common JS and they're writing an MJS file? I mean, is this gonna be like bifurcation? You're either on one side of the fence or the other?
PAIGE_NIEDRINGHAUS: Maybe for a little bit. I'm not exactly sure how they're going to handle that, but I know that they are, you know, trying to make it work for everything.
AJ_O’NEAL: Sorry, I derailed us. Y'all go back to what you're doing there. I'll be over here, just preaching gloom and doom.
PAIGE_NIEDRINGHAUS: So that's one of the things that I'm most excited about with what's coming in Node 12, but there are some great enhancements that they're doing in other areas as well. They've improved the startup time. So with Node 11, we got about a 60% startup time improvement for worker threads by using the built-in code cache support, but with Node 12, they've managed to speed it up by another 30%. So applications are going to get to their users faster. They're going to have better experiences. It's really pushing the envelope in terms of trying to be as dynamic and quick as possible, which is pretty cool.
AJ_O’NEAL: So does this mean modules are going to load faster or just the node binary is loading faster?
PAIGE_NIEDRINGHAUS: The main thread. So the main thread that uses the code cache will be starting up faster. So any built-in library that JavaScript is using should be quicker to load in.
AJ_O’NEAL: What is the code cache? This is something that I'm not familiar with.
JC_HIATT: Yeah, I'm unfamiliar with that too.
PAIGE_NIEDRINGHAUS: Let me see if I can find a good example of how to describe it. I think it's just how they cache the built-in libraries that Node comes prepackaged with, or any of the libraries that you end up requiring in your Node application. I might have to get back to you on exactly what that is, because I didn't do a huge deep dive into it.
AJ_O’NEAL: Okay, I'm just curious, because that's something that I was trying to optimize at one point, and I thought, well, maybe if I can cat things together, it'll be faster, or da da da da da. But what ended up seeming to take the most amount of time back when I was trying to optimize it, which was a long time ago, was the VM compile step. And so I would be interested to know... doesn't it start out interpreted now and then switch to JIT? I thought V8 introduced that at some point.
PAIGE_NIEDRINGHAUS: I am honestly not sure if that is the case or not, but I know that the V8 engine, now that you've mentioned it, is getting a lot of performance improvements as well. They've got new zero-cost stack traces, so without adding extra runtime you can keep an eye on what's happening in your error stack, faster calls with async and await instead of promises, faster JavaScript parsing. So it's very possible.
JC_HIATT: Can we go a little more detailed on the async await stuff? Because, so, are we saying that using async functions is faster than promises themselves?
PAIGE_NIEDRINGHAUS: I am, yeah. This is something that's really cool, and I learned about it, I don't know, a month or two ago. The V8 team actually was able to take the async await syntax and make it two microticks, and I can't even tell you how quick a microtick is, but it's actually two microticks faster than promises. So yeah, they were able to drop... I think there was an extra promise that had to be concatenated on before when you were using async await, but with the newest release, they've actually been able to drop that and improve the performance of it.
JC_HIATT: So me calling async on some sort of fetch request or something like that is going to be a couple of microticks, I don't know what a microtick is either, but a couple of microticks faster than chaining .thens onto a fetch request.
PAIGE_NIEDRINGHAUS: It is, it's actually going to be faster.
JC_HIATT: Cool.
AJ_O’NEAL: Why don't they speed up promises?
PAIGE_NIEDRINGHAUS: Because ES6 is the way of the future, man.
AJ_O’NEAL: Oh, he's holding on.
JC_HIATT: He's clinging.
JC_HIATT: Oh man. Well, no, it kind of throws my brain for a loop, right? Because the way I thought of async await was that it is promises under the hood, and I know that it is, but it kind of throws my mind for a loop a little bit to say that it's faster than the thing that it's using under the hood. I don't know how to reconcile that.
PAIGE_NIEDRINGHAUS: Yeah, it took me a long time to get on board with async and await personally. I grew up using callbacks, and then promises when I saw the great things that they provided. And I liked them. I liked chaining on catches and all that error handling. But when one of my coworkers actually suggested that we start using async await, I wanted a better reason than it's ES6 and that's the hot new thing. So when I started looking into it, and we found out that Chrome actually does optimize for async await, on top of it also being an easier syntax to learn, it being a little bit more like the synchronous type of JavaScript code that we're used to writing for most everything else, I got on board, and I haven't really looked back, honestly.
AJ_O’NEAL: So here's the question in that transition. This is something I've seen, and this is why I don't use them, but I'm curious to hear your experience. What I've noticed is that people get really confused about async await because, well, I mean, we've got this weird, weird, weird time that we live in when it seems like the software world is 90% junior developers, because every year it's growing like 20, 25%. So within five years, we're now at like nearly a hundred percent junior developers, with like, I don't know, 20 percent being junior to mid-level, and then you've got like 1% senior developers now, just because the whole world has become software developers. And so people struggle with some of the basics, because they're just kind of thrown into a boot camp and thrown into, like, here, copy and paste this, you'll get paid. And then with promises, it's super clear. I think, well, maybe not to everybody, but it seems to be much clearer with promises that there's something that's asynchronous, that the way the code is working is not, you know, line by line, but rather by events. But with async await, people lose that context that events are actually happening, because it looks like it's line by line. And then people start to make choices as if it's actually executing line by line, and sometimes get themselves into weird situations, just kind of creating something that's convoluted, because they think that it's natural and they don't know that it's just events. That's really, aside from not being compatible everywhere, which is becoming less and less of a problem day by day, those are the two reasons I haven't jumped on board: compatibility, and then just developer confusion around what are events versus what's happening in real time. That's not exactly the right word, but whatever. So what are your thoughts on that?
PAIGE_NIEDRINGHAUS: I can definitely see where you're coming from. I, like I said, started with callbacks, moved to promises, held on to promises for quite a while.
AJ_O’NEAL: And callbacks were terrible. I think we all agree that that was a disaster, right? Is that?
PAIGE_NIEDRINGHAUS: Oh yeah. No, no one wanted to deal with that. Never, never again. I, like you, really liked promises. I liked seeing, you know, the .then, and then my response or my failure, and, you know, going on with the code. I think what async await eliminates, though, is some of the nested promises, because you can get into a promise chain where you're doing pretty much the same thing as callbacks. You have to get data, do another call for another promise, fetch more data, do some more things with that data. So you can get nested down pretty deep. Whereas with async await, if you know that you're doing a fetch of data from some other endpoint, some API, you can say, here's this asynchronous function that I'm gonna call, here's a constant that I'm setting to data, here's the await where I actually make that call to that API that's somewhere out in space. And then once I get that data back... I don't have to think about the catch, I don't have to think about the error handling, I just know that it will either come back or I'll get some kind of a failed error response, and then I can handle that however I want to. So I think the onus is on the developers themselves to do that deeper digging when they are introduced to promises or callbacks or async await, to kind of understand what's going on under the hood, or for the development team to be able to explain: this is how it works, this is what's actually happening on the event loop, you know, this is the call stack. The boot camp that I went to was four months long, and in 16 weeks we learned all of the non-ES6 stuff, because I went to it three years ago at this point, so ES6 was just starting to heat up. So we learned traditional JavaScript from the vanilla beginnings, and I can't even tell you how many times my instructor would reiterate the event loop, the call stack, taking things on and off of it as they were being executed. So yeah, I would agree with async await obscuring it a little bit, but I think if you really want to be a good developer and really go to the next level, that's one of the things that you just have to take the time to reacquaint yourself with, learn more about, read up on, write about, teach somebody else about, because that will really help solidify and make you understand things that you had never clarified before. And data fetching is one of those things that's so ubiquitous for pretty much all of our applications. You need to be able to do it, and you need to be able to understand how to do it.
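For anyone who wants to see the flattening Paige describes side by side, here's a minimal sketch; fetchUser, fetchOrders, and fetchTotals are hypothetical stand-ins for real HTTP calls:

```js
// Hypothetical helpers; imagine each one wraps an HTTP call.
const fetchUser = (url) => Promise.resolve({ id: 1, name: 'Ada' });
const fetchOrders = (userId) => Promise.resolve([{ total: 40 }, { total: 2 }]);
const fetchTotals = (orders) =>
  Promise.resolve(orders.reduce((sum, o) => sum + o.total, 0));

// Promise chains drift back toward callback-style nesting when later
// steps need values from earlier ones:
function getReportWithThen() {
  return fetchUser('/api/user')
    .then((user) => fetchOrders(user.id)
      .then((orders) => fetchTotals(orders)
        .then((totals) => ({ user, orders, totals }))))
    .catch((err) => console.error('report failed:', err));
}

// The same flow with async/await reads top to bottom, with one try/catch
// covering every step:
async function getReportWithAwait() {
  try {
    const user = await fetchUser('/api/user');
    const orders = await fetchOrders(user.id);
    const totals = await fetchTotals(orders);
    return { user, orders, totals };
  } catch (err) {
    console.error('report failed:', err);
  }
}

getReportWithAwait().then((report) => console.log(report));
```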
AJ_O’NEAL: 100% agreed. So, other features that Node 12 has, other than making things... oh man, whenever somebody says the word faster, it's over. Like, it's over, whatever it is. If it's faster by one microtick in a benchmark, that's the new hotness. Whatever it was before is completely erased from history.
PAIGE_NIEDRINGHAUS: Absolutely.
AJ_O’NEAL: That's going to be interesting. What else is new?
PAIGE_NIEDRINGHAUS: There's better security. How about that? You like more secure applications?
AJ_O’NEAL: I do.
JC_HIATT: Sometimes.
PAIGE_NIEDRINGHAUS: Sometimes, yeah. Sometimes it makes our lives easier. So with Node 12, TLS, which is Transport Layer Security, which is how Node handles encrypted streams of communication, is upgrading to version 1.3. I know that this doesn't sound like a major upgrade from 1.2, but the protocol is actually simpler to implement, so it's easier for developers to configure and to write. It's quicker to negotiate sessions between the applications. It provides increased end-user privacy and reduces request time, because it's quicker with the actual HTTPS handshake between apps, and it's less latency for everybody. So that should actually provide some major benefits for everybody, as well as making applications more secure from the get-go.
STEVE_EDWARDS: So in other words, AJ, it's faster
AJ_O’NEAL: Here's the thing with TLS 1.3 and the handshake stuff. I could be slightly off here, but I'm pretty sure this is directly related to HTTP2, because one of the optimizations Google couldn't make was shortening the handshake. Most servers would allow... basically, you could skip a step, because you knew what the handshake was gonna be. You knew you were gonna send, like, hey, I want A, and then you're gonna get back, well, you know, I have B, and then you're going to send C. So since you knew what the response B was going to be, you could just rapid-fire send both A and C at the same time, and most servers would accept that. So Google was able to cut out the handshake and reduce it by like 50%. But there would be a couple of strange cases where those requests would get dropped. So like 99%, which is a really, really, really low percentage, that means that one out of every hundred, you know, is going to fail. So like 99% of web servers, which is nowhere near close enough, would work with it, but a whole whopping, huge, gigantic 1%, which is a significant portion of the web, wouldn't be able to do it. So I think this upgrade with 1.3 is so that they can get that handshake down on all servers that support 1.3, and their dreams of HTTP2 will finally be complete when this gets adopted everywhere.
PAIGE_NIEDRINGHAUS: That would be awesome. I did not know about those servers that would actually drop it. So that's fantastic to know that that's part of what they might be addressing.
AJ_O’NEAL: They had to drop part of what they were doing. Like they reversed it. They did one of those things where they tested it out to see like, how does this work in the web? And it was unfortunately a failure. I'll try to link to that article. I'll try to find it and link it to the show notes. But yeah, so I'm excited about 1.3 If I understand correctly, it gets rid of this tiny little itsy bitsy edge case that caused TLS connections to be way slower than they needed to be.
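For anyone who wants to verify what actually gets negotiated, Node's built-in tls module can pin a client connection to TLS 1.3. A minimal sketch (the host and port are illustrative):

```js
const tls = require('tls');

// Require TLS 1.3 at both ends of the allowed protocol window.
const socket = tls.connect(
  { host: 'example.com', port: 443, minVersion: 'TLSv1.3', maxVersion: 'TLSv1.3' },
  () => {
    console.log('negotiated protocol:', socket.getProtocol()); // 'TLSv1.3'
    socket.end();
  }
);
socket.on('error', (err) => console.error('handshake failed:', err));
```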
PAIGE_NIEDRINGHAUS: Yeah, very cool, nice. Okay, so here's something that might be interesting to you, because we all encounter bugs, and every once in a while we encounter max memory size issues where we will stack overflow the browser. There is a way now that we can properly configure default heap limits. So before, the JavaScript heap size defaulted to the max that was set by V8, so unless you manually configured it otherwise, you would just default to whatever V8 said was the max memory size. But with Node 12, the JavaScript heap size will actually be configured based on available memory, so this ensures that Node doesn't try to use more memory than is available and terminate processes when its memory is exhausted, which is typically what's happened up to this point. So you're gonna get a lot fewer out-of-memory errors than you would have in the past.
JC_HIATT: Nice, and this is just baked in?
PAIGE_NIEDRINGHAUS: It is. There is a flag called max-old-space-size that you can pass in, which is what we would typically use on a Node command line to actually change the memory size available, but this properly configured default is just baked in with Node 12.
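A quick sketch of both approaches (the 4096 MB value is just an example):

```js
// check-heap.js — prints the heap ceiling V8 was configured with
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`heap limit: ~${Math.round(limitMb)} MB`);

// Before Node 12, you would raise the ceiling by hand:
//   node --max-old-space-size=4096 check-heap.js
// In Node 12, the default is derived from the memory actually available.
```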
JC_HIATT: What are some, like, or what is a common scenario where, like, you may have encountered an out of memory error in the past?
PAIGE_NIEDRINGHAUS: One that I encountered was when I was doing a little project for myself, parsing through census data for the US government. They had given me a file that was, I don't know, like three and a half gigs worth of data, and I was trying to parse through it, pull out various pieces of data like donor names, numbers, things like that. And the list that I was constructing in memory was actually too big and caused my IDE to crash when I'd try and run it in the terminal. So, you know, something where you're constructing an incredibly large object or array of objects is something that could cause a memory issue in actual production.
JC_HIATT: But now you can just manually specify a much bigger size if you're doing a project like that, right?
PAIGE_NIEDRINGHAUS: Yeah. That was what I ended up doing. I found an NPM package called Event Stream, and I just increased the memory size that my local terminal was running with. Yeah. That let me parse through the really large files with node.
JC_HIATT: Yeah, I had to do a really similar project earlier this year with voter data from my state, and the data, just like in your case, was I think two or three gigabytes, a whole bunch of CSVs. I was actually working in Ruby, but it's the same concept: I had to split out the CSVs and then bring in a library to kind of give me those over time. Apparently this is a problem in lots of languages.
PAIGE_NIEDRINGHAUS: Yeah, I ended up doing the same kind of thing with Java just to see what the differences were. And I'm not a huge fan of Java. I really like JavaScript a whole lot better, but it was much better equipped to handle and process such large amounts of data in pretty short amounts of time, I've got to say.
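The streaming approach they're describing looks roughly like this in Node. This sketch uses the built-in readline module rather than the event-stream package Paige mentioned, and the file name is made up:

```js
const fs = require('fs');
const readline = require('readline');

// Stream the file one line at a time instead of loading gigabytes at once.
const rl = readline.createInterface({
  input: fs.createReadStream('census-donors.csv'),
  crlfDelay: Infinity,
});

let rows = 0;
rl.on('line', (line) => {
  rows += 1;
  // Pull out the fields you need here rather than keeping every row in memory.
});
rl.on('close', () => console.log(`processed ${rows} rows`));
```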
JC_HIATT: Cool. So what else are you excited about with this release?
STEVE_EDWARDS: So one thing that piqued my interest quite a bit is the diagnostic reports and the integrated heap dumps. As a developer, I constantly live in the debugger, trying to figure out what's going on. So in particular, I'm looking at the ability to format your diagnostic summaries and stuff like that, to sort of make it a little more readable. So can you talk about that?
PAIGE_NIEDRINGHAUS: Sure. Like you just said, there are a lot of diagnostic reports that we can now either trigger when certain events occur or generate from the command line. Depending on what issue you want to investigate, there are different flags and things that you can pass in through the command line terminal to tell Node what you want it to get out. Let me see if I can find some links to that. I agree, though, that's something that I'm really excited about. Yeah, so here's some stuff directly from the Node website: it will deliver a JSON-formatted diagnostic summary written to a file. You can see things like JavaScript and native stack traces, heap statistics, platform information, resource usage, etcetera. You can trigger them on things like unhandled exceptions, fatal errors, and user signals, in addition to triggering through API calls. So from the command line, you would use flags like the one they've called experimental-report, and they have other ones, like report-uncaught-exception. So basically it will just format out for you a really nice stack trace of all types of things. The JavaScript heap, so you'll see things like total memory and used memory, as well as the memory limit. You'll see resource usage. You'll see, what else do we have... environment variables that are there when these issues are happening. Okay, here are all the different ones. So we've got experimental-report, which enables this feature. We've got report-uncaught-exception, which enables reports to be generated on uncaught exceptions, which is useful for native stack and other runtime environment data. We have report-on-signal, which enables reports to be generated upon receiving the specified signal to the running Node process. Then it can give you reports on fatal errors, the report directory for the location the report will be generated in, the report filename if you have a specific naming convention you want the report to follow, and then a report signal, which can set or reset the signal for report generation. So it's really attempting to make error logging and error handling much more convenient than it has been, you know, where we've previously relied on a lot of logs in terminals and a lot of error tracking software, some of which I know advertises on JavaScript Jabber. Node's trying to build in help with that debugging process.
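Pulling those flags together, a run might look something like this (the flag names match the ones Paige lists from the Node 12 docs; the script and directory names are illustrative):

```js
// app.js — run with, for example:
//   node --experimental-report --report-uncaught-exception \
//        --report-directory=./reports app.js
// An uncaught exception then also writes a JSON diagnostic report
// (stack traces, heap statistics, platform info, environment variables)
// into ./reports, alongside the usual stack trace on stderr.
setTimeout(() => {
  throw new Error('boom'); // triggers the uncaught-exception report
}, 100);
```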
STEVE_EDWARDS: So then the idea is maybe that, uh, with the native stuff that's built into Node, you may not need some of the external services. They're sort of trying to pick up the slack that the external services have been providing to this point. Is that right?
PAIGE_NIEDRINGHAUS: It sounds like they're trying to. Yeah, I wouldn't say that they're necessarily going to replace them because sometimes having a GUI or having, you know, a nice dashboard will still be extremely beneficial in identifying bugs and fixes. But being able to kind of deep dive into what was going on inside of your server or inside of the code when the error occurred, which is usually the hardest part is figuring out exactly what happened that made it freak out in this way that you weren't expecting, will help kind of demystify that, hopefully make recreation of the errors easier, and then the developers will hopefully be able to fix them faster and pinpoint the actual root cause in a quicker way.
STEVE_EDWARDS: Yeah, that's awesome. Any helping to clarify on errors is always a welcome tool for a dev.
PAIGE_NIEDRINGHAUS: Oh, absolutely.
STEVE_EDWARDS: So then you've also listed some...I see some changes here to the HTTP parser.
PAIGE_NIEDRINGHAUS: Yes, once again, we're increasing speed. But yes, the current HTTP parser, which Node has been using probably since close to its inception, has not been the easiest thing to maintain or build upon in the past years. So there's a new up-and-comer called llhttp, which is a port of the original, but it's been ported over to TypeScript. It's then run through another thing called llparse, which will generate the C or the bitcode. It turns out that this transformation to llhttp is actually faster than the original parser by about 156 percent. It's actually written in fewer lines of code, and all the performance optimizations that had to be handwritten before are now automatically generated in this version.
AJ_O’NEAL: It sounds like what you're saying is you go from TypeScript to WebAssembly JIT style code. Is that what you're saying?
PAIGE_NIEDRINGHAUS: Yeah, the LLParse is another module that will run all the TypeScript of LLHTTP through it. And then that actually is what generates the C or the bit code.
AJ_O’NEAL: So C or bit code, I'm a little confused there. Because bit code meaning basically WebAssembly for the V8 compiler, C would be very different. So why would it be one or the other? Is it something that happens because of a flag you pass to when you build or is it something that, how does that work?
PAIGE_NIEDRINGHAUS: Well, from what I understood reading the articles myself, I think you could choose which thing you wanted it to be outputted to, depending on what your end goal was or where the end place was that this was going to be running. Let me see. Yeah, so according to the GitHub for LLHTTP, the library llparse is used to generate the output C and or bitcode artifacts, which could be compiled and linked with the embedders program like Node.js. Does that make more sense at all?
AJ_O’NEAL: To me, that sounds like this is not just intended for Node, but rather this is a parser that's written in TypeScript that can be exported for various projects and Node might be using either the C version or the bit code version. Is that, am I making correct connections?
PAIGE_NIEDRINGHAUS: I think that you are, yeah. Yeah, I'm reading exactly from the llparse GitHub now, and it just describes itself as an API for compiling into C output and/or LLVM bitcode. So I guess it can do both.
PAIGE_NIEDRINGHAUS: That's cool.
PAIGE_NIEDRINGHAUS: Yeah,
AJ_O’NEAL: That HTTP parser has a very rich history. If I'm not mistaken, Ryan Dahl originally wrote it for Ruby, and then it came over to Node when he started Node, and it has been very faithful since then. But I have also heard that if you can avoid the context switch out to C, you can get a lot of improvements in JavaScript. So at one point Google was trying to get the DOM to be in JS, and it actually was faster, but the problem was that then you'd have real JavaScript objects in the DOM. Like, for example, the node lists would have all the same properties as an array, and Array.isArray would return true for a node list. And it turned out that that broke the web entirely, and so they abandoned the project, if I remember the story correctly. So counter-intuitively, you think, like, well, C, it's gonna be faster. But if you can get the bitcode and get it in the JIT engine, then it turns out that that's not always true; JavaScript can be faster than C if you get it to JIT, which is just-in-time compile, for those of you that don't know what JIT is and are listening to us say JIT over and over again. Meaning that it doesn't-
PAIGE_NIEDRINGHAUS: Interesting.
AJ_O’NEAL: It doesn't make decisions... a JIT compiler doesn't make decisions at compile time. Like, when you write a C program, you have to say, I want to run these certain optimizations, and that's it, it's done. With a JIT, the code runs, and the compiler analyzes the running code and then speeds it up even further based on how the code is actually used at the time it's being used, rather than guessing at how the code might be used when it's compiled. So just a little background for anybody that didn't know that.
PAIGE_NIEDRINGHAUS: That's really good to know. That's great
AJ_O’NEAL:. I think it's awesome.
PAIGE_NIEDRINGHAUS: Yeah, I'm really excited. They've had it behind the experimental flag in node 11. So bringing it out to the forefront for node 12 and just putting it out there for everybody to use is really going to put it to the test and we'll see how well it can actually perform for everybody's different needs.
This episode is sponsored by sentry.io. Recently, I came across a great tool for tracking and monitoring problems in my apps. Then I asked them if they wanted to sponsor the show and allow me to share my experience with you. Sentry provides a terrific interface for keeping track of what's going on with my app. It also tracks releases, so I can tell if what I deployed makes things better or worse. They give you full stack traces and as much information as possible about the situation when the error occurred to help you track down the errors. Plus, one thing I love: you can customize the context provided by Sentry. So if you're looking for specific information about the request, you can provide it. It automatically scrubs passwords and secure information, and you can customize the scrubbing as well. Finally, it has a user feedback system built in that you can use to get information from your users. Oh, and I also love that they support open source, to the point where they actually open-sourced Sentry, so you can self-host it. Use the code "devchat" at sentry.io to get two months free on Sentry's small plan. That's code "devchat" at sentry.io.
AJ_O’NEAL: So there's something that was snuck into Node 10. Like, first of all, I don't consider odd versions worth talking about, because they're like the beta experimental versions. So technically this was actually implemented in Node 10, and it's been there the whole time through Node 11. But I think, I mean, I could be wrong, I think most people play it safe and only go with even-numbered versions that have some sort of reliability and maintainability guarantee. So many people did not know about it, because it was kind of sneaked in halfway through the 10 life cycle. To me this is super exciting. Node 12 is the first legit maintained release... or wait, is 12 long-term maintenance or no?
PAIGE_NIEDRINGHAUS: It will be entering long-term maintenance in October, actually. It was just released in April of this year.
AJ_O’NEAL: Okay, yeah. So this is like dependable, reliable, will be supported, will be maintained. So this is the first version that starts with ECC and RSA bindings to OpenSSL being exposed in Node. So you no longer have to shell out to the openssl program to generate your public-private keypairs. And there's a whole ton of projects around getting keypair encryption working in Node that just had various, various problems with them, and now you don't need them. That's something I really think is cool about Node 12, even though technically it's been there in 11 the whole time, and it was introduced midstream in 10 without much fanfare or announcement. It was treated more like a bug fix, that the API didn't exist, than a new feature. Did you know about that?
PAIGE_NIEDRINGHAUS: Very cool. No, I didn't actually. So that's very cool to hear about.
AJ_O’NEAL: There was a library called URSA that a lot of people use that has since gone defunct because it just fell out of maintenance. And technically you could always generate elliptic curve keys in Node, but It was under an API that if you were looking for it, you wouldn't have found it. It was kind of happenstantial to the Diffie-Hellman exchange APIs. Anyway, that's probably a little bit too much in the weeds. I'm sorry. I'll shut up now.
PAIGE_NIEDRINGHAUS: No, I think it's super cool. I'm always interested to learn more about Node, you know, actual low-level Node, instead of just all the libraries that we run on top of it and the frameworks like Express and Koa.
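The API AJ is describing landed as crypto.generateKeyPair and its sync variant. A minimal sketch (the curve and encodings shown are just one reasonable configuration):

```js
const { generateKeyPairSync } = require('crypto');

// Generate an elliptic-curve keypair entirely in-process,
// with no shelling out to the openssl CLI.
const { publicKey, privateKey } = generateKeyPairSync('ec', {
  namedCurve: 'prime256v1', // OpenSSL's name for NIST P-256
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

console.log(publicKey);  // -----BEGIN PUBLIC KEY-----...
console.log(privateKey); // -----BEGIN PRIVATE KEY-----...
```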
JC_HIATT: Can you speak a little more about the worker threads? I see you wrote something down about that, and I'm just curious, first off, what exactly worker threads are for, and then if also you can speak to how they're relevant to this release.
PAIGE_NIEDRINGHAUS: Yeah, that's a great question, actually. Worker threads have been around in Node for a while now. I believe that they were introduced in Node 10 as well, behind the experimental flag. Prior to Node 11.7, you had to pass the experimental worker flag on the command line before a worker thread would run. And what worker threads are really beneficial for is CPU-intensive JavaScript operations. You know, Node is really good with its own asynchronous I/O operations, but worker threads are there for the things that just eat up all of your memory. They can help alleviate the load and let your program keep running pretty efficiently while doing their own operations on the sideline, and then returning to the main thread once they've finished calculating data or crunching numbers or doing whatever it is that they're doing in their own little thread. I personally have not used them very much, so I can't speak to a lot of real-world use cases that I've found for them. But I'd love to hear if any of you have had the opportunity to really see what they're made of.
JC_HIATT: It sounds like that would be a nice feature for reading a two gigabyte CSV file.
PAIGE_NIEDRINGHAUS: You never know I guess I personally haven't had enough experience or really looked into it enough to know how to operate worker threads in a way that makes sense or a way that makes them really efficient. But I agree, it could be great for parsing really large data sets like that.
JC_HIATT: Anyone else have experience with the worker threads?
STEVE_EDWARDS: Nope, not here.
AJ_O’NEAL: I don't remember who gave this talk, but at one of the local meetups a couple of years ago, somebody gave a talk, I think it was on PapaParse, which is in-browser. I don't know if this works in Node or not. I would think it probably does, but I am not certain. But yeah, it is specifically for the case of the two-gigabyte CSV, as was mentioned, and it does some... I don't remember exactly how it uses the worker threads, I just remember that it does some tricksy stuff in the background, which makes sense. Like, CSV is something that you can do line-by-line parsing on, so you can chunk out like a few megabytes at a time. And when JavaScript is using the main thread, that's like, you killed it, like there's nothing else that can go on. So if you have something that you have to parse, and you can just make a low-cost copy operation to the worker thread, and then when you've got the results, a low-cost copy operation back, then you're doing pretty well. The one caveat to that is that JSON.stringify and JSON.parse are really, really expensive. Like, we don't notice it for little tiny objects, but you start to go on things that are multiple tens of kilobytes or into megabytes... And a lot of times, the way you want to communicate with a worker thread is you want to send JSON back and forth, 'cause that's so nice. But if you start to scale the size of the JSON object you're sending back, then you actually end up in the same problem, where the JSON parser ends up doing all the work in the thread. So a little caveat to be aware of there.
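A minimal sketch of the pattern being discussed (the workload here is made up; in Node 12 the worker_threads module no longer needs a flag):

```js
const { Worker } = require('worker_threads');

// The worker body is inlined with eval:true to keep the sketch in one file;
// normally it would live in its own worker.js.
const workerSource = `
  const { parentPort, workerData } = require('worker_threads');
  let sum = 0;
  for (let i = 0; i < workerData.upTo; i++) sum += i; // CPU-heavy loop
  parentPort.postMessage(sum);
`;

const worker = new Worker(workerSource, { eval: true, workerData: { upTo: 1e9 } });
worker.on('message', (sum) => console.log('sum:', sum)); // main thread stayed free
worker.on('error', (err) => console.error('worker failed:', err));
```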
PAIGE_NIEDRINGHAUS: Very good to know. Let's see, another thing that's pretty cool about Node 12 is that native module creation and support is getting easier for Node.js. So if you are a developer who is building modules for the Node ecosystem, I guess any of the npm packages, things like that, there is a new API release that they've just come out with that makes the creation, and then the support for all of the different binaries a Node developer could want to support, much easier. So in the past, they really had to keep track of which distributed binaries they were supporting and wanted their modules to be able to support. And with this new release of the N-API, it largely abstracts that away, so it gets much closer to the experience of a developer who is maintaining more of a pure JavaScript module. There's a lot less setup and configuration overhead for the developers to keep track of.
AJ_O’NEAL: Okay, say that one more time. A pure JavaScript module is optimized for the developer experience, huh?
PAIGE_NIEDRINGHAUS: Well, there's less in terms of the developers having to keep track of which binaries they support. You know, it's the V8 engine and as long as the V8 engine can parse whatever kind of JavaScript code they're running, it should probably just be good. But with the different versions of node, some of them either cannot run or do not support particular node modules. You know, you'll get them-
AJ_O’NEAL: Are you referring to the native C++ node modules that are compiled to binaries?
PAIGE_NIEDRINGHAUS: Yes.
AJ_O’NEAL: Okay. So the N-API. Okay.
PAIGE_NIEDRINGHAUS: Yeah. So the N-API basically takes care of the compiling down and, you know, I guess the different versions, making those node modules compatible with the different versions of Node that are running out there.
JC_HIATT: So is the N-API for building a C++ module, or is it just for building a JavaScript module?
PAIGE_NIEDRINGHAUS: I believe that it is for building JavaScript modules. Let me see.
STEVE_EDWARDS: Okay.
AJ_O’NEAL: I believe that N-API refers to C++ modules that are pre-compiled for Node, or compiled at install time, and the N-API provides a way that makes that easier. So if you've ever worked with Node on Windows, you go npm install some thing enough times, different things, you're going to get these error printouts saying could not compile. And a lot of times it's an optional dependency, where there's a pure JavaScript implementation and then there's the C++ implementation that's faster or more optimized. And when you install it on Windows, it just fails, because you don't have the Visual Studio compiler installed, whereas on like Mac or Linux, you probably just happen to have the GCC compiler already installed. And so before, most things had to compile on your computer, or the author had to do a complicated build step and make sure that they published 15 different binaries for all their supported versions of Node. With N-API, they write the C++ code, and it probably is insignificantly slower, in the sense that there's some sort of wrapper layer, or maybe that gets abstracted away with macros at compile time, but there's a layer so that the binary they produce, they can compile and publish once, and the toolchain becomes simpler. So they don't have to do 17 different compiles to support several different versions of Node. They could just kind of compile once for Windows, once for Linux, once for Linux on ARM, once for Mac, and be done with it. And the npm tooling is enhanced so that it supports pre-publishing the binaries as part of the npm process, so that when I go to npm install on Windows or on Mac or on Linux, I don't get an error saying, oh, we couldn't find your C compiler, or you didn't have the right dependencies for this. Instead of just failing and falling back to the pure JavaScript implementation, it's saying we're going to go download this pre-built thing. There are different pieces of N-API: the API with the C++ wrapper is one piece, the build step support that's integrated into npm to make the compiling simpler is another piece, and then pre-publishing the binary is another piece. But with it all together, it means I write some C++ that compiles down to a Node binary, I publish the binaries to npm, other people can download those binaries, and between whatever point releases are supported as part of N-API, that one binary will work for many customers on many versions of Node.
PAIGE_NIEDRINGHAUS: Thank you for that incredibly detailed breakdown of how N-API works. That was much better than I ever could have explained it, and you know much more about it than I do.
AJ_O’NEAL: This has been a problem for me personally, because I've tried to go the route of having compiled modules. Like, the biggest example of this in the Node ecosystem that everyone deals with... well, not everyone, but many people deal with, is SQLite. SQLite is probably the most popular; it's written in C, it's not written in JavaScript. And then the URSA that I mentioned earlier, that wouldn't have become obsolete... well, it would have become obsolete because Node advanced and provided what it did, but the maintainer constantly had to recompile, recompile, recompile and, like, change APIs. And there was just so much fatigue over it that eventually it switched hands once, it switched hands twice, and then it just stopped being maintained, because it was such a burden to fix the changes in it. So, although when I say me personally, I don't mean that I was part of one of those processes, but I've had to deal with the effects of it and had to try to figure out how to work around these problems. So when I heard about N-API... I don't really know that much about it, but my understanding is that those are the problems it's meant to solve: to make it easier for people to not have to have so much tooling to take advantage of something like SQLite or LevelDB or one of these other embedded databases or optimizers that are written in C.
PAIGE_NIEDRINGHAUS: Cool, great example. And actually, those problems that you were speaking of, and the toolchain, are a great segue into one of the last things that I have to talk about with Node, which is that there are some new compiler and minimum platform standards. So if you're on macOS, you're probably going to be fine. The GCC that's now required is a minimum of 6, and then there's glibc 2.17 on platforms besides Mac and Windows, so for, I guess, all the Linux platforms. Mac users need at least Xcode 8 and macOS 10.10, which is Yosemite, so hopefully most of our developers will be far past that at this point. And then on Windows, if you've been running any of the Node 11 builds, you'll already be up to speed; those are just the same for Node 12 as for Node 11. For Linux, the binaries that they're supporting are Enterprise Linux 7, Debian 8, and Ubuntu 14.04. And then if you have different requirements, you can always go to the Node website and find out more about what exactly you need to install or compile to be able to run the latest versions of Node.
AJ_O’NEAL: Somewhat tangential, do you know anything about Chakra Core? Is that project still active? Is that still something that is going to provide some coolness in the future? Is that kind of dying out?
PAIGE_NIEDRINGHAUS: I am not familiar with it actually. Could you fill me in on what you know so far about it?
AJ_O’NEAL: So Chakra Core was a project to... I mean, Mozilla tried to do something similar with, like, the JägerMonkey or Jäger Node stuff, and then Microsoft took their thing. So V8 is Google, Chakra is Microsoft, and Microsoft was supporting getting Node to run on the Chakra engine. But then Microsoft also just announced that Edge is switching over to V8. From an academic perspective, it was way awesome. There were some practical improvements that I thought Chakra was possibly going to add to Node, to possibly give better performance than V8. But then with Microsoft moving from Chakra to V8... and V8, I think, has kind of adopted some of the improvements that Microsoft had researched and proved to be effective. So I'm curious if that project is gonna continue, or if there's new stuff that's gonna come out of it. But I am, I guess... worried isn't quite the right word, maybe sad, because I'm thinking it might just end up dying out, since everybody's just gonna use V8 now, probably.
PAIGE_NIEDRINGHAUS: You're probably right. I didn't come across anything related specifically to Chakra Core when I was doing research on these new improvements to Node. So from what I can gather, it seems like Node is leveraging V8 pretty heavily and it's going to continue to do that instead.
AJ_O’NEAL: I just noticed on the download page, it's still listed down at the bottom. So that's why I was curious.
JC_HIATT: So anything else to add before we go to PIX?
PAIGE_NIEDRINGHAUS: No. Like I said, I'm really excited about these new changes that are coming to Node. And I think it's going to be some really great improvements for everybody who's developing with it. So hopefully they'll keep turning out great features and great improvements.
JC_HIATT: Yeah, and thanks for coming on and kind of filling us in on this.
PAIGE_NIEDRINGHAUS: Absolutely, I'm really happy to.
AJ_O’NEAL: This has been exciting stuff to learn about.
PAIGE_NIEDRINGHAUS: Glad to be a part of the conversation.
One of the things we talk a lot about at the different conferences, and in the different things I'm working on, is open source software. A lot of people have a lot of ideas around open source software, but we don't often think about the people who are building it and trying to maintain it. I had a friend, John, who's been a guest on JavaScript Jabber a couple of times. He came to me and said, "Hey Chuck, I wish there was a show about sustaining open source." That really hit me where I live. I have a few other friends who are working on projects related to this, so we all got together and put together a show called Sustain Our Software. You can find it at sustainoursoftwarepodcast.com. It's a place where several people who are passionate about open source come together and have conversations about how it can be sustained and maintained, and what we can do to help these maintainers continue to deliver the value that we build our software on. Most of the software we're building is based on open source, so it's important to us to have it maintained and taken care of. Come check it out. It's been really interesting to listen to the conversations they're having with people who are working in it all the time, and to hear what they have to say about it. Once again, that's at sustainoursoftwarepodcast.com.
JC_HIATT: Cool. So we can go to picks. I can go first. I have a few picks today. First off, the AWS Amplify framework. If you're not familiar with it, it's been around for a while, but I've really been digging into it lately. I'm doing a consulting gig on it right now, and I've been learning a lot about Amplify and, in a broader context, serverless and all that stuff. It's really, really cool, and at minimum it's kind of magical. You can have authentication with a GraphQL API set up in ten minutes, and they even give you React components that work out of the box for auth flows: login, sign up, forgot password, all that stuff. You get email sign-up or social sign-up, things like that. It's really, really impressive, and it's all driven from a really nice CLI. So definitely check that out. Next up is a book. I'm a really slow reader; I've been working on this one for about two months, just a little at a time. It's by Jordan Peterson, called 12 Rules for Life: An Antidote to Chaos, and it's been a really good book to read in terms of facing adversity and bringing order to your life. And lastly, I'm going to shamelessly pick an upcoming workshop I'll be doing. I've got two workshops on React and Gatsby coming up in San Diego at the beginning of November, so keep an eye on my Twitter if you're interested in maybe attending one of those.
STEVE_EDWARDS: Cool, I'll go next. I only have one today, but I think it's pretty monumental in the world of cartoons. Yesterday I saw a few articles about The Far Side possibly coming back. Gary Larson quit drawing The Far Side quite a number of years ago, and he didn't allow people to distribute his cartoons on the web, so the only way you can really see them is if you have the books. But The Far Side website had a message yesterday about the comic possibly coming back. People are trying to figure out what that means: whether he's going to be drawing new cartoons, or whether he's just going to have an online presence for all of his many previous ones. As someone who is a huge fan of that cartoon, I'm really looking forward to seeing what comes out of that, or what its reincarnation looks like, shall we say.
PAIGE_NIEDRINGHAUS: That's awesome. That's very cool. I have a couple of picks that should probably be pretty popular; neither one of them is very developer-focused. The first one I'd like to plug is the espresso maker I have at home. It's called the De'Longhi Magnifica XS, and it's an automatic espresso and cappuccino maker, and it is amazing. You just need whole beans: you put them in the top, you press a button, and you've got some of the best coffee you will ever have. So I would highly recommend it. It's a little bit of an investment, but it's been such a great find for my husband and me, and for pretty much everybody who comes to stay with us at one time or another. The other thing I would love to plug is that I'll be speaking at Connect.Tech, which is happening in Atlanta in the middle of October. I'll be talking about responsive web design, and specifically how to do responsive design with ReactJS. So if anybody's attending, it would be awesome if you came out to see me speak there.
JC_HIATT: Yeah, I'm looking forward to meeting you, Paige. That'll be great. Hey, AJ, you have some picks for us?
AJ_O’NEAL: Oh yeah, I'll go, I'll go. So first of all, I'm going to pick Paige coming on the show today, because, as much as it hurts my heart, I think she's kind of shown that the nail is in the coffin, and that ECMAScript modules are going to be something I have to start using, because otherwise all my CommonJS modules might stop working. And so, as much as it hurts, I think my resistance might be... I can't bring the words out of my mouth. I can't say it, but we know what I'm thinking in my head, and that's largely thanks to Paige. But I'm not ready to talk about it yet, other than that. I'm also going to pick Field of Hopes and Strings. I don't even know what genre it would be, but of course it's game-inspired music, because that's what I pick every week, so check it out on Bandcamp. To me it doesn't sound like game music, other than it's got a couple of game-music motif type things, but it's one of those good spans of spectrum between stuff that's more classical-ish but definitely still electronic-ish, sprinkled in and whatever. I'm also going to pick Link's Awakening, which is kind of a weird pick because I don't get it until the end of the week, but I'm assuming I'm going to love it. I played the original black-and-white version on the Game Boy, and I own the color version, but I hadn't actually played more than two or three of the dungeons before something else came up and I had to drop it. But I know my wife is going to like watching me play this, and I get to play it on the TV rather than having to play it on a handheld. So I'm super excited about Link's Awakening, and I expect to report that I love it, that I agree with the changes they made, whatever they are, and that they kept true to the source material. I'm also going to pick Dune. Now, if you haven't read Dune, plug your ears for a second, because I'm going to give away a couple of spoilers, but it's okay: everybody dies. It's just so frustrating, because I get a third of the way into the book thinking, okay, cool, I'm interested to see how this book is going, things are developing well, I'm getting attached to the characters... and they're all dead. And then the next two-thirds of the book is almost like a completely different book. I mean, at the end of book one; because Dune itself has Book One, Book Two, and Book Three as its section labels, even though it's also a trilogy and there are literally three books, which is a little confusing. But inside the first book of Dune, the sections are labeled Book One, Book Two, Book Three. Anyway, that was a little bit frustrating for me, because I thought the story was going to go in one direction, and obviously it wasn't, because everyone's dead. That aside, I think it's still a very interesting book, and it's one of those pieces of literature that, if you're into literature, you should probably be familiar with. In the same way that Star Wars changed every movie that was ever made after it, because of the way it broke how cinema worked (for example, having the credits at the end; Star Wars was the first movie to do that, and they had to pay actors' guild fines in order to do it), Dune breaks new ground in literature for its time period, and you can see that Star Wars was heavily influenced by Dune as you get through the book.
The whole Jedi thing is the Bene Gesserit; it's very, very similar. And you know what they say: good artists copy, great artists steal. George Lucas stole the idea to the point of making it his own, completely separate thing, and I think that's great. Also, just like Star Wars, because it was the first of its kind, it's the worst of its kind. And I'm waiting for the hate emails to comment on that. Dune is a terrible book, but it broke new ground, and it paved the way for the sci-fi genre to just explode in new directions. You can see its influences everywhere, and it doesn't feel typically sci-fi. I love Star Wars, but you've got to be honest: they invented 70% of the technology in order to create the movie. Every movie that came after was better, technically speaking, because it was iteration two; it was generation two of the technology. Jurassic Park is just way better than Star Wars, end of story. I'm sorry, that's the way it is. But the technology was better; it was refined, it was perfected.
STEVE_EDWARDS: Well, it was also put out there because George Lucas basically gave away a lot of that technology. He just said, here, you guys can use it; he didn't license it and make people pay for it. And that's what's credited with a lot of the sci-fi explosion: he invented all this technology and then just gave it to everybody and said, hey, go ahead and use it.
AJ_O’NEAL: When was the last time you watched a movie that didn't have Lucasfilm, Industrial Light & Magic, and Skywalker Sound at the end of the credits, though?
PAIGE_NIEDRINGHAUS: That's just what I was thinking. They're involved in everything nowadays.
AJ_O’NEAL: Pixar, Disney, Marvel. Unless it's a pure drama, I don't know the last time I saw a movie with special effects that didn't have Lucasfilm, Skywalker Sound, or Industrial Light & Magic at the end of the credits. There are like five or six Lucasfilm companies, at least three, I don't know if it's five, at the end of all the credits.
STEVE_EDWARDS: So AJ, here's a couple things about Dune. One, thanks for the spoiler, because I know that book just came out. And so I haven't had a chance to read it yet. Second, do you know what the inspiration for Dune was?
AJ_O’NEAL: I do not. I think it was weed, because the dude is obviously stoned.
STEVE_EDWARDS: No, actually. It's a place down on the southern coast of Oregon, which is where I'm from, called the Oregon Dunes. Back in 1957, Frank Herbert came down because he'd read an article about how people were trying to plant grass to keep the dunes from shifting all over the place. That's what sort of kicked off his imagination, and he started writing Dune once he saw it.
AJ_O’NEAL: Well, that is pretty cool.
PAIGE_NIEDRINGHAUS: That is cool.
JC_HIATT: Paige, if anyone wants to connect with you online, where's the best place to do that?
PAIGE_NIEDRINGHAUS: Well, I'm on Twitter at pniedri, that's P-N-I-E-D-R-I, and I also write regularly on Medium, where you can find me at paigen11.
JC_HIATT: Cool. Yeah, we'll put links to all this in the show notes as well.
PAIGE_NIEDRINGHAUS: Awesome.
JC_HIATT: Well, thank you so much for coming on and chatting with us today.
PAIGE_NIEDRINGHAUS: Absolutely. Thank you so much for having me. This has been really fun.
JC_HIATT: Yeah. Yeah, likewise. Cool. All right. Well, we'll see everyone later.
AJ_O’NEAL: All right.
PAIGE_NIEDRINGHAUS: See ya.
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y dot com to learn more.