JSJ 475: DevOps for the JavaScript Developer

In recent years the term DevOps has become ubiquitous - you'll find DevOps engineers in almost every tech organization. But what does DevOps actually mean, and how does it differ from pre-existing roles like system and network engineering or database administration? In this episode our own Aimee Knight, who is currently expanding her role into DevOps, answers these questions and provides further information about the field.

Show Notes


Panel
  • Aimee Knight
  • AJ O'Neal
  • Dan Shappir

Transcript


DAN_SHAPPIR: Hi everybody and welcome to another episode of JavaScript Jabber. This is going to be another all-panelist episode with our very own Aimee Knight talking to us and explaining stuff about DevOps, I think. Correct me if I'm wrong. 

AIMEE_KNIGHT: Hey, hey from a blizzardy Nashville, trapped in my house for at least a week. 

DAN_SHAPPIR: Oh, well, it's kind of blizzardy here in Tel Aviv as well. It's a freezing 55 degrees, but that's Tel Aviv's winter for you. How about you, AJ? How is the weather over there? 

AJ_O’NEAL: It's on par for Utah, but it is quite snowy. 

DAN_SHAPPIR: To be honest, I kind of envy you. I mean, it never snows here, so you kind of miss snow, I guess you could say. Anyway.

AIMEE_KNIGHT: I don't mind it as long as there are no emergencies and I don't need to go to a hotel or something and lose power.

DAN_SHAPPIR: Yeah, I can't imagine. 

 

This episode is brought to you by Dexecure, a company that helps developers make websites load faster automatically. With Dexecure, you no longer need to constantly chase new compression techniques. Let them do the work for you and focus on what you love doing: building products and features. Not only is Dexecure easy to integrate, it makes your website 40% faster, increases website traffic, and better yet, keeps your website running faster than your competitors'. Visit dexecure.com slash JSJabber to learn more about how their products work.

 

DAN_SHAPPIR: So let's get going. We're going to be talking about DevOps. So what is it? What's this thing all about? Can you tell us, Aimee? 

AIMEE_KNIGHT: Man, I probably won't give like the Wikipedia definition, but to me, I mean, DevOps is a lot of what people probably are doing already. I think the definition I've seen more than anything is, you know, a lot of times the dev team and the operations team are one and the same, or they work together to get the code that is working on someone's machine out to production. I would also say, the way that I explain it to people is: before getting into DevOps, I mostly focused on building out the applications. And now I am mostly focused on building out the physical resources, the infrastructure that the applications run on. 

DAN_SHAPPIR: Is it called virtual? 

AIMEE_KNIGHT: Virtual. So it's physical, virtual. Well, at the end of the day, it's all physical, but... 

DAN_SHAPPIR: Yeah, there is an actual physical CPU somewhere down there. That's what you're saying. 

AIMEE_KNIGHT: Yes. 

DAN_SHAPPIR: Underneath everything else, underneath all the stacks of software that we put on top of it. But I do actually have to ask you something about that. I mean, we've been deploying software like forever. I've been in this field longer than I would care to admit. And, you know, all this time we've been building software, deploying software and whatnot, yet the term DevOps did not really exist until a few years ago. So what changed? Where did this come from? 

AIMEE_KNIGHT: I mean, I think, you know, 

AJ_O’NEAL: as I said, the blockchain, 

AIMEE_KNIGHT: you have people who were infrastructure engineers who are now kind of what you would call DevOps engineers, where they're working with, you know, say the ops part is the infrastructure part and the dev part is the application part. 

AJ_O’NEAL: My understanding is that before we called it DevOps, we called it the guy that does the Linux stuff. 

AIMEE_KNIGHT: The system administrator or the database administrator. 

AJ_O’NEAL: And then when DevOps was first introduced, I think it was supposed to be something like how to make developers lives suck less so that stuff gets out and deployed more quickly. And then I think it became blockchain. 

AIMEE_KNIGHT: I guess we should back up too. You know, why we decided to talk about this was a side conversation that we had last week, about me ranting a little bit on people conflating service-oriented architecture and microservices. And I keep making the divide of, you know, there's the application side and there's the infrastructure side, and service-oriented architecture is the services themselves and making those so that they can scale independently. And the microservices part is making sure that the physical infrastructure can scale. So in order to have microservices, you first need to have service-oriented architecture, or if you want to sound cool, you call it SOA. But people- 

AJ_O’NEAL: I do not want to sound cool. 

AIMEE_KNIGHT: Please God, no. I never had to do that. So. But I think people like to use these big words, and it's important to understand the problem they're solving, what they actually are, stuff like that. 

DAN_SHAPPIR: It seems to me, though, that I actually haven't heard the term SOA, or service-oriented architecture, in quite a while now. 

AIMEE_KNIGHT: Because I feel like people are calling it microservices, but it's not microservices. They're different. 

DAN_SHAPPIR: So can you give a concrete example of something that you would call SOA but not a microservice and something that you would call a microservice but not SOA? 

AIMEE_KNIGHT: Yeah, so one example for service-oriented architecture would be, you know, you're splitting up your application so that it runs on, let's say, separate Docker containers. But you could potentially have service-oriented architecture but not microservices if you ran all of those containers, let's say you're using Kubernetes, on the same node, in the same pod, in the same cluster, which you could do. That would not be a good idea in most cases, but you could do that. So, backing up to talk about Kubernetes a little bit at a very high level: you have a cluster, which is made up of nodes, and those nodes contain one to many pods. And each pod, in theory, in most cases contains one container. However, there's all kinds of things. There's like smaller services that they call sidecars, and you could run multiple containers in a pod. But in most cases, again, you want one container per pod, and that node will contain one to many pods depending on how much you need to scale horizontally. 

AJ_O’NEAL: All right, you're still... too many buzzwords. Let's break this down into just the generics because I think you're using the branded names. 

AIMEE_KNIGHT: Okay, so back to Kubernetes and trying to keep it super basic because I'm still learning it. And this is just like Kubernetes 101. So you have the cluster, which is like an abstraction. It's a concept. It's not an actual thing. The node inside of a cluster is a thing. A node is a virtual machine. So in Amazon that's an EC2 instance; in GCP, which is what I use mostly, they call it Compute Engine. It's literally just a VM. Like if you're looking in the GCP console and you're looking at your cluster and you click on one of the nodes in the cluster, the GCP UI will navigate you to the virtual machines page. So the node is an actual physical thing. Going to the next layer.

DAN_SHAPPIR: I love it that a virtual machine is a physical thing. It's just. 

AIMEE_KNIGHT: Well, I mean, you get into like virtualization, which is Docker, but we won't go down that because I'm trying to keep it simple, but then 

AJ_O’NEAL: you've already thrown out so many acronyms. I don't think there's that many letters in the alphabet. 

AIMEE_KNIGHT: Okay. Well, we're keeping it to a cluster, a node and a pod. So we talked about the cluster. We talked about the node. Then there's the pods, and you could run one to many pods within a node. The pod is a concept, it's not an actual thing. A pod is like an abstraction layer that wraps your container. You can sort of think of a pod and a container as the same thing, a Docker container. However, a pod is like an API layer for Kubernetes so that Kubernetes doesn't have to interact with the container directly.
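
[Editor's note: a rough mental model of the hierarchy Aimee is describing, written as plain TypeScript types. These are illustrative names only, not the real Kubernetes API objects.]

    // Cluster -> nodes -> pods -> containers, sketched as data.
    interface Container {        // e.g. the Docker image actually running your code
      image: string;             // "my-api:1.2.3"
    }

    interface Pod {              // thin wrapper that Kubernetes schedules and restarts
      containers: Container[];   // usually exactly one, plus optional sidecars
    }

    interface KubernetesNode {   // a real VM (a GCE instance or an EC2 instance)
      pods: Pod[];               // one to many pods, depending on capacity
    }

    interface Cluster {          // an abstraction: just the set of nodes
      nodes: KubernetesNode[];
    }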

DAN_SHAPPIR: Based on my own background, which admittedly is like several years old, from my previous employer. If I understand correctly, and please do correct me if I'm wrong. So we have this concept of a virtual machine, which tries to totally emulate a physical machine. So it's like simulated hardware, you literally install your operating system on it, and it has various benefits from being virtual, like you maybe can move it from one physical machine to another while it's still running, but it really tries to simulate an actual physical machine. And then within that, you have your containers, which are another layer of virtualization, but a much thinner one that still creates a stronger segregation than just having multiple processes within the same operating system, but they still share a lot of resources with one another, which means that they're a thinner layer, and consequently you would have multiple such containers within that single virtual machine. Am I describing it correctly so far? 

AIMEE_KNIGHT: I think so. The main thing to keep in mind between a virtual machine and a Docker container is that a virtual machine gets its own operating system, whereas containerizing something means that each container shares the same operating system. So from what I was hearing you say, I think so in that the Docker container is a little bit of like a thinner layer. And all of this, oh, sorry, go ahead. 

DAN_SHAPPIR: No, I was saying, so it's kind of like the encapsulation or segregation that operating systems create between processes, but kind of on steroids. It makes it much less likely that incompatible processes would step on each other's feet or stuff like that. Or multiple instances of the same process would conflict with one another. Okay. 

AJ_O’NEAL: I'm gonna rewind way, way back, way back. So I'm a developer, I write some HTML, and if I'm blessed, then I also write some Node code or maybe something better, like Go or Rust or whatever. Anyway, and then what I need is I need to get this code running in a place where there's a public IP address, essentially a public phone number, but a computer number, a public computer number that other computers can dial. So that when someone goes into the address bar on their computer and types in example.com, their browser will go look up the computer number, and then it will get to that computer and connect and deliver the page. Like this is the end-all be-all of my work as a software engineer: I want someone to be able to experience a visual element, unless you're really cool, in which case you just go full-on CLI, and some interaction that's written in code, and it's out there in the ether. So from that point, how the heck do we get to all of what we were talking about? Let's just take it like step by step. 

AIMEE_KNIGHT: So I'll back up to, so when we're talking about Kubernetes and horizontal scaling, the thing that you need in order to horizontally scale is a load balancer, and the load balancer is a service in Kubernetes. The load balancer is that public IP address that you're talking about. So all of the pods, and how they communicate with each other, it's all via internal IPs. None of that is exposed, only the load balancer's external IP. And that's how, so- 
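
[Editor's note: a minimal sketch of that idea using Pulumi's Kubernetes SDK in TypeScript, since Pulumi comes up later in the episode. The service name, labels, and port numbers are assumptions made up for the example.]

    import * as k8s from "@pulumi/kubernetes";

    // A Service of type LoadBalancer gives the pods a single external IP;
    // the pods themselves only ever have internal cluster IPs.
    const web = new k8s.core.v1.Service("web-lb", {
      spec: {
        type: "LoadBalancer",
        selector: { app: "web" },             // matches the pods' labels
        ports: [{ port: 80, targetPort: 3000 }],
      },
    });

    // The only address exposed to the outside world.
    export const externalIp = web.status.apply(
      s => s?.loadBalancer?.ingress?.[0]?.ip
    );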

AJ_O’NEAL: So, and we're making the assumption here that DevOps is Kubernetes, it sounds like. 

AIMEE_KNIGHT: Yeah, and it's not at all. Like I said, this is still an area that I'm doing a ton of learning in. And so I want to be careful about like, I'll share what I do know, but I still have so much to learn. 

AJ_O’NEAL: Well, I think, I think, you know, on the meme net, K8s and DevOps are synonymous right now. 

AIMEE_KNIGHT: But I would say like, you don't need Kubernetes. Like that is a hype train. In my opinion, you do not need it. That should probably not be the first thing that you reach for. So where I'm working right now, we are split up. We have like one core DevOps team, and that core DevOps team services a bunch of smaller organizations within the large organization, and the teams that I work with, they do a lot with Kubernetes. So while I'm less familiar with AWS than I am with GCP, there's tons of different things that you can do with the different cloud providers before you reach for Kubernetes. So there's like Lambdas, which I think in Google are called Cloud Functions. There's Cloud Run, there's App Engine. There's all of these different options, and that's how it usually goes: you'd do like Cloud Functions, and then you would look at App Engine. And the distinction there is, there's what they call PaaS, which is platform as a service, and then there's infrastructure as a service. So PaaS would be like App Engine, Cloud Run, Cloud Functions, Lambdas, and then infrastructure as a service is when you would get into using like Compute Engine, EC2 instances. Kubernetes is a little bit of a blend of the two, because it is a platform, however, it's those VMs under the hood. 
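
[Editor's note: for readers who haven't touched the platform-as-a-service end of that spectrum, this is roughly the whole programming model. The function name and message are placeholders; an HTTP-triggered Google Cloud Function is just an exported Express-style handler, and the platform owns the servers and the scaling.]

    // index.ts, deployed as an HTTP-triggered Cloud Function.
    export const helloHttp = (req: any, res: any): void => {
      // No servers, load balancers, or VMs to manage in your own code.
      res.status(200).send("hello from a function-as-a-service runtime");
    };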

DAN_SHAPPIR: So it's really a question of how much abstraction I want. If I want to be closer to, let's call it, the legacy model of running software on computers, then I go for the infrastructure as a service. If I want something that's more abstracted, of just having my functions kind of spawn and communicate with each other without really thinking, or trying to avoid thinking, about the hardware at all, then I would go for something like a platform as a service, correct? 

AIMEE_KNIGHT: Yes and no. So I would probably reach towards Cloud Functions and Lambdas first. And then, you know, I haven't done a lot in App Engine, so I don't want to misspeak. But, you know, the benefit of Kubernetes is that, let's say a node goes down, or even if your application goes down and it causes the node to go down, then Kubernetes is designed, it's like Terraform, we can get into Terraform, it's declarative. So you tell Kubernetes what you want your cluster to look like, and it's responsible for maintaining that state. I don't know how App Engine handles it if a VM would go down or something, or if you're handling VMs yourself. 

DAN_SHAPPIR: I assume that something like Lambda or Cloud Function is supposed to be robust. I mean, I assume. 

AIMEE_KNIGHT: True, the part I'm thinking of is, let's say your application throws an error and it causes your server to crash. Kubernetes is designed to restart that, like restart the containers. Or you run out of memory and your application crashes. Kubernetes is designed to prevent, well, it doesn't prevent that, but it keeps your app up by spinning up new ones whenever one of them goes down, and I don't know how that works with the other services. 
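
[Editor's note: a minimal sketch of that declarative, self-healing idea, again in Pulumi-flavored TypeScript. The names, image, and replica count are assumptions for illustration.]

    import * as k8s from "@pulumi/kubernetes";

    // Desired state: "keep three copies of this container running."
    // If a container crashes or a node disappears, the controller starts
    // replacements until the actual state matches this declaration again.
    const appLabels = { app: "my-api" };

    const deployment = new k8s.apps.v1.Deployment("my-api", {
      spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        template: {
          metadata: { labels: appLabels },
          spec: {
            containers: [
              { name: "my-api", image: "gcr.io/my-project/my-api:1.0.0" },
            ],
          },
        },
      },
    });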

DAN_SHAPPIR: From my perspective, this whole thing kind of transformed into DevOps when this whole mechanism became much more programmatic, or programmable might be a better term. 

AIMEE_KNIGHT: I mean 

DAN_SHAPPIR: You write code to manage all this infrastructure. And in fact, you write code to actually define or specify what your quote-unquote simulated environment looks like. Whereas previously you would either literally pull wires or, at best, create configuration files. 

AIMEE_KNIGHT: And that's where if you hear people using the term infrastructure as code, that's what they mean whether that is like a YAML file, there's lots of different ways that you can, you know, create these physical resources. You can do it in YAML, you can do it in Terraform, we can talk about that if you want to. 

AJ_O’NEAL: So what was the name of that infrastructure as code guy that we had on here a couple months ago? 

AIMEE_KNIGHT: Pulumi. 

AJ_O’NEAL: Pulumi, Pulumi was the service. 

AIMEE_KNIGHT: Which I am now pronouncing correctly. Yeah. 

AJ_O’NEAL: Now you know all about Pulumi, huh?

AIMEE_KNIGHT: I'm a fangirl. 

AJ_O’NEAL: Are you using it? 

AIMEE_KNIGHT: I'm not. Well, I, our team isn't, I really like it personally, but our team, you know, they like Terraform and they, you know, I, should I get into that a little bit or 

AJ_O’NEAL: Go ahead. Yeah. Let's, let's go down the rabbit hole. 

AIMEE_KNIGHT: Maybe I should give like, like my workflow of how we do it. 

AJ_O’NEAL: Yeah. That would probably be really helpful.

AIMEE_KNIGHT: Okay, this may be kind of long-winded, but hopefully this paints a good picture for people. So, and the lines kind of blur because I'm still learning. I would probably have a larger hand in the deployment of the application to, let's say, Kubernetes, but because I'm still learning, I am more focused on the infrastructure side, which is getting the physical resources up so that the teams can do their deployments. But that part we kind of work hand in hand. So, you know, backing up to what I used to do. So, you know, I'm working on my feature on my local branch, and I merge that to master. And I'll kind of, I'll talk about, so the work that I do right now, I do two things. Like I work on an application, but then I also work with these other business units. But let's take the application that I work on. So I work on my feature, I merge that to a develop branch. And then we decide that we're ready to take that develop branch, and whatever's in it, and we want to take that to production. So we have build systems in place where it will build a Docker container for the different parts of the application. So we have like a Docker container for the UI and the API; they run in one container. It is a small internal application, so probably not how some people would do it. And then we are using Postgres. So that's in a Docker container. You could in theory. 

AJ_O’NEAL: Now, how do you have Postgres in a container when you need the database to be long-lived? Like you can't just have your database disappear and then, you know, be regenerated. 

AIMEE_KNIGHT: You know, I don't know the specifics of that. And maybe that's a poor example because that's just like a small internal application. 

AJ_O’NEAL: Well, I'll tell you the answer, because I was just being devil's advocate here: it's a mounted volume. All the stuff goes on a mounted volume, and the mounted volume could be something like, um, EFS, or it could be something simpler than that. 
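
[Editor's note: the smallest version of that idea, as a Node script that shells out to the Docker CLI. The volume name, password, and Postgres version are placeholders; the point is that the data directory lives on the named volume, so the container can be destroyed and recreated without losing the database.]

    import { execSync } from "child_process";

    // Run Postgres in a container, but keep its data on a named volume.
    execSync(
      "docker run -d --name pg " +
        "-e POSTGRES_PASSWORD=example " +
        "-v pgdata:/var/lib/postgresql/data " +  // data outlives the container
        "postgres:13",
      { stdio: "inherit" }
    );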

AIMEE_KNIGHT: Okay. Now that said, let's take a larger application, for instance. You know, a lot of the teams that I do the Kubernetes stuff with, they're using Cloud SQL. So they have their Kubernetes just talking to Cloud SQL or BigQuery or whatever cloud database they want, which would be the platform as a service. Well, I don't know, I could be wrong there. I don't know if that's considered platform or infrastructure, but anyways. So, you know, we merge, Jenkins will then build that container and push it to what's called a container registry. And then we run a deployment script. And that uses something called a Helm chart, which is how you typically do deployments in Kubernetes. I won't get into Helm because that's something I'm just now starting to learn, and I would feel bad if I tried to speak to it. But yeah. 

AJ_O’NEAL: What's the TL;DR, what is it? 

AIMEE_KNIGHT: So Helm, you can think of it as like NPM for your infrastructure. It's a package manager for Kubernetes. 

AJ_O’NEAL: Okay.

AIMEE_KNIGHT: So from there, we run this deployment script, and then Kubernetes will take the container from the container registry, and it will run your containers for you. I'm trying to give the most simplified example that I can think of. 
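
[Editor's note: a stripped-down sketch of the kind of pipeline step being described: build the image, push it to a registry, then let a Helm chart tell Kubernetes to roll out the new tag. The registry path, chart location, and release name are invented for the example; a real Jenkins job would also handle credentials and error reporting.]

    import { execSync } from "child_process";

    const tag = process.env.GIT_SHA ?? "dev";
    const image = `gcr.io/my-project/my-app:${tag}`;

    const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

    run(`docker build -t ${image} .`);   // build the container image
    run(`docker push ${image}`);         // push it to the container registry

    // Deploy via the Helm chart: Kubernetes pulls the image and runs it.
    run(`helm upgrade --install my-app ./chart --set image.tag=${tag}`);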

AJ_O’NEAL: All right, all right. 

DAN_SHAPPIR: Okay, so what are the challenges here? I mean, I know that you're really excited about all this stuff, which means that it's far from trivial. So what are the challenges? 

AIMEE_KNIGHT: I mean, there's a lot. It's a lot of stuff to learn. It's a lot of stuff to manage. It can get expensive. You need to know what you're doing, because doing the wrong thing can cost a lot of money. You know, then there's, you know, people like to jump on the microservices bandwagon. But if you think about that from a development perspective, let's take it back to the old days where we were just doing like Ruby on Rails, and I can just run rails server and I get, you know, everything I need. I get my API, I get my UI, you know, it's pretty easy. However, you get to the point where you have an application where you're doing service-oriented architecture and microservices, and running that application locally, whether it be the developer or the DevOps team, it gets more complicated trying to create that environment and get everything spun up locally. 

DAN_SHAPPIR: How do the developers actually develop really in that kind of an environment? I mean, we kind of discussed this in the past on several episodes where we talked about serverless, about the challenges of simulating a serverless environment as part of your development environment. What do you guys do about it? 

AIMEE_KNIGHT: So for us, on the smaller application that I work on, we are just running those containers locally. And we do a sync of data every month, but that's probably an internal detail of what we're doing. But for the other teams, I mean, that is the joy, too, I would say, of using like Cloud SQL, or I don't know what the AWS equivalents are: you can have like a dev Cloud SQL instance and a prod one, and you can just point your development environment at that remote development Cloud SQL instance. So I think that would be a lot easier for a lot of teams. 

DAN_SHAPPIR: But then I'm not really emulating a serverless environment.

AIMEE_KNIGHT: Oh, you're, are you, what do you mean exactly? 

DAN_SHAPPIR: I mean that if I'm running something that at the end of the day is supposed to run as, you know, your favorite thing, a microservice within something like a Lambda or something like that, I can't really simulate that on my local development environment. 

AIMEE_KNIGHT: Um, I mean, I haven't worked with Lambdas directly. The most that I've ever done is playing around with something called MongoDB Stitch, which if I understand correctly is similar in how it's executing. MongoDB Stitch is this layer over MongoDB where you don't need a backend. Your frontend can literally work with a remote MongoDB instance, and you can execute these quote-unquote cloud functions. So in that case, if I remember correctly, with MongoDB I just hit that URL that the service gave me. 

AJ_O’NEAL: Well, and typically if you're doing something like a Lambda and you're doing it in Node, the way that you experience the flow, if you're trying to do it with request and one of the serverless frameworks, the way that you have the flow, it's going to be very similar to what it would be with curl or a browser, where you get a request, the request does stuff, and it ends. What's different about Lambda is that it can't have lingering state. Like it can't just have a lingering database connection that's expecting to be up for the next six hours, because, one, you're not getting the benefit of Lambda if you're doing that, or your use case is not the right use case for Lambda if you're doing that. But, you know, Lambda is going to kill everything. I think 15 minutes is the longest it'll let a task run. And so Lambda is really just for very small, short-lived tasks. So it could be things like web requests, but again, I mean, you've got to consider where is your real value. Are you going to spend a month re-engineering your stuff to use Lambda so that you can save $4.33 while you only have 10 users, or, you know, are you actually getting a benefit out of it? One company I worked with had a use case where Lambda made a lot of sense: there were literally hundreds of thousands of users at max, tens of thousands of users at minimum, and lots of requests that come in to do a task that basically takes a couple of seconds every few minutes. And so Lambda is a good use case for that, because you do get a lot of scale, and even if it ends up being more expensive, it doesn't matter because of the way that the pricing model and everything was structured. But I think in that case, it ended up being cheaper. 

AIMEE_KNIGHT: And with Lambdas, don't you have to specify like how much memory and stuff like that to allocate to them? 

AJ_O’NEAL: I think that you can, I don't think that you have to, but they do have much more stringent limits because they are meant for small, short-running tasks. They're not meant for more than that. Like, for example, I mean, somebody gave this example earlier on the show like a year ago, where what they were doing was basically managing an email list, and the Lambda just connected to a database, added a person to the database, connected to the Simple Email Service or whatever it's called, and then sent that off. And so that, I mean, that's something that requires no RAM and no CPU power and no long-lived connections. It was basically just connecting other, essentially microservices together. Like the database in that scenario probably wasn't even a traditional database. It was probably something like a Firebase or whatever, where you just connect to it, do a post and you're done. 
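
[Editor's note: a sketch of that email-list shape as an AWS Lambda handler, using the v2 aws-sdk. The table name, addresses, and event shape are all made up; the point is one short request in, a couple of service calls, and nothing left running afterwards.]

    import { DynamoDB, SES } from "aws-sdk";

    const db = new DynamoDB.DocumentClient();
    const ses = new SES();

    // Invoked per request; no lingering connections or state between calls.
    export const handler = async (event: { email: string }) => {
      // 1. Add the subscriber to a (hypothetical) table.
      await db
        .put({ TableName: "subscribers", Item: { email: event.email } })
        .promise();

      // 2. Send the welcome email via SES.
      await ses
        .sendEmail({
          Source: "list@example.com",
          Destination: { ToAddresses: [event.email] },
          Message: {
            Subject: { Data: "Welcome!" },
            Body: { Text: { Data: "Thanks for subscribing." } },
          },
        })
        .promise();

      return { statusCode: 200, body: "subscribed" };
    };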

DAN_SHAPPIR: So we had Gareth McCumskey on JavaScript Jabber episode 440, talking about the benefits of serverless. And what I recall from that conversation is that while the issue of cost did come up, and as I recall you kind of debated that with him, from his perspective, the way that he presented it, the main benefit of using serverless was actually the architecture that it forces you to adopt. This whole concept of breaking up your application into short-lived tasks that do a specific thing actually results in a cleaner architecture for your entire application, service, call it what you want. 

AJ_O’NEAL: I would say maybe if you subscribe to that pattern and it works well with your brain and your team and you actually do it correctly, then sure, but the same is true of every other paradigm. 

AIMEE_KNIGHT: I would say too, there's a lot of, you know, Kubernetes proponents saying, well, this makes it easy to run on any provider. And I would say, I don't know, at least in my experience, there's a lot of stuff that doesn't transfer. Like one thing we haven't talked about that is a massive consumer of my time is not just the DevOps side of this, but all of the InfoSec concerns, like all the permissioning and security around it. And why I say that is because how that is modeled is very different, at least from my understanding, between AWS and GCP. 

AJ_O’NEAL: So do you want to, do you want to talk about that for a minute or do you want to go back to the Terraform stuff? 

AIMEE_KNIGHT: Let's talk about Terraform and then I'll briefly get into the permission stuff. And again, if there are dedicated DevOps people listening, they probably know way more about this stuff than I do, but I'm learning and I'm trying to share as I learn. But Terraform, we kind of talked about it on the episode with Pulumi, but Terraform is probably what I dove into first. And what Terraform is, I guess you would call it like an API to interact with the GCP, AWS, Azure APIs. And the APIs also kind of have to do with permissioning, but I'll get into that. But, I mean, that's really what GCP is at the end of the day. It's exposing all of these APIs for you to do things. You could either click through them in the UI, which people will call click ops, but the joy of using Terraform is you have it as infrastructure as code. I'm trying to use the buzzwords so that people will start to equate them to what they mean. So Terraform is the infrastructure as code that allows you to spin up all of this stuff. And let's take a Kubernetes cluster, for example. So there are a lot of different resources that you may or may not have in your Terraform to create this cluster. There's the cluster itself. There could be permissioning associated with the cluster, something called a service account associated with the cluster, and there are node pools that are associated with the cluster. So node pools in Kubernetes, like I was saying, a node pool is, let's say that you are doing some machine learning stuff. 

AJ_O’NEAL: And node as in a network node, not node as in a pool of Node.js nodes. 

AIMEE_KNIGHT: No, node pool as in a pool of virtual machines. So if you have a machine learning workload, you would have, shoot, I'm blanking on the optimized VMs that they have for that. There's a special kind of virtual machine that you would use if you're running a machine learning workload. So you would want to have the portions of your application that are doing the machine learning stuff running in a certain pool. And then you might have your UI running in another pool with less powerful VMs, like a traditional CPU VM or GPU VM, maybe, I don't know. So Terraform is literally just declarative. To put it in JavaScript terms, it almost reminds me of, oh my God, taking them back in the day, it almost reminds me of like a Grunt file or even like a package.json file. You write Terraform in something called HCL, which is the HashiCorp configuration language, which is what Pulumi is like a competitor to. You can do some scripting in it, but it's not very powerful, which is why I like Pulumi better. But, so you create these different resources, like a cluster resource and a couple of node pool resources, maybe some service account resources. Should I say what a service account is? I'm running out of breath. 
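
[Editor's note: the shape being described, sketched in Pulumi's TypeScript SDK for GCP rather than Terraform's HCL, since this is a JavaScript show and the panel discusses Pulumi as the JS-friendly alternative. Every name, region, and machine type here is a placeholder.]

    import * as gcp from "@pulumi/gcp";

    // The cluster resource itself.
    const cluster = new gcp.container.Cluster("demo-cluster", {
      location: "us-central1",
      removeDefaultNodePool: true,
      initialNodeCount: 1,
    });

    // A general-purpose node pool for the UI and API...
    export const webPool = new gcp.container.NodePool("web-pool", {
      cluster: cluster.name,
      location: "us-central1",
      nodeCount: 3,
      nodeConfig: { machineType: "e2-standard-4" },
    });

    // ...and a beefier pool for the machine-learning workloads.
    export const mlPool = new gcp.container.NodePool("ml-pool", {
      cluster: cluster.name,
      location: "us-central1",
      nodeCount: 2,
      nodeConfig: { machineType: "n1-standard-8" }, // GPUs could be attached here
    });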

DAN_SHAPPIR: Before you do a quick question about these configuration files. So from what you're describing, it's a domain specific language or DSL that's specifically intended to provide the generic means to define a virtual infrastructure on essentially any cloud platform. 

AIMEE_KNIGHT: Yep, yep. 

DAN_SHAPPIR: Now, if I have like a scale for us front-end people, with JavaScript on the one hand, which is imperative and you can create whatever logic flow you want, being like the extreme on the one hand, and with CSS on the other hand being this declarative language that has very little in the way of imperative capabilities if it has it at all. It's mostly about declarative specification of what you want. Where does this fall in between these two ranges? 

 

Have you ever wondered if you could be offering a faster, less buggy experience for your customers? I mean, let's face it, the only way you're going to know that is by actually running it on production. So go figure it out, you run it on production, but you need something plugged in so that you can find out where those issues are, where it's slowing down, where it's having bugs. You just, you need something like that there. And Raygun is awesome at this. They just added performance monitoring, which is really slick and it works like a breeze. I just, I love it. I love it. It's like, you get the Raygun and you zap the bugs. Anyway, definitely go check it out. It's gonna save you a ton of time, a ton of money, a ton of sanity. I mean, let's face it, grepping through logs is no fun, and having people not able to tell you that it's too slow because they got sidetracked onto Twitter is also not fun. So go check out Raygun. They are definitely going to help you out. There are thousands of customer-centric, customer-focused software companies who use Raygun every day to deliver great experiences for their customers. And if you go to Raygun and use our link, you can get a 14-day free trial. So you can go check that out at javascriptjabber.com.

 

AIMEE_KNIGHT: I guess it's more like CSS to me. I would equate it to like a package.json file, and you can do a little bit of logic in there. So you basically, in your Terraform you would have, they call it an argument, but you would have a property for the virtual machine type, and then your value would be, you know, there's tons of different types of virtual machines based on like your CPU and the needs that you have, that you would have to pick. So that's what you would define in your Terraform. 

DAN_SHAPPIR: So you can actually, when you execute the script, you can pass parameters in, and some of the values within that configuration are the result of, let's say, doing some sort of expressions on those passed-in parameters. 

AIMEE_KNIGHT: Yeah. 

DAN_SHAPPIR: Okay. 

AIMEE_KNIGHT: Yeah. At the end of the day, all Terraform is doing, you don't need Terraform. You could literally go into the GCP UI, what they call click ops, and you could click around and create your cluster. The benefit of using Terraform is that you can use modules, which is the same thing as like a module in JavaScript, it's an abstraction layer. So if you have a team that is spinning up tons of Cloud SQL resources or tons of Kubernetes clusters, and there are certain things in those clusters that never change, or you would want certain things configurable, you would want an abstraction layer because you don't want people to touch certain things. So a good concrete example of that is a module that I recently worked on, our cluster module. And what we did in that one is we realized that with the default IP space that the clusters were creating, we were allocating too many pods per node, and so some teams were running out of IP space. And so I'm getting into the weeds again. Okay, backing up. Terraform, all it's doing is turning into code what you would do by clicking around. It's almost like you think of a Selenium test clicking around a UI. You execute the Selenium test. This isn't what it's doing, because it's literally hitting an API, it's not hitting a UI, but it is calling the APIs that the UI would be calling to create these things. The benefit, too, of Terraform is, let's say I create this giant cluster, I have like 20 node pools and all these different service accounts, and I want to tear that cluster down. A lot of the different APIs are dependent upon each other, and so that would be a lot of knowledge I would have to keep in my head, of what resource in the GCP API I need to delete before the other. With Terraform, there are ways for you to say what resource is dependent on what resource. And then I just run terraform destroy, and within a matter of 10 minutes everything is gone, rather than it taking me hours to click around everything, and maybe I forgot something and a team is still getting charged for it. 
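
[Editor's note: a sketch of the "module" idea in Pulumi-flavored TypeScript, since HCL is not JavaScript. A platform team exposes a small, safe surface (region and node count) while locking down the setting that caused the IP-space problem described above. The function name, defaults, and the exact argument name are assumptions; check your provider's docs for the real option names.]

    import * as gcp from "@pulumi/gcp";

    interface ClusterArgs {
      location: string;   // teams may choose the region...
      nodeCount: number;  // ...and how many nodes they need
    }

    // The "module": everything else is decided by the platform team.
    export function createStandardCluster(name: string, args: ClusterArgs) {
      return new gcp.container.Cluster(name, {
        location: args.location,
        initialNodeCount: args.nodeCount,
        defaultMaxPodsPerNode: 32, // locked down so teams don't exhaust IP space
      });
    }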

DAN_SHAPPIR: It sounds excellent and also a little bit scary. 

AIMEE_KNIGHT: It is very scary. Yeah, yeah, you could destroy prod instead of dev and that would be really bad. 

DAN_SHAPPIR: You know what this whole description reminds me of? It reminds me of scripting in the old sense of the word, not JavaScript scripting as we currently have it in the browsers, but scripting as we used to have it in, let's say, Office applications, where I automate repeated operations so that I don't have to remember all the details of where to click in the Office user interface. 

AIMEE_KNIGHT: Exactly. You can think about it that way. One thing when I was learning that was interesting to me, and just tells you the scale of what's happening behind the scenes: if I need to create, let's say, a service account, which we'll get into when we talk about permissioning, and I do that in Terraform, it might take 20 seconds to create that. If I'm creating a Cloud SQL instance, it's going to take 20 minutes to create that. And that's because of how many APIs Terraform is calling out to in GCP to create that Cloud SQL resource. 

DAN_SHAPPIR: So again, going back to my example, or my analogy actually, what you're saying is this: it's not just giving you a scripting-like interface on top of Word, it does two additional things. It generalizes, so the same scripting interface is now available both over, let's say, Microsoft Office and Google Drive, so I don't have to write distinct scripts for each one of them, I can just easily switch. And as an extra benefit, they throw in higher levels of abstraction so that I don't need to deal with a lot of the nitty-gritty. I can work with higher-level concepts that generalize the common operations. 

AIMEE_KNIGHT: Yeah, there are some gotchas. Like there have been times where Terraform, I guess, it could be Terraform, I'm not sure, it could be a limitation within GCP itself, but like interacting with the clusters, sometimes there is not enough control and I need to use a tool called kubectl, which is how you interact with a Kubernetes cluster. And there are things that are mutable in Kubernetes itself, but are immutable if I'm using Terraform and calling out to the GCP APIs. So if anything, in my experience so far, using Terraform has been a little too restrictive sometimes, but I guess that's good. 

DAN_SHAPPIR: And again, going to that episode we had about Pulumi, what you're saying is that Pulumi is a competitor to Terraform or an alternative, let's say to Terraform, that's more programmatic in nature. 

AIMEE_KNIGHT: Yeah, yeah. It allows me to, instead of using the HCL, the HashiCorp configuration language, it allows me to use like JavaScript, and I think they have like .NET and other languages. 

DAN_SHAPPIR: So it's just a set of APIs that I can invoke from my favorite programming language too. So really instead of building a configuration file, I literally write an application. 

AIMEE_KNIGHT: Yep. I guess, I know we're maybe coming close to time, so I should talk about the permissioning stuff for a few minutes before we do picks.

DAN_SHAPPIR: Yeah, go for it. 

AIMEE_KNIGHT: So the permissioning stuff. So GCP and a lot of these cloud providers, I mean, it's probably not very different than permissioning for your application, where you would have, they call it like scopes, where certain people are allowed to do certain things. But in GCP, you have what's called a role, and they have out-of-the-box roles. And those roles can be applied to groups of people, which can be associated with a Google group for your team, and there are users that are in that Google group. That role contains a set of permissions, and the permissions are the permissions to certain APIs. So maybe you have a QA Google group and you would want to give that QA group certain roles. Sometimes you want to do a custom role so that you can fine-grain it, like add certain permissions, but then your dev team is going to have other roles that contain other permissions. Then you also have what we were talking about, which is service accounts, and a service account is basically like my application talking to another GCP API. So a service account is literally like a machine user. It's not associated with an actual person. It's associated with an application or a service. But the thing that consumes a lot of my time, it's kind of busy work pretty much, and I'm looking at ways to automate it, is that you always wanna do what's called least privilege. So everybody starts off with no access to anything, and then you slowly grant them what they need. So, you know, that obviously sounds time consuming, I'm sure, and it is. I call it permissioning whack-a-mole. Everybody starts with nothing, and then as they're trying to do certain things, they're like, oh, I'm getting this error. And then you have to do a little bit of research to figure out, you know, what is the least amount of permissioning, what's the least intrusive role I can grant this person to do what they need to do. And then sometimes you do custom roles because you really want to limit someone's access.
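
[Editor's note: a least-privilege sketch in Pulumi's TypeScript SDK for GCP. The project ID, group address, and role choices are placeholders, and module naming can differ slightly between SDK versions; the idea is that humans get narrow read-only roles and the workload gets its own machine identity.]

    import * as gcp from "@pulumi/gcp";

    const project = "my-project";

    // QA engineers can look at clusters but not change them.
    new gcp.projects.IAMMember("qa-cluster-viewer", {
      project,
      role: "roles/container.clusterViewer",
      member: "group:qa-team@example.com",
    });

    // The application runs as a service account (a machine user), not a person.
    const appSa = new gcp.serviceaccount.Account("my-app", {
      accountId: "my-app",
      displayName: "my-app workload identity",
    });

    // Grant the workload only what it needs: enough to reach Cloud SQL.
    new gcp.projects.IAMMember("my-app-sql-client", {
      project,
      role: "roles/cloudsql.client",
      member: appSa.email.apply(email => `serviceAccount:${email}`),
    });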

AJ_O’NEAL: So what kind of roles do people need? Like, cause I would think you push code, you're done. Like what do I need this thing for? 

AIMEE_KNIGHT: Yeah. So, I mean, you just think how many different APIs there are within GCP, and people need different roles, which would be the different permissioning for those APIs. Like, you know, do they want read access to this API? Do they want write access? Do they want admin access? It's just that all of the different APIs, or most of them, have their own permissioning associated with them. Literally, if I were to visualize, let me pull it up on my screen actually, because I have it bookmarked, what a permission looks like. Let me see if I can get to the Kubernetes one really fast, since that's the one I talk about a lot. So there is one called Container Cluster Viewer. That's the role, and the permission for it is container.clusters.get. So the permission literally looks like an API. There's also container.clusters.list. Hopefully that gives a better visual. 

AJ_O’NEAL: Okay, so how are we doing for time right now? What have you got left? 

AIMEE_KNIGHT: I'll have to jump off in about 10. 

AJ_O’NEAL: Okay. 

DAN_SHAPPIR: All I can say is that you've really strayed far afield from JavaScript. 

AIMEE_KNIGHT: I will say, so I started this job and I was supposed to be doing 50/50, working on a JavaScript application and then kind of learning the DevOps stuff as we went, but the JavaScript application has really gone into maintenance mode. And so I am just doing DevOps right now, but I'm loving learning it. It gives me a whole new appreciation. 

DAN_SHAPPIR: Cool. Yeah. I love our journey in software. The fact that we can keep on learning. I know that it fatigues some people, but for me, that's the joy of the job. 

AIMEE_KNIGHT: I'm on a high right now, so I have lots of energy. 

DAN_SHAPPIR: Good for you.

AJ_O’NEAL: So I want to tell you how my DevOps stack looks right now. Now, granted, I work with a lot smaller companies. Well, I mean, I've worked for some big companies too, but I don't do the DevOps stuff for them, because I let them handle all the nonsense, because I think it just gets too crazy, like what you're saying with the permissions whack-a-mole. I think sometimes it's just unnecessary. Like, what does this service do? Do we really need to put it behind three layers of virtual network and have a jump box? And it's like, this thing seriously just downloads a couple of files, takes some JSON, mixes it together, and then outputs the result of the two JSON files put together, you know. To me, that's just too crazy. I don't see how it's worth it. And then all the extra support it adds when people need access to something and they can't get it. And then, you know, if you're working on a team of more than one, it's just, yeah. Anyway, so this is what I'm setting up for somebody, and you tell me what you think about this. I don't have a load balancer, because the internet has a built-in load balancer. It's called DNS. You add three IP addresses to a DNS record. Boom. There's your load balancer. And if one of them for some reason fails, which I have no idea why it would, the browser will download all of the IP addresses for a given address, and it will pick one and kind of try to use that one for most of its requests, but if for some reason that one stops responding, the browser itself will just switch over to the next IP address that it has. So you get very dumb, but actually quite effective, load balancing just by letting the browser pick an IP address at random. And not all programming frameworks work this way. A lot of programming frameworks, like, for example, request in Node or Axios in Node, well, both of those probably leave it up to the DNS internals of Node itself, but anyway, a lot of them will pick the very first one. So you have to be careful about that approach if it's something where the services need to be load balanced as well. Anyway, so that happens. And then I put stuff on DigitalOcean. And with DigitalOcean, you don't have to worry about things going down, because it's virtualized in the opposite way that Amazon, et cetera, are virtualized. With DigitalOcean, if there is a problem with the physical hardware underneath the virtual machine, and this is just really commonplace in virtualization now, but Amazon won't do this for you, it will just migrate that virtual machine over to other physical hardware automatically. So you'll have a slight period of degradation where, you know, the CPU basically goes on pause for some number of milliseconds while it makes the transition over. So I never have to worry about an instance going down. On Amazon, you have to worry about that, because EC2 does not guarantee uptime or reliability. It guarantees availability, which means that at any time the physical hardware that your virtual machine is running on could fail. And there's no recourse for that. It's just expected that you're going to start another EC2 instance, and that EC2 instance is going to come up on hardware that's working. 
With something like DigitalOcean and many of the other more traditional VPS providers, you have pretty much a hundred percent uptime, because if the physical node fails, the virtualization layer handles the transition without any downtime, or, you know, slight amounts of degradation, like milliseconds to seconds of quote-unquote downtime. Then I just run multiple instances. And I think that this is a good idea whichever way you go about it. I think you should be running multiple instances of your software, because what you're going to find is you did stupid things that you shouldn't have done. Like you have a global variable that's maintaining some state between requests. And as soon as you're running on two or more instances, you will find weird bugs in your application, where like a number is incrementing to two and it's just staying at two, and you did three more requests, but it's staying at two. And the reason that ends up being is because the machine you're testing on is connecting to something that has the global state of two, and the machine that you're reading the results on, or vice versa, has the global state that's incrementing, or whatever. And you find those types of bugs, and it's good to find them and rid yourself of them. So I have a machine that runs a Postgres instance. I have a couple of machines that run the application instance. And at this point, because it's small, that's not really for scaling as much as it is for helping us to identify those problems in our applications. So that as we do stupid things as a quick one-off, like I'm gonna test this real quick, and we forget to fix them, we find it. So when we are ready for scale, we already have the scaling bugs fixed. And then I have a little Go service that I wrote called git deploy, which is essentially the CI/CD. It takes a GitHub webhook. Whenever the GitHub webhook hits git deploy, git deploy runs a bash script, which is completely arbitrary. So that bash script can call a Node program or it can call whatever it wants. And then it runs the build and test process that's defined in that script. So one thing that I'm going to add to it is the ability to view a report, so that when the script runs a test, it can basically just curl or make an Axios request, or, you know, another request to a URL, and then give back a JSON report of like, here's how the build went, here's how the tests went kind of thing, and be able to view that very simply in a table on a webpage. That is my DevOps stack right now. 
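
[Editor's note: the webhook-plus-script idea is small enough to sketch. This is not AJ's actual tool, just the general shape in TypeScript on Node's built-in http module; the port, path, and script name are placeholders, and a real version would verify GitHub's webhook signature before running anything.]

    import { createServer } from "http";
    import { execFile } from "child_process";

    createServer((req, res) => {
      if (req.method === "POST" && req.url === "/webhooks/github") {
        // Kick off an arbitrary deploy/build/test script in the background.
        execFile("./deploy.sh", (err, stdout, stderr) => {
          if (err) console.error("deploy failed:", stderr);
          else console.log("deploy finished:", stdout);
        });
        res.writeHead(202).end("building\n"); // reply fast, build asynchronously
      } else {
        res.writeHead(404).end();
      }
    }).listen(8000);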

DAN_SHAPPIR: And I have to throw in that you can either do what AJ just said or alternatively hire AJ or as another alternative, and I don't plug Wix a whole lot, even though I work there, you can actually use Wix to build your web facade and then use the Wix Velo to actually do all the programming and stuff on the backend, including the data management, whatever. And we take care of everything for you because it's a true platform as a service. 

AJ_O’NEAL: So wait, what, Velo lets me run arbitrary Node? Like I just have an Express service? 

DAN_SHAPPIR: You don't, actually. I won't go into that here. It's probably worth an episode. Maybe I'll bring somebody from Wix to talk about it sometime. We will do an episode about that, but it really totally abstracts all of these things for you. You don't really even need to think of those things. It works at a higher level of abstraction than that. You literally just write your code, and, you know, it is as if it's just functions and you don't even need to think about it. 

AJ_O’NEAL: I like thinking about it though. 

DAN_SHAPPIR: That's you AJ.

AJ_O’NEAL: I know. 

DAN_SHAPPIR: Most of us just want to get our applications up and running with as little fuss as possible. 

AIMEE_KNIGHT: Right, true. Especially for the audience here. 

 

Hey folks, my favorite error monitoring tool just got better. Sentry has added performance monitoring, which is awesome. Now, let me back up and tell you just how awesome error tracking is. I mean, this stuff gives you a backlog, gives you a whole bunch of context. It really helps you track down the stuff. Honestly, if you've had to go through a log, you know how painful that is. It tells you exactly what's going on. And now it works with tracking performance. It does performance tracking for your backend like Express. It does frontend like React. I mean, it's just awesome. All this stuff together in one place. I just love it. Love it. Love it. So if you're looking for a tool that will profile your frontend, profile your backend, tell you the errors on your frontend, tell you the errors on your backend, pull everything together, and make it really easy for you to track down problems, track down issues, not even just errors, but issues in performance, then go check out Sentry. You can go find it at sentry.io slash for slash JavaScript. That's F-O-R, not the number four. Use the code JSJabber to get three months free. 

 

DAN_SHAPPIR: And with that, I think I'll push us to picks. 

AJ_O’NEAL: Sounds good. 

DAN_SHAPPIR: Okay. Then Amy, I know that you're kind of under the gun with the weather and the work and everything, so why don't you go first?

AIMEE_KNIGHT: Okay. I am going to pick something that I've had saved for a while that's probably fitting for this episode. It's called The Many Lies About Reducing Complexity. This is a part two, but it kind of talks about some of the gotchas with cloud providers, like what we were just kind of ending with. So, you know, it sounds great to be using these cloud providers, and like they just do everything for you, but there's a lot of complexity there that you have to understand, and, you know, it's not as easy as it looks. That is why they have certifications just for the different cloud providers now, because you literally have a lot to learn using this stuff. So that'll be it for me. 

AJ_O’NEAL: We'll be sure to post a link there. 

AIMEE_KNIGHT: Will do. I mean, okay, I should add this too. Initially, like, I have mixed feelings on the certifications. I feel like, to some degree, it's probably some marketing on the cloud providers' part. However, if you're going to be building out applications in the cloud, I think that learning these things is also really important, because you want to be using the right tool, and you don't want to be spending more than you have to spend. Like the billing aspect is a huge part of that. That's what, when I talked about the internal application, the internal application I work on is a way for us to visualize how much we're spending across the entire organization. 

DAN_SHAPPIR: Cool. Okay then AJ, how about your picks? 

AJ_O’NEAL: So my first one is going to be Life as a Bokoblin, a Zelda nature documentary. So somebody made an Animal Planet slash BBC style Zelda spoof, and it is totes adorbs. It's got the guy with the voice, you know: and here we see, in its natural environment, the Bokoblin is not as violent as we may have expected. In fact, we can see he's actually part of the community. You know, it's amazing. Absolutely amazing. Loved it. Props to, I think it's Monster Maze, the channel that creates it. Anyway, that was pretty cool. And on the note of video stuff, there is a service called LBRY.tv. I think it's supposed to be "library". It's LBRY.tv. And basically you can sign in, you can import over all of your YouTube stuff. A lot of the creators are already over there. Like I found a bunch of the people that I follow on YouTube are already on LBRY. And it's kind of got this system where you get these digital token coin things for how many videos you upload and how many views you get or whatever. And then instead of giving a like, you can give like a token or something. That part, I don't know. It's interesting. It's interesting, but they seem to be doing the YouTube alternative right, in that you can sign in and import over all of your YouTube stuff in one go, and they reward you for doing that. And then you can use these points for boosting one of your posts or something as well. So if you've got a new post and you want to boost it or something, you can. I'm not entirely sure about all the specifics, but I think of all of the YouTube alternative ideas that I've seen out there, this one looks the best so far, and I'm going to see if I can start using it. It supports embeds. I mean, it really seems to be like, uh, you know, an apolitical YouTube. So you could just watch the content you want to watch without getting a whole bunch of algorithmic stuff shoved down your throat that you don't care about, and not have to hear people go on their tirades of like, YouTube is shutting us down for doing some benign thing that we didn't actually do, all the time. Which seems to be across the entire political spectrum, because I listen to people on all sides. It seems like everybody's complaining about getting their videos demonetized for the stupidest things. And so it seems like there's an appeal there to find a platform that is less political and less, you know, Big Brother or whatever. Anyway, and then along with that, I already posted this, but just because it's one of my videos that got ported over on LBRY, LBRY, I don't know, I should look up how to say it, the LBRY.tv: my GameCube homebrew in six minutes video. I'm just linking to that. So if you want to check out how a video looks on the platform or whatever, and get me like 0.001 coin token things for watching a video, there you go. 

DAN_SHAPPIR: Cool as well. So now it's my turn, last and least. So I mentioned Wix already today, so I'll do it again, this time as part of picks, Wix picks, just because it's such a cool and good thing that Wix is doing. Now, unfortunately, given that we are recording this on February 16th, and what I'm going to mention ends on February 21st, it's going to be too late by the time that this podcast airs, but hopefully Wix will do it again. And this is the thing, it's called Wix Enter. The website is actually wix-enter, and it's a program for juniors, for students who are studying computer science, to basically sign up to a course that Wix is giving. It's a two and a half month course that teaches lots and lots of stuff about web development, the basic web development fundamentals and beyond. There's also mentoring involved, obviously with the intention of getting these people to hopefully decide to continue working at Wix, because Wix is actually going to be paying them a salary while they're doing this training. So I really like it that Wix is offering a way for juniors to actually enter the field, because too often I hear about juniors not being able to find their way into actual jobs and, you know, companies that want to hire them and train them and teach them and get them up to speed. So I'm really happy that Wix is doing that. Like I said, it'll probably be too late, it will definitely be too late by the time this episode airs, but hopefully Wix will do it again. So that's one pick. The other pick that I want to mention is actually something that DHH put on his Twitter account. It's a comparison of the performance of various devices, phones and computers, when executing the Speedometer 2.0 JavaScript test. So it just compares the performance of running JavaScript code on various hardware. What's really interesting is that the latest iPhones are really killing it, at least in this specific test, something like more than three times faster than the fastest Android phones out there, like the Samsung S21 or the OnePlus 8. 

AJ_O’NEAL: And faster than most desktops. 

DAN_SHAPPIR: Yeah, way faster than most desktops. 

AJ_O’NEAL: Because it's the M1, watered down for a phone. 

DAN_SHAPPIR: Yeah, it's the A14. And the M1s are something like 15 or 20% faster than that, so the one at the top of the list is the MacBook Air M1 running Safari. But interestingly, even when running Chrome, it's almost as fast. So it's not so much about the software. It really is the hardware itself. But this has an interesting implication, in that if you're developing your website and you own an iPhone 12 and you're testing that website on an iPhone 12, be aware that Android users will be experiencing this website at best at one third the speed, and at worst at something like one thirtieth the speed if they're using a somewhat older Android phone, because the older Android phones were already six times slower than an iPhone 8, and this new iPhone 12 is something like twice as fast as the iPhone 8. So it really creates a situation where you need to make sure that when you're developing complex websites or web applications, you don't just test them on your latest and greatest iPhone. And I'll post a link to this comparison table. It's really interesting. And again, kudos to Apple for delivering such amazing hardware. Yeah.

AJ_O’NEAL: It's just mind blowing that they've put a phone processor in a MacBook Air and it's the most powerful computer money can buy with a few exceptions in the consumer market. 

DAN_SHAPPIR: Yes. Cool. I've said cool a lot today. So maybe a lot of things are cool today. 

AJ_O’NEAL: That's cool. 

DAN_SHAPPIR: Yeah. Anyway, that's it for me. So that's the end of picks and the end of the show. So Aimee, thank you very much for sharing with us all this excellent information about DevOps. It really seems that you're enjoying your new journey, so congrats on that as well. And to all our listeners, thank you for joining us. And please join us next time as well. 

AIMEE_KNIGHT: Bye. 

AJ_O’NEAL: Adios. 

 

Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit c-a-c-h-e-f-l-y dot com to learn more.

 
