Lucas_Paganini:
Hello, welcome to Adventures in Angular, the podcast where we keep you updated on all things Angular related. This show is produced by two amazing companies. The first is Top End Devs, where we help you become a top-end dev who gets top-end pay and recognition while working on interesting problems and making meaningful community contributions. And the second is Unvoid: design and software development services with a specialization in Angular and functional programming. In this episode you will hear my voice, Lucas Paganini. I'm the CEO and founder of Unvoid. And Chuck.
Charles Max_Wood:
Yep, I'm here.
Lucas_Paganini:
In this,
Charles Max_Wood:
I don't know.
Lucas_Paganini:
catch you by surprise. In this episode,
Charles Max_Wood:
Yeah.
Lucas_Paganini:
we will be talking about deployment. How do we deploy our applications? Which services do we use? Which services have we used in the past? But for some reason we
Charles Max_Wood:
Hmm.
Lucas_Paganini:
didn't like them or just switched to another one that we found was better. So this is what we're going to talk about here today. You probably need to put your application on a domain somehow, so let's talk about what we use to solve that issue for us. So Chuck, if you want to start, can you tell us a little bit more about what you use and what your experience with deployment has been?
Charles Max_Wood:
So it kind of depends on how things are set up, right? Because a lot of times when we're talking about frontend applications, like, I've built applications that had something like Angular, React, or Vue on the frontend and Ruby on Rails or Node Express on the backend, right? And so a lot of times I've deployed those as kind of one deployment. It pushes everything up, does whatever build, migration, whatever it has to do for the backend, and then it would run Webpack or something else to build stuff on the frontend, right? esbuild, whatever, whatever we're using. And so those deployments tend to go differently than, say, if you're doing a static build with something like Eleventy, with a frontend that maybe connects to Firebase or Supabase or something like that, right? And so, depending on what you're dealing with, it's kind of a different process. And I've done both. I'll just start with the one I'm the most familiar with, which is Rails. It usually has its own JavaScript toolchain, right? And so I deploy the whole app, and then as part of the deployment it just runs the build script, right, on the server, and you're good to go, right? I've also done it in development with a Docker container that, you know, watches and continuously builds, right? So you kind of get refreshed resources. And so anyway, those all work fine, just depending on what you need. As far as the other goes, if I have kind of a static site where most of it's built by the frontend framework, I've deployed that to Vercel or to Netlify. And those tend to work fine too. I used to be a huge fan of Heroku, but I've kind of become less and less of a fan of theirs these days, mainly because it used to be really easy, I feel like, and it got less easy. It got more complicated to deploy to them.
So it's not that it isn't doable or whatever, but as my build process has gotten more complicated, figuring out how to get that to run on Heroku has gotten more complicated. And then if I have to connect to other services like, you know, Redis or things like that, because sometimes I have other database systems I need to connect to, it just got tricky, right? Or I had to go find some third-party add-on. The other thing that I ran into was, I liked Heroku because I could push stuff up to it for free and run it for free, and then when I was ready to actually pay for it to scale and stuff, I could do that. And I can't do that anymore. So anyway, those are kind of the systems that I've used. I'm not going to go into too much detail because I'm curious what your experience is, and then we can kind of talk about the pros and cons of the different approaches.
Lucas_Paganini:
OK, one thing that I picked up from your explanation, which was very detailed, there were a lot of interesting things there, is that we can group things into four categories. And then correct me
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
if I'm wrong, maybe there's more, but I noticed four categories for everything that you mentioned. So some of the things that you said, like Heroku, Netlify, Vercel, DigitalOcean, those are about which cloud
Charles Max_Wood:
right.
Lucas_Paganini:
provider do we use? Other things that you mentioned, I would put into how we push the deployment. So we can always manually push things
Charles Max_Wood:
Right.
Lucas_Paganini:
as in the old days, maybe even using FTP, as I've done many times in my early days as a software developer, or use more modern structures such as continuous deployment. And then inside continuous deployment we have many options: do we use CircleCI, do we use Travis, do we use GitHub Actions,
Charles Max_Wood:
Mm-hmm
Lucas_Paganini:
And then there are other things, which are how to encapsulate your system. For example, we can use Docker to make sure it runs the same everywhere. But there are many cloud providers that don't require us to use Docker. They just identify what you're using. So for example, if they see a package.json, they already know that it's a Node environment, and they will use a Node instance to run your server, and they will automatically run npm run build, et cetera. There are providers that allow us to do that, and then we don't have to worry about Dockerfiles containerizing our application, because the cloud provider
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
can do that automatically. So there's that. There's the question of how do we encapsulate our applications before putting them into production. Then the last one is about which tools do we use to, or maybe not the tools, but the process that we use to build the thing before pushing it into the production environment. So I saw those four categories and I think that
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
having that clear mental model of, OK, these are the four major categories that I would have to take into account to design my deployment process, I think we can more easily go into each one of them throughout the episode and make sure people don't get lost in all the things that we'll be mentioning. I would like to start with how we encapsulate things. So how do you do that, Chuck? I want to talk a bit more about this too. I have a very strong opinion of how I do that personally and how I do that at Unvoid. But I want to know: do you always use Docker? If not Docker, do you use any other container solution? Or do you delegate that complexity to the cloud provider and just use a Node environment and let the cloud provider come up with the right instance for that? How do you do
Charles Max_Wood:
All right.
Lucas_Paganini:
that encapsulation of your system before pushing to production?
Charles Max_Wood:
So it seems like there are a few options here. I will admit I haven't really ever deployed a Docker container to production. I mean, there are some third-party open source apps where you basically download the Docker image and run it, or a Docker Compose file, and run that. And so I've done that for those on my local machine, but I've never actually turned around and, you know, sent it up to the cloud that way. I know you can, I know there are services out there that do it, you can kind of spin up your own Kubernetes cluster and do it, but I just haven't. It's pretty slick when you can, right? Because your Dockerfile essentially builds your assets, right? You get the image, and you can scale it if you're getting a lot of requests. The other thing that I've looked at is pushing it to a CDN and then referencing stuff off of, like, a Cloudflare or an AWS CDN or something like that. I haven't done that either, mainly because I just don't get enough traffic to where it makes enough of a difference, right? My assets' loading isn't so slow that my website suffers in a meaningful way. So what I have done is, typically, I will push the assets up to the server and then I will run the build script up there. The other way that I've done it is, yeah, I've just pushed it into a Vercel or a Netlify, and they're usually like, oh, you have this framework and it's using this build system and you have your config already set up, and so they just know how to build it. Right. I haven't gotten so deep into optimizing any of it that I've wanted to make it do anything beyond that. Right. So if I can get it to build and load quickly while I'm in development, so that I don't have to break stride in order to wait for the JavaScript to update, then it'll usually build and deploy fast enough for production. It's just like, yeah, hey, it works, right?
So you probably have a much more regimented idea of how you do it, but yeah, I've typically just used the systems that are already there and just let them build it. Whether it's in my backend framework that has a way of managing assets or if it's the deployment system I'm on saying, oh, you've got this, I know how to deploy that.
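The zero-config detection both hosts rely on usually keys off files like `package.json`. A minimal sketch of what a platform such as Netlify or Vercel can pick up on (the app name is a placeholder; the script names are the common convention, not a requirement of any specific host):

```json
{
  "name": "my-angular-app",
  "scripts": {
    "build": "ng build --configuration production",
    "start": "node server.js"
  }
}
```

Seeing this file is typically enough for a provider to decide "this is a Node project", install dependencies, and run the `build` script automatically.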
Lucas_Paganini:
Gotcha, gotcha. So just to see if I understood it correctly: you mentioned that you let the provider detect the system and install the dependencies, but you also mentioned that you push your assets to the server and then you do the build there. So you were referring to, for example, having a server on Google Cloud or something, and then you're sending those
Charles Max_Wood:
Hmm.
Lucas_Paganini:
files there and doing the build. So you SSH into the server, do the build there, and then you have the new version running. Is that the process?
Charles Max_Wood:
Mostly, I'm too lazy to SSH in and run the build by hand. So I have a script that says, connect to the server, run these commands, get off the server. But yeah, effectively,
Lucas_Paganini:
Gotcha.
Charles Max_Wood:
that's how I do it.
Lucas_Paganini:
Gotcha, okay. I've worked using that process before. It served me well for a very long time. Eventually I got to a point where I wanted to make sure that everyone could work on all projects and nobody would have to remember how to do it
Charles Max_Wood:
Yeah.
Lucas_Paganini:
or rely on instructions that might be outdated. And I also wanted to make sure that not everyone had the permissions to log into the production server and do it. So I wanted to control the access to it, but I also didn't want to make it such a gatekeeper that people couldn't put their work into production. So I wanted them to be able to put their work into production if it passed the security checks, but I didn't want them to have direct access to the production instance. So that led me to a path of continuous deployment processes. So currently, I rely a lot on GitHub Actions. Depending on the client project that we are working on, we might use a different continuous deployment pipeline that doesn't use GitHub Actions, to extend what the client has already created. Other times we're creating from scratch, but most of the time we are doing staff augmentation, so a system already exists and we are simply improving it. So maybe it doesn't make sense to move away from their current continuous deployment solution and into GitHub Actions. In those cases, we just use whatever they're using. Sometimes this is Drone, it really differs from company to company. But when I have the chance of choosing which service to use for my deployment process, I always go with GitHub Actions, because all my repositories are on GitHub. So it integrates really well. Although it's not the most complete continuous integration and continuous deployment solution, it integrates most easily with the repository as a whole. So I know that I could get more features by using CircleCI, for example, I checked that once. But I just felt that trying to use CircleCI would be harder than using GitHub Actions, harder to start. And so this is why we mostly stick to GitHub Actions. Our flow is basically this: we have some branches that have a special meaning. So for example, we have the main branch, which in many
Charles Max_Wood:
Right.
Lucas_Paganini:
repositories might be called master, but a while ago there was a push to change the name from master to main for historical reasons. Anyway, we have the main branch, and this branch is special in the sense that it contains the code that is in production. So every time that there is a change to the main branch, GitHub Actions automatically triggers a deployment process to production. And this deployment process will install all the dependencies necessary to build the application. I will have to get into some of the other categories here, but I always lean towards using Docker because I don't want to get dependent, I don't want to have vendor lock-in actually, so I don't want to depend on the detection that Netlify, or Vercel, or Heroku might have. I want to be able to
Charles Max_Wood:
Right.
Lucas_Paganini:
run the application anywhere I want. So I use Docker to encapsulate the application before deploying. And then there's the build script that runs on GitHub Actions, and the runner already comes with Docker installed. GitHub Actions already comes with a lot of dependencies installed that are very common. And then I run docker build, I tag that build, and I push that to my cloud provider, which
Charles Max_Wood:
Mm-hmm
Lucas_Paganini:
most often is Heroku. I'll get into that later, because I have a lot to say about cloud providers, but just sticking to my deployment process: it's basically continuous deployment using
Charles Max_Wood:
All right.
Lucas_Paganini:
GitHub Actions. And then, as I was saying, I have other branches that are special. So the main branch is special because everything that changes there is built and deployed into production automatically. But I might also have other branches that are special. In most of the projects that we do, we don't just want to have the production instance, we also want an alpha instance. The alpha instance is good because the developer can actually see that their work is running, we can run tests, and we can send it to our
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
client, who can approve things before we put them into production. And we also have a branch for that. So we might have a branch called dev, or maybe even alpha. And then all changes made to this branch trigger a GitHub Action that will build and deploy to the alpha environment. But I might have others. I might have an environment for betas. I might have an environment for specific releases. Say version 3.1.2, I might want to have an environment
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
just running that at all times. So we have this structure of coming up with branches that will dictate the state of particular environments. So if we have a production and alpha environment, we'll have at least two branches, one to represent the state of each one of them.
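As a concrete sketch of this branch-to-environment mapping, a workflow triggered by pushes to main might look like the following. This is a hypothetical minimal example, not Unvoid's actual workflow; the script path is a placeholder for the build/tag/push commands discussed here.

```yaml
# Hypothetical .github/workflows/deploy-production.yml
name: deploy-production
on:
  push:
    branches: [main]        # the alpha environment would get a twin workflow watching "dev"
jobs:
  deploy:
    runs-on: ubuntu-latest  # this runner image ships with Docker preinstalled
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the Docker image
        run: ./scripts/deploy.sh production   # placeholder for the build, tag, and push commands
```

Each special branch gets its own near-identical workflow file, differing only in the trigger branch and the target environment.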
Charles Max_Wood:
Yeah,
Lucas_Paganini:
So basically
Charles Max_Wood:
that makes
Lucas_Paganini:
that.
Charles Max_Wood:
sense. So I have questions. I have been looking at GitHub Actions. I have not made the leap. I've used, like, CircleCI. I've used some of the other ones that are out there. I think Semaphore is another one that I've looked at. But it looks like GitHub Actions mostly does the same thing. You can do all kinds of stuff. You said you build it into a Docker container. So you're just deploying a Docker container.
Lucas_Paganini:
Yes, I am always deploying a Docker container. So for example, instead of just
Charles Max_Wood:
Because I'm
Lucas_Paganini:
running
Charles Max_Wood:
kind of liking that because if I can build a local Docker container, then it basically looks like the production Docker container.
Lucas_Paganini:
That's the goal. That's the goal. So what we do is, instead of having a lot of commands in our build script on GitHub Actions,
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
it is rather simple there. The workflow that we have on GitHub Actions for deployment is very rarely more than five steps. Most times it's just three steps. So the first step is
Charles Max_Wood:
All right.
Lucas_Paganini:
build the Docker image. The second is tag the Docker image. And the third is push the Docker image to the Cloud provider that we are using. So most cases would be Heroku.
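Spelled out as shell commands, those three steps are roughly the following. This is a sketch: `my-app` is a placeholder, and `registry.heroku.com/<app>/<process-type>` is the naming scheme Heroku's container registry uses.

```shell
# 1. Build the image from the Dockerfile at the repository root
docker build -t my-app .

# 2. Tag it for the provider's registry
docker tag my-app registry.heroku.com/my-app/web

# 3. Push it to the provider's registry
docker push registry.heroku.com/my-app/web
```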
Charles Max_Wood:
Uh huh.
Lucas_Paganini:
And then, of course, if you look into the process of building the Docker image, there are a lot of steps there. But they are not
Charles Max_Wood:
Right.
Lucas_Paganini:
in the GitHub workflow file. So in the repository,
Charles Max_Wood:
right.
Lucas_Paganini:
we have the Dockerfile, which says what happens when you run docker build, so which commands should run to build that Docker image. And we just have that locally in the repository. So if we just want to see if it's working, we just run docker build on our local machines. So we don't depend on the cloud provider's build process; we can simply run docker build and we will have the entire build process running on our machine, which was really important to us to make it easier to debug and to run staging
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
environments locally.
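The kind of Dockerfile that makes this possible might look like the sketch below, assuming a Node/Angular app whose build emits static files. The base images, app name, and output path are illustrative; adjust them to your project.

```dockerfile
# Stage 1: build the Angular bundle
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci              # reproducible install, cached as its own layer
COPY . .
RUN npm run build       # the same command a developer runs locally

# Stage 2: a small runtime image serving the built assets
FROM nginx:alpine
# Adjust the dist path to your project's actual build output
COPY --from=build /app/dist/my-app /usr/share/nginx/html
```

Because the whole recipe lives in the Dockerfile, `docker build .` produces the same artifact on a laptop as it does on the CI runner.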
Charles Max_Wood:
I really like that. So a couple of other questions. Do you have a step to run the tests? So you build, run the tests, and then deploy?
Lucas_Paganini:
I do, but I don't do that in the deployment process. So I have workflows that are for continuous integration. So these workflows,
Charles Max_Wood:
Okay.
Lucas_Paganini:
yeah, they are dedicated just to checking if the code is valid and if it should be approved by the continuous integration system. So I don't even allow the developers to ask for a review if it's not passing the continuous integration checks.
Charles Max_Wood:
I
Lucas_Paganini:
you
Charles Max_Wood:
gotcha.
Lucas_Paganini:
get to a review, it needs to pass the tests. So when it gets merged into the branch,
Charles Max_Wood:
So we won't even make it
Lucas_Paganini:
it's
Charles Max_Wood:
to the main
Lucas_Paganini:
already
Charles Max_Wood:
branch.
Lucas_Paganini:
exactly,
Charles Max_Wood:
Yeah.
Lucas_Paganini:
exactly. For it
Charles Max_Wood:
Okay.
Lucas_Paganini:
to make it to the main branch, it needs to pass the tests.
Charles Max_Wood:
Right, because it'll pass the test when you submit the PR, or it'll run
Lucas_Paganini:
Exactly.
Charles Max_Wood:
the test when you submit the PR. Right, so then if it gets approved, it means that it passed the test, maybe it ran through the linter and got all of that stuff corrected. Anything else that you're doing there. And then when it's, when the PR is approved, then it's pulled into main and main builds and deploys.
Lucas_Paganini:
Exactly. Exactly.
Charles Max_Wood:
So the other question that I have is... yeah. So you build the Docker image. Do you push it up to, like, Docker Hub or something like that, and then pull the image from Heroku? Does Heroku pull from Docker Hub or something like Docker Hub, another Docker... what do they call them? Container repository? I can't remember the term they have for it. But does it do that, or does it actually just pull in the Dockerfile and build it on Heroku?
Lucas_Paganini:
It is similar to that. So as you said, Docker Hub is a Docker container repository.
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
And then... is it Docker container repository, or is it Docker container registry? I don't know, but you got the
Charles Max_Wood:
Registry,
Lucas_Paganini:
idea is
Charles Max_Wood:
I
Lucas_Paganini:
a
Charles Max_Wood:
think
Lucas_Paganini:
place
Charles Max_Wood:
it registry is the word,
Lucas_Paganini:
Yeah,
Charles Max_Wood:
yeah.
Lucas_Paganini:
I also think it is, because, like, npm is a registry... anyways. But it's generally not Docker Hub. I've never had to push
Charles Max_Wood:
Right.
Lucas_Paganini:
to Docker Hub. So generally the cloud
Charles Max_Wood:
There are a lot
Lucas_Paganini:
provider,
Charles Max_Wood:
out there, so yeah.
Lucas_Paganini:
yeah, the cloud provider has their own Docker container registry, so
Charles Max_Wood:
That makes sense.
Lucas_Paganini:
for example, DigitalOcean has their Docker
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
registry, Heroku has their own, so you would push to their
Charles Max_Wood:
Right.
Lucas_Paganini:
registry, and then you would release the image, which tells them, hey, I just pushed
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
a new version, now go ahead and use this new version to put that into production.
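With Heroku specifically, that push-then-release sequence maps onto two CLI commands (this assumes the Heroku CLI is installed and logged in; `my-app` is a placeholder app name):

```shell
# Build and push the image for the "web" process type to Heroku's registry
heroku container:push web --app my-app

# Release it, i.e. "go ahead and use this new version"
heroku container:release web --app my-app
```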
Charles Max_Wood:
Right. That makes sense. Um, I guess one other question that I have with this is that some systems require more than one Docker image to run, right? So maybe they have, like, a backend and frontend, or an admin and a primary app, or something like that. Um, when you're doing that, when you deploy, I guess you just deploy the one piece at a time, unless you have some dependency that requires both of them to change, right?
Lucas_Paganini:
That's a great question. We generally use NX for all our repositories. I just say generally because
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
there are some repositories that we did more than one year ago that we haven't updated to use NX yet. So maybe we have parts of the project that are in different repositories. So we may have a repository
Charles Max_Wood:
Right.
Lucas_Paganini:
for the front and another for the back. But in all of our most recent projects, we have a single repository for the front and the back. So it's a monorepo using NX. In those cases, we will have different Dockerfiles for each app. So inside a single repository, we will have a Dockerfile for the back, for the front, maybe even for other instances that
Charles Max_Wood:
Right.
Lucas_Paganini:
we might need. And then we have the repository organized in such a way, and the GitHub workflows in such a way, that when there's a push to main, it will deploy all those instances at the same time. But one thing that we are working on is to somehow make sure that it only deploys if there was a change. So if we only
Charles Max_Wood:
Right.
Lucas_Paganini:
change the front end, then there's no need to deploy the backend
Charles Max_Wood:
Right.
Lucas_Paganini:
again. And sort of, if you do your Dockerfiles well enough, if you follow all the best practices for Docker
Charles Max_Wood:
Huh?
Lucas_Paganini:
images, then trying to build the same thing twice will give you the same layer ID. So if it gives
Charles Max_Wood:
That's
Lucas_Paganini:
you
Charles Max_Wood:
true.
Lucas_Paganini:
the same layer ID, then when you try to push it to the cloud provider, the cloud provider will simply say, hey, we already have this and then you're
Charles Max_Wood:
Yeah.
Lucas_Paganini:
not sending anything, because it's the same thing.
Charles Max_Wood:
Right.
Lucas_Paganini:
So it ends up, we get that for free. But there are scenarios where you might run the same thing twice, the same Docker build command twice, and get a different layer. And that shouldn't
Charles Max_Wood:
Bye.
Lucas_Paganini:
happen. But it does happen, because maybe there are things that you want to install, and you want to make sure that they are never cached. And tini is
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
a library that wraps the entire application and deals with some nice best practices for killing and starting applications. For example, you should listen to SIGKILL and SIGTERM, I don't know if I've got the right names, and the
Charles Max_Wood:
Right.
Lucas_Paganini:
other stops it immediately, and tini allows me to easily handle those signals gracefully. So I put it in front of the application, so I install tini
Charles Max_Wood:
Yep.
Lucas_Paganini:
in my Dockerfiles. And tini recommends doing the installation using no-cache, so that we make sure that
Charles Max_Wood:
right.
Lucas_Paganini:
we are getting the correct version of tini every time we install it. So that might
Charles Max_Wood:
That
Lucas_Paganini:
lead
Charles Max_Wood:
makes
Lucas_Paganini:
to
Charles Max_Wood:
sense.
Lucas_Paganini:
different layers
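For reference, the setup being described usually looks something like this fragment of an Alpine-based Dockerfile. This is a hypothetical sketch; the base image and the app's start command are placeholders.

```dockerfile
FROM node:18-alpine
# --no-cache fetches the package index fresh instead of using a local cache,
# which is what can produce a different layer on an otherwise identical build
RUN apk add --no-cache tini
WORKDIR /app
COPY . .
# tini runs as PID 1 and forwards signals (SIGTERM etc.) to the app
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
```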
Charles Max_Wood:
Yep, and what you were looking for was SIGTERM. SIGTERM, SIGINT, SIGKILL.
Lucas_Paganini:
Thank you. Yes, that's it. SIGKILL
Charles Max_Wood:
um
Lucas_Paganini:
is the one that kills immediately, and SIGTERM is the one that gives you
Charles Max_Wood:
Yeah.
Lucas_Paganini:
some time to die.
Charles Max_Wood:
Yeah. It basically says, um, hey, please, please die, for lack of a better way of putting it, right? It's like, finish what you're doing and then quit.
Lucas_Paganini:
Yeah,
Charles Max_Wood:
Right?
Lucas_Paganini:
one is like shot in the head, the other is shot in the chest, so like you still
Charles Max_Wood:
Yeah.
Lucas_Paganini:
have time to say.
Charles Max_Wood:
Yeah, but if your data is somewhat fragile, like your data management or something, then SIGTERM is the one you want. But if it's a completely runaway process and you can't get it to quit any other way, then SIGKILL is what you reach for.
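The difference is easy to see in a small shell sketch, a standalone toy demo rather than anything from the deployment setups discussed here: the trap handles SIGTERM and exits cleanly, while SIGKILL would bypass the trap entirely.

```shell
#!/bin/sh
# A toy worker that shuts down gracefully when asked to
worker() {
  trap 'echo "SIGTERM received, finishing current work"; exit 0' TERM
  while :; do sleep 1; done
}

worker &
WPID=$!
sleep 1              # give the worker time to install its trap
kill -TERM "$WPID"   # polite request; "kill -KILL" would skip the trap entirely
wait "$WPID"         # returns 0 because the trap called "exit 0"
```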
Lucas_Paganini:
Thanks.
Charles Max_Wood:
And when your system shuts down, incidentally, it'll send a SIGTERM, typically. But it usually has a timeout. So once it's sent the SIGTERM, if the process doesn't quit within a certain amount of time, it sends the SIGKILL and reboots. Anyway, so that's really, really interesting. Man, I love talking about this stuff, because then I just get into, okay, so could I do this with my apps, right? Because I'm a huge fan of Docker. We did the Docker Deep Dive book for the book club. And so I see a lot of the advantages you're talking about. Part of the reason that I do it the other way is mostly because I always have, and a lot of the projects that I'm talking about here were set up that way before I really got into Docker, right? And so I'm really digging that. So
Lucas_Paganini:
I love
Charles Max_Wood:
one other
Lucas_Paganini:
Docker
Charles Max_Wood:
question I guess
Lucas_Paganini:
so
Charles Max_Wood:
I have,
Lucas_Paganini:
much, man.
Charles Max_Wood:
yeah, it really is. So the advantage, at least in my head, is you avoid the whole works-on-my-machine thing, right? It's, hey look, this thing runs the way that it runs, and it's got all of the same setup in both systems. And so if there are differences, they're relatively minor, to the point where you almost never see them. As opposed to, oh well, I was running this with the macOS versions of these libraries, and then it turns out that there's some vague difference between the two, and so it doesn't run, or it's not as memory efficient or whatever on Linux, or vice versa. And so it's nice. The other thing is that if I don't have the same version of Node.js or whatever installed that you do, then sometimes there are differences there. But Docker will install the same version in my container as it does in yours, right, off of the image. And so, yeah, it's really a terrific way to go.
Lucas_Paganini:
If I may, Chuck, sorry to cut you off, but
Charles Max_Wood:
Yeah.
Lucas_Paganini:
one thing that I think is also important to mention to the audience is that all those problems don't exist just when you're deploying. They also exist during development. And this sounds
Charles Max_Wood:
Right.
Lucas_Paganini:
like I'm just saying something obvious, but I feel like people don't realize that, because I have never seen anyone, other than us at Unvoid, considering how to make their repository run locally for all their developers. I mean, we have developers using Linux, developers
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
using Mac, developers using Windows, and maybe running your system locally doesn't work depending on the operating system that
Charles Max_Wood:
All
Lucas_Paganini:
they
Charles Max_Wood:
right.
Lucas_Paganini:
are using. So we also have Docker set up for development environments. If you can run it locally without needing Docker, then that's better, because it will probably be more performant for local development. But if for some reason you can't, and you're trying to run a project that was built by Unvoid, well, it's more for internal projects, so I don't think that anyone from the
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
outside would be able to see that. But for example, say we hire a new employee, and for some reason he works on a different operating system, whatever. He can still run everything, actually, because we also have a Dockerfile that is meant to be used for the development environment. So if you just
Charles Max_Wood:
I'm just
Lucas_Paganini:
run,
Charles Max_Wood:
gonna ask.
Lucas_Paganini:
yeah. So you can
Charles Max_Wood:
I was
Lucas_Paganini:
run
Charles Max_Wood:
gonna
Lucas_Paganini:
that
Charles Max_Wood:
ask that,
Lucas_Paganini:
Docker
Charles Max_Wood:
yeah.
Lucas_Paganini:
image, and then it will set up all the dependencies that you need to work on that repository as a developer. The issue there is that it's
Charles Max_Wood:
Yeah.
Lucas_Paganini:
usually less performant in terms of managing files. For example, if you're on macOS, it's really a big issue, because if you do an npm install it will be so much slower, just because it has to save a lot of files to disk, and the way the file sharing between macOS and the Docker VM works is very, very slow. But it works.
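A sketch of the development-side setup being described, assuming Docker Compose with a bind mount; the file names, ports, and service name are illustrative. The bind mount is exactly what makes npm install slow on macOS, since every file write crosses the host/VM boundary.

```yaml
# Hypothetical docker-compose.yml for local development
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev   # a dev-specific Dockerfile, separate from the production one
    ports:
      - "4200:4200"                # the Angular dev server's default port
    volumes:
      - .:/app                     # bind mount: source edits show up inside the container
      - /app/node_modules          # keep node_modules on the container side for speed
    command: npm start
```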
Charles Max_Wood:
Right. And that's one thing that I've run into that's different between my development environment and my production environment: the development environment typically has a watcher that watches for file changes and rebuilds, right? And I don't want or need that in production. And so that's why you would have two setups, right? Your development is, hey, refresh the build whenever things change; production is, build the files, put them where they've got to go, deploy it that way, and then it just statically serves the assets. So I'm curious, I haven't really looked at this, but with Vercel or Netlify, can you pass them a Docker image instead of saying, here's the Git repository?
Lucas_Paganini:
Honestly, I haven't personally looked into Vercel and Netlify. My company as a whole has, so I have developers that have used Vercel and Netlify, but they just used them for initial versions of the system. So they were just bootstrapping and they wanted to go fast, so maybe they were in a hackathon stage, so we just get together
Charles Max_Wood:
Uh huh.
Lucas_Paganini:
and do things fast. Once the project matures and gets to a state where we want to actually support and maintain it and document and polish all the edges, then we use the same structure in all our projects, and the structure that we use involves pushing things to Heroku. So I don't know how you can push your Docker image to them; I just know the workflow that I have with Heroku. They probably have their
Charles Max_Wood:
All
Lucas_Paganini:
own
Charles Max_Wood:
right.
Lucas_Paganini:
Docker container registry and then you push your image there and they will simply run that image. I imagine that because I
Charles Max_Wood:
So...
Lucas_Paganini:
can't imagine it getting easier than that, you know?
Charles Max_Wood:
So I just looked it up, neither Netlify nor Vercel will deploy Docker images.
Lucas_Paganini:
That's a shame.
Charles Max_Wood:
So, yeah. So unless it's newer than the information I'm finding on the internet. Yeah. So I want to discuss a few other deployment options, because I like the way you've got that set up, and honestly I'm really tempted to move some of my applications to this kind of a setup, just because of the ease of setup, the ease of running, all that stuff. And this is just another thing that comes out of using Docker: a lot of the Docker setup, what you do is supply the config through the environment into the Docker container, right? And so you can tell it what environment variables to set up and stuff, just like a VPS. The thing that gets interesting there is that I still have to provide that. And so what I often wind up doing on a VPS is putting some kind of config file on the file system or something like that. And the difference is that with Docker I can manage the secrets in my deployment system, right? So if I'm deploying to Linode or to Heroku or to something else, they're all stored in the cloud. And if somebody hacks into the Docker image, they're not gonna get that information, not as easily anyway as they could just by hacking into the VPS and pulling that config file off, and all of a sudden they've got access to my AWS buckets and my database password and the whole nine yards. So that's another thing that's kind of interesting with a lot of this stuff: it provides kind of an extra layer of security. And the other thing is that typically, in your Dockerfile, you tell it to base it on, like, ubuntu:latest or something, right? And so if there is an update to the Ubuntu image, right, they find some zero-day security vulnerability, you pick up the fix on your next build, right? Because it'll pull it down, it'll add that layer to your image, and then build from there. And so you get the advantage of the update.
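That environment-variable pattern can be sketched in one command: the config stays out of the image and gets injected when the container starts. The variable values and app name here are placeholders.

```shell
# The image itself contains no secrets; they are injected at run time
docker run \
  -e DATABASE_URL="postgres://user:pass@db.example.com/mydb" \
  -e AWS_BUCKET="my-bucket" \
  my-app
# Hosted platforms do the same thing through their config/secrets settings,
# so the values live in the deployment system rather than on the box.
```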
One thing that I found with a lot of these VPSs is that I have to periodically either log in and run updates, or I have to set up some kind of auto-update that doesn't always capture everything that needs to be updated. And so I'm really digging this for a lot of those reasons as well. But yeah, let's talk about some of the other deployment options. So you can deploy your app directly to Heroku without the Docker. DigitalOcean has one, the App Platform, which I've also used, and there are a bunch of other ones. The one that actually ran the most seamlessly, for all the complaints that I put out about it, was Heroku. The App Platform, for some of the stuff that I was doing on it, just had some rough edges. And I'm sure they've figured a lot of them out by now, but it was just a newer product when I was using it and they just hadn't quite nailed down all of the different things that I needed it to be able to do seamlessly. And so I continued to deploy to VPSs. And then, yeah, Netlify and Vercel also, right? They grab your stuff and build it in a similar way, without necessarily needing a Dockerfile at all.
Lucas_Paganini:
Yeah.
Charles Max_Wood:
I'm just curious, have you done much of that? Was that what you were doing before more or less?
Lucas_Paganini:
There was a period where, and that's how I initially got into Heroku, which was by
Charles Max_Wood:
Hmm.
Lucas_Paganini:
using the automatic build detection system that Heroku has. So at first, that was really good for me because, quite frankly, at that time, I knew shit about deployment, so I needed it to be
Charles Max_Wood:
Ha
Lucas_Paganini:
very,
Charles Max_Wood:
ha ha!
Lucas_Paganini:
very easy. And Heroku made
Charles Max_Wood:
Well,
Lucas_Paganini:
that
Charles Max_Wood:
that's
Lucas_Paganini:
super
Charles Max_Wood:
how we all
Lucas_Paganini:
easy.
Charles Max_Wood:
start. We make it work and then figure out how to make it good, right?
Lucas_Paganini:
Yeah, exactly, exactly. Thanks for making me feel good about this. And yeah, so at the time, I just needed something that would be easy to deploy. And Heroku offered me that through this format of automatic environment detection. So Heroku would automatically detect that it was a Node.js repository, and it would
Charles Max_Wood:
Hmm?
Lucas_Paganini:
install the dependencies, run the build command, and run npm start. And that's how I got into Heroku. Over time, I got more comfortable with Docker and I started realizing the issues that I would have in the current deployment system that I had with Heroku. And then I started moving into a containerized system, still in Heroku because I noticed that I could do that there and it was fairly easy.
Charles Max_Wood:
Thank you.
Lucas_Paganini:
to encapsulate the application. But at the beginning, I was using that automatic detection system. Yes. And I think there are more considerations to be made about this, because I don't want to make the case that the best solution is the one that I have, because, for example, containers don't fit every model, and I will get into that now. What I mean is edge computing, for example.
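As a sketch of what the auto-detection Lucas describes keys off of: Heroku's Node.js buildpack looks for a `package.json` at the repo root and then runs the standard lifecycle. The field values here are illustrative, not from any real project.

```json
{
  "name": "my-angular-app",
  "scripts": {
    "build": "ng build",
    "start": "node server.js"
  },
  "engines": {
    "node": "18.x"
  }
}
```

Seeing this file, the buildpack installs the dependencies, runs the `build` script, and launches the app with `npm start`, which is exactly the flow Lucas describes getting started with.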
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
So, I haven't seen any environment that allows you to just be serverless, where you just have functions in the cloud, and that also lets you specify the Docker environment in which those functions should run. I have never seen that, because it doesn't fit the cloud functions model. With cloud functions, you choose one of the popular environments, so you have like a Node.js environment, a Ruby environment, a Java environment, I don't
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
know, but like you choose the environment and then you write your code such that it will run in that environment, period.
Charles Max_Wood:
All right.
Lucas_Paganini:
If you wanted a custom Docker image to run your function, then I don't think it would make sense for the cloud provider to allow that, because they would have to dedicate more time to instantiate your environment to run it, and they don't want that. What they want is to have a single machine that can quickly load, spin up the environment, and run multiple functions from different applications, and a custom image doesn't fit that model. So if you're looking into serverless and just cloud functions, then I don't think you will be able to have a custom Docker environment that you push
Charles Max_Wood:
Right.
Lucas_Paganini:
to production; you have to use their environment. But if you are running actual servers, if you have an entire machine that is running all the time just for your application, or not an entire machine but at least a container instance that is running just your application, then I would highly recommend using Docker instead of maintaining a continuously running VPS, because you would have all the problems that you were mentioning, Chuck, which is that you have to periodically do OS updates. In the beginning it sounds easier, but then you start losing so much time keeping that up and doing maintenance on it. The VPS is easier to get started with, but in the long term Docker is so much easier to maintain, and you will spend a lot less time if you just put things into a Docker container and have ephemeral instances. So you push a new
Charles Max_Wood:
Mm-hmm.
Lucas_Paganini:
one and then you kill the old one. So you have to build your entire repository knowing that whenever you push a new instance, it will kill the old one. You don't want to kill the database, so you have to have backups; all of those things you have to have really well set up so that you don't have any issues when you kill your old instance
Charles Max_Wood:
Right.
Lucas_Paganini:
to spin up a new one. But once you do that, everything else becomes so much easier than having to update your OS and maintain a single server instance. But yeah, if you want serverless and edge functions, then Docker isn't an option there, so it is not a one-size-fits-all solution, unfortunately.
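One way to sketch the ephemeral-instance model Lucas describes is with Docker Compose: the app container is disposable and replaced on every deploy, while state lives in a volume or an external database that survives replacements. Service names, the registry URL, and variable names here are hypothetical.

```yaml
services:
  app:
    image: registry.example.com/my-app:latest   # replaced on every deploy
    restart: always
    environment:
      - DATABASE_URL=${DATABASE_URL}            # injected, not baked in
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data         # persists across app replacements

volumes:
  dbdata:
```

A deploy then becomes roughly `docker compose pull app && docker compose up -d app`: the new container comes up, the old one is killed, and the database's volume is untouched, though backups still apply, exactly as Lucas warns.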
Charles Max_Wood:
Yep. Makes
Lucas_Paganini:
And
Charles Max_Wood:
sense.
Lucas_Paganini:
I think that I underestimated how many topics we would have to cover about deployment. I mean, we haven't even talked about half of everything that goes into a deployment process, and we're already
Charles Max_Wood:
Right.
Lucas_Paganini:
at more than an hour of podcast. I think we're going to have to have a part two and three
Charles Max_Wood:
Yeah.
Lucas_Paganini:
and four, maybe 10 parts.
Charles Max_Wood:
Well, the thing is, you know, it gets complicated, right? And I think a lot of times we just kind of underestimate or overestimate how complicated it is. I think for people who aren't familiar with things like Docker or some of the other tools that we've talked about, it sounds really complicated, right? And then for the people who are familiar with Docker, it's like, you understand a handful of concepts about Docker, and then it's, oh, this is pretty simple stuff. But then it's, okay, what about this? What about this? What about that? Right? I want it to transfer gzipped, because it decreases the bundle size. I want it to do this kind of minification, not that kind. And all of these other things that go into deployment, right? How do I know that it deployed properly? How do I know what version is running out there? I mean, there are all kinds of things that go into this, and to a certain degree a platform abstracts a lot of that away, but still. And then it's, okay, well, how do I run it out there? Do I have to run it on like a Kubernetes cluster in the cloud? Or, you know, Heroku sounds pretty easy, right? But I don't know if I can afford Heroku if I start to scale, right? And so there's stuff there too. Because honestly, I've been looking at, okay, how would I do this with my Rails apps, for example? And I'm realizing that, with some of the dependencies and the way some of that is set up, it's not just a simple push-the-image-into-a-Docker-registry and then pull it back out and stand it up, right? Because I've also got to stand up and manage the database engine, and I have a Redis engine for my job queue, and then I've got workers that I've got to run too, and yeah, anyway. It's really interesting to just see how this can all go.
Lucas_Paganini:
Yes, it can definitely get super complex. And we haven't even mentioned
Charles Max_Wood:
Yeah.
Lucas_Paganini:
what I do when I have something that needs to scale really, really fast. For example, if you really have a lot of microservices, then how do you deal with that? Because at some point you can just have like a frontend and a backend, and then you push them and all good, they're open to the public. But what if you have instances that should be only on a local network? So you have, for example, a Kubernetes setup, and you have microservices that should be accessible only inside the local network. They should not be accessed directly over HTTPS or anything else by the end users; it's something internal to the backend. At that point I would go to a cloud platform and have a Kubernetes cluster. So it gets so, so, so much more complex. And I think we'll have to talk about this again, because we haven't even touched on CDNs and DNS. Like, there's so much
Charles Max_Wood:
Yeah,
Lucas_Paganini:
to cover.
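The internal-only microservice setup Lucas mentions is typically done in Kubernetes with a Service of type ClusterIP, which is reachable from inside the cluster but gets no public address. The service name, labels, and ports below are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments-internal
spec:
  type: ClusterIP        # in-cluster only; no external IP is provisioned
  selector:
    app: payments
  ports:
    - port: 80           # what other pods connect to
      targetPort: 8080   # what the container listens on
```

Other pods inside the cluster reach it at `http://payments-internal`, while nothing outside the cluster can; only the services you deliberately expose through an Ingress or a LoadBalancer-type Service are public.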
Charles Max_Wood:
I did want to get into CDNs, but yeah, we'll have to get into that later. I think this is probably a good stopping point if we want to move into the other segments of the show. And then, yeah, let's just plan on talking about this in a few weeks if we have an open spot.
Lucas_Paganini:
Definitely. Okay. All right, so regarding promotions, what do you have going on, Chuck?
Charles Max_Wood:
All right, so I've been talking about, like, the book club and some of the video series that I've been working on putting out. I think I've also mentioned, like, conferences and workshops and stuff like that. I have a business coach that kicked my butt and basically said, you're trying to do way too much, right? 'Cause I was like, I can't even get to half the stuff I need to do. And he said, well, then you need to eliminate half the stuff you need to do, basically. So I've been moving all of the premium content on Top End Devs over to a platform called circle.so. I've moved the book club over, so if you have a full membership on Top End Devs, you've also been invited into the Circle setup. We're also, incidentally, moving all of the show prep over to Circle. So yeah, if you're a member in Circle, you'll actually see posts come up that say, new episode scheduled for Adventures in Angular with so-and-so. Like, we have Danny Perettiis and Dan Wahlin that have set up episodes, right? So those are already in Circle right now. And so then we can start having conversations in there and say, hey, you know, hey Danny, here's an article on the thing, right? You can just say, hey, I think Danny should talk about this, or, you know, same thing with Dan Wahlin. So as we get those kind of lined up and moving, that's one instance where you can come and get involved. Now, you can get into it for free; I've left that open just for the community there. One of the things I'm looking at doing is actually just recording a quick video saying, hey, here's what I worked on today, and you just get that as part of the community. And then all of the paid things will be in there. So as I put out courses and put out the video series and stuff like that, those are gonna go in there. My focus at this point is just getting the video series going and then having regular meetups for each of the series.
The thing that people keep coming at me with stuff for is React. And so, you know, I'll probably do a React series before an Angular series, just by virtue of where people are at. But I'm going to put up a JavaScript series, I'm going to put up a DevTools series, and we're going to be going over a whole bunch of Docker stuff. As part of it, Docker, Git, VS Code, you know, those kinds of things are all going to be in there. I'm also planning on covering GitHub Actions as I learn about them. So keep an eye out for that. I'll announce it once it's up; it'll probably be within the next few weeks. So there's that. And then the other thing is, I am looking for another contract. The contract I've been working on for the last year and probably three months is coming to an end, so yeah, I'd like to be able to pay the bills a little longer than that while I build up Top End Devs. So if you know anyone who needs a highly experienced Rails developer, or a not-so-highly-experienced Angular developer who's a highly experienced Rails developer, I'm your guy, right? Just let me know and I'm happy to pick up that work. I pick up stuff pretty quickly, so what I don't know about Angular I can learn. Or, like I said, I'm spending a bit more focus on React, so if you're doing React and your company needs another developer, let me know. My email is chuck@topenddevs.com.
Lucas_Paganini:
Awesome, awesome. Well, I generally have two promotions: I always promote Unvoid, of course, and my web animations course. Today, I think I will mostly focus on Unvoid, because everything that we said here today is so close to what we do. I mean, it's not just close, it is exactly what we do. So if you listened to this and you thought, oh, I really wish that my project was set up that way, I wish that I had people that could help my company do that work because we don't have someone that knows how to set up those things, then, well, that's easy. You can just go to Unvoid.com and talk to us, and if it's not a fit, we can simply tell you, like, hey, I don't think we're the best option for you. So there's literally no risk. We are not the kind of company that will try to push a sale, that will try to force a sale on a client where it doesn't make sense. If we realize that, hey, actually you might need something that another company could do better, then we'll just tell that to you. But the cool thing is, if you want something that we feel we can really do well for you, then we can work together. So if you want a React project, then sorry, but we don't have that expertise. But if you want an Angular-based project, if you want to set up Nx, if you want to set up CI/CD automations, we can do all of that for you, for your company, for whatever project you are working on. So check us out at Unvoid.com. Regarding picks, I have one pick this time. It is a Bluetooth speaker from Ultimate Ears. It's called the Megaboom 3. It's been around for a while now, it's not a new speaker, but it's a really, really, really good one. I've had it for a while now and it has never failed me. I'm showing it here on the screen if you're seeing this; I have the red one, and there are multiple colors. And it's so cool. I mean, it's 360-degree sound, so you can hear it in all directions.
You don't have to point it at your ears; you can just put it in the middle of the living room and everybody's listening to high-quality music or whatever. And it's also waterproof, and not just waterproof, but... I actually forgot the word, but it won't go down into the water, okay? So, it will float. Jesus,
Charles Max_Wood:
Right.
Lucas_Paganini:
I forgot the word "float." So if you just throw it in a pool, it will float and keep playing music, which is super cool. Have I ever thrown it into a pool? Never. Will I ever do that? Probably not, because I don't want to test it out, but it seems like it does that; the commercials show that, so I trust it. It's highly resistant. I've had this for like four years and it has never failed me. So yeah, that would be my pick: the Ultimate Ears Megaboom 3.
Charles Max_Wood:
I think my wife has the Mega Boom 2. I don't know, we've had those for a long time. And yeah, it looks a bit different than that, but it's the UE Mega Boom, is what she's got, one of those. So they're great speakers. They sound terrific. I'm gonna jump in with some picks. I usually do a board game pick, and I'm not going to break with that. The card game is called The Crew: The Quest for Planet Nine. It is a three-to-five player game. BoardGameGeek says two-to-five players, but the box says three-to-five, and I don't know what it would look like with just two players. Anyway, the way you play it is, if you've played other trick-taking games where you play the highest card to take the trick, it works the same. There are four colors: pink, blue, green, and yellow, and then there's a fifth suit, which is the black suit, which is the rockets, and the rockets are the trump, right? The rockets only go one, two, three, and four; the rest of the suits are one through nine. Whoever has the four of rockets goes first, and they become the commander. And then what you're doing is you're trying to complete quests every hand. And if you don't complete all the quests, then you have to start over that round. So there are two decks: one is the quest deck, and the other one's the cards you play with. It'll tell you what quest cards to put out, and sometimes they have little chips on them. So you'll flip over like three quest cards, and then the commander takes one, the person who had the four, and then it goes to the left and everybody else takes one. And those are the cards that they need to take in their tricks. And then if one has a chip on it that's like a one, it means that one has to be taken first, right? And if there's one with a two on it, it has to be taken second. It doesn't have to be taken in the first trick and the second trick, just first, second, third, fourth, fifth. There are also ones with arrows on them.
Those can be taken in order: the one with one arrow has to be taken before the one with two arrows on it. But if you have a third card that doesn't have a chip on it, that one can be taken in any order within those, right? So it could be taken first, it could be taken between them, or it could be taken last. And anyway, it's really fun. I mean, I basically explained the whole thing to you. There's a book that has all the quests in it, so you just start on quest number one and work your way up. BoardGameGeek ranks it at a weight of 1.98, so a casual game, right? I basically explained the entire game to you right there. It says 10-plus on the age; that's probably about right. We were playing with the kids and they were fine, though I don't know if my seven-year-old would be able to figure out the strategy. And there's a lot of talking, right? Because you can't talk about what's in your hand or what you're looking for, and you can't strategize in a way that would let you influence people to do a thing, but you can say, you know, whoever has this could play this if somebody else did this, right? And so you're constantly discussing how to get people the right cards. And anyway, it was fun. It says it's 20 minutes per round; that's probably about accurate too. Sometimes the cards line up with people's hands and it's a 10-minute round, right? Because you go around three times and you've got all the cards, and so then you're done. But in some of them you really are kind of staring at your cards and looking at each other, because you've got six blue cards, and there just aren't enough tricks for it to go all the way around six times for everybody to capture those. So now you're trying to figure out, okay, if you take a pink trick and I can throw a blue card on it, then you can get your card, kind of thing. And so those ones tend to take a little longer than 20 minutes. But anyway, fun, fun game.
Really enjoyed it. Probably played like 25 rounds with my sister-in-law and her husband while they were here, with my wife. So I'm going to pick that. And then, you were talking about speakers and water, and that just reminded me. So I'm going to pick something about triathlon training: two pieces of equipment that I'm using. So I'm training for triathlons; I'm actually doing a triathlon a week from Saturday here in Utah. But anyway, when I go to the pool and swim, my workouts are on my phone, and I used to just print them off, but it was like I'm printing one off every third day. And then I don't want to leave it where it's going to disintegrate in the pool, and I don't want to read it off my bare phone while I'm in the pool, because of course you don't want your phone right next to the pool, because phones and water just don't mix. So I got one of those waterproof pouches. They almost look like Ziploc bags, except they have little tabs on the top that lock it shut and make it waterproof. And so I've been using that; I just slide my phone into it and have it on the side of the pool. Then, in the accessibility options on your iPhone, you can turn on what's called Guided Access. And then what you can do is tap the button on the side three times, and what it does is it locks the phone into the app that you're currently using. In other words, you can't exit the app, which means that if somebody walks by and sees my phone sitting there and tries to fuss with it, the only thing they can mess up is my workout app, right? They can't get into my contacts, they can't make a phone call, they can't do anything else with my phone. To get out, I tap the button again and then enter my passcode to exit. We figured that out because it's nice when you're putting a show on your phone for your kids.
It's like, okay, well, they can't get out of the Disney app or whatever, and so I don't have to worry about them goofing with my other stuff, messing with my calendar. So that's pretty nice. And then I got a set of headphones that are waterproof. They're bone conduction headphones. They are Bluetooth, but it turns out that Bluetooth doesn't travel through water; a couple of inches of water and Bluetooth doesn't work. So I can't stream music to the headphones, but I can load music onto the headphones. They've got eight gigs of storage. And so that's nice when I'm swimming, if I want to listen to some music or something. They work better with your earplugs in, and I swim with earplugs anyway, because I get swimmer's ear; I get water stuck in my ear and it takes hours for it to come out, which drives me crazy. So anyway, I really like those. I'll put a link to those in the show notes. They work really nicely. So anyway, those are my picks.
Lucas_Paganini:
Dude, you had a lot of picks this time, man. I liked that one to lock the cell phone, that was really interesting. I know a couple of people from my company that will be really happy to know about this, especially to keep, um, babies from playing with the cell phone, et cetera. Really
Charles Max_Wood:
right.
Lucas_Paganini:
interesting. Awesome.
Charles Max_Wood:
So handy.
Lucas_Paganini:
Yeah. Okay. Uh, I think that was it for today's episode. Thank you so much for sticking with us up until the end. For those who stuck with us up until the end, this is like Marvel movie credits: we will be rolling
Charles Max_Wood:
Ha ha.
Lucas_Paganini:
out new thumbnails for the episodes and they look so sharp. So
Charles Max_Wood:
Yeah, they look great.
Lucas_Paganini:
yeah, so check out the thumbnail for this episode and for the next ones that will come. Everything is revamped, and we will be doing even more and more, because the show, which is already the most popular podcast in the world about Angular, is about to get even better. Every time, we want to make this like the default thing that every Angular developer listens to. So yeah, we'll be doing a lot of improvements, and if there's anything that you think could be better in the show, we want to hear from you too. You can send your feedback either to me or to Chuck. You can reach me on Instagram; the link to it is in the description. For Chuck, you can also see his Twitter and other social media in the description. So yeah, if there's anything that you think could be better, please let us know. This show is made by us, but it is for you. So let us know anything that could be better, and we will see what we can do to make it so. All right? Thank you,
Charles Max_Wood:
Yeah,
Lucas_Paganini:
and
Charles Max_Wood:
absolutely.
Lucas_Paganini:
I'll see you.
Charles Max_Wood:
One other thing I just wanna
Lucas_Paganini:
Go ahead.
Charles Max_Wood:
add with that is, I mean, I've been podcasting for 16, 17 years. A lot of people just assume that I just know what to do. You know, it's like, Chuck obviously knows, he's heard everything. But what I'm finding as I talk to people is, like the thumbnail thing, right? It just wasn't something that was on my radar. And so I like the feedback, I like being pushed on this stuff. So don't feel like it's, oh, well, you know, they're pros and whatever, right? Feel free to give us feedback: this would be better, this would be nice, I like this, I don't like that. Because we're constantly looking to improve, and it may be something that just hasn't occurred to us, something that I haven't heard or Lucas hasn't heard that really works for podcasting. So anyway.
Lucas_Paganini:
Definitely, definitely. All right, thank you and I'll see you next week. Bye.
Charles Max_Wood:
Max out everybody. Oh, I need to stop it. That's right.