Deploying Ruby on Rails Applications - RUBY 592
Dave and Valentino join this week's panelist episode to talk about Deployment in Rails. Dave begins by explaining the app deployment process and talks about deploying apps with MRSK. They also talk about some of the deployment tools you can use and things to consider.
Show Notes
Picks
- Dave - Hugging Face
- Valentino - Why LLaMa Is A Big Deal
Transcript
Valentino_Stoll:
Hey everybody, welcome to another episode of the Ruby Rogues podcast. I'm your host today, Valentino Stoll, and I'm joined by co-host Dave Kimura.
Dave_Kimura:
How's it going?
Valentino_Stoll:
And we're here to talk about all of the new, latest, and greatest things in Rails related to deployment. And if you haven't been looking at what's been coming out, what we're talking about is Docker and a Rails-centric way to deploy apps with it. I'll admit I have very limited knowledge of this. So Dave, do you wanna give us a quick rundown of this and how people are using it?
Dave_Kimura:
Yeah, so MRSK is a deployment utility that leverages Docker very heavily. And it takes, in my opinion, a lot of great practices, or best practices, as far as how you deploy. The thousand-foot view is: it'll first create a local Docker image of your application, and it'll push it up to a registry. Then, when it actually goes to deploy it, it puts Traefik in between the incoming traffic and the actual application on that virtual machine or bare metal server. So instead of just forwarding all the traffic directly to the running Rails application, it all goes through Traefik, which is a load balancer. And when you are deploying your application, it's going to first spin up another container on that machine, make sure that the application is up and running with health checks, and then it'll replace the old container with the new one, with Traefik updating to point all the traffic to the new one. So it's a deployment mechanism very similar to Capistrano, but instead of having to deal with bare metal servers and that kind of stuff, it's all pretty much Dockerized. So you can do this with any kind of environment that you would deploy to with Capistrano. What you can't currently do with it is leverage stuff like Docker Swarm, Kubernetes, or any kind of environment where the actual underlying virtual machine can go away at any given point in time and new ones get provisioned. So things like App Runner or Beanstalk would be out of the question too.
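For readers following along, here's a minimal sketch of the flow Dave describes, based on MRSK's documented config/deploy.yml format; the service name, image, registry user, and server IP below are hypothetical placeholders:

```bash
# Sketch of a minimal MRSK config; values are placeholders, not from the episode.
cat > config/deploy.yml <<'EOF'
service: myapp
image: myuser/myapp
servers:
  - 165.22.10.11             # VM or bare metal box reachable over SSH
registry:
  username: myuser
  password:
    - MRSK_REGISTRY_PASSWORD # read from the deployer's environment
EOF

# Build the image locally, push it to the registry, put Traefik in front,
# health-check the new container, then swap traffic over to it.
mrsk deploy
```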
Valentino_Stoll:
That's really interesting, that it went that direction. And I'm a little disappointed, to be honest, that it isn't a little more inclusive of all the current providers. And I understand that there's a heavy move-off-the-cloud movement starting.
Dave_Kimura:
Mm-hmm
Valentino_Stoll:
I don't know if I necessarily agree with it. It seems like, and maybe we can dig into whether or not this is even true, but my first impression is that it has just cut out a large chunk of how people are currently deploying. Is that not the case?
Dave_Kimura:
If people have adopted Kubernetes and that kind of stuff, then yes, that is the case. But I guess the question is, why are you using Kubernetes in the first place? If you have a simple enough monolithic application that has few dependencies: you have maybe some kind of storage API like S3, you have a database, Redis, maybe a full-text search engine if you've outgrown what PostgreSQL or something like that offers. So if you have a simple monolithic application, why are you even using Kubernetes? What is Kubernetes really bringing to the equation that makes you say, I'm so glad to have this orchestrator that I now have to maintain and troubleshoot and deal with?
Valentino_Stoll:
I mean, the first thing I think of is scaling. You know, Kubernetes definitely makes it easier, I think, to scale those services up and down. But yeah, maybe you're right. Maybe if you only have a handful of them, or, you know, probably three is the magic number, right? You have your web process, your background jobs,
Dave_Kimura:
Yeah.
Valentino_Stoll:
and maybe something else. Yeah, it's interesting. So I guess, why not Capistrano? Or rather, how does this differentiate from Capistrano, maybe we should start there. Capistrano, you know, is just a configuration tool that lets you use SSH to magically deploy your Rails application to the right Passenger stack, or however you want to configure that. Where do these start diverging?
Dave_Kimura:
So I think the biggest difference between the two is that Capistrano is going to do everything on the underlying host machine that it's getting deployed to. Whether that's a bare metal server or a virtual machine, it's not using Docker in any fashion. So when that happens, is Capistrano going to easily, or by default, be able to spin up a separate running instance of the application, do health checks, and replace the old running instance? Or is it going to be a lot more complicated to get that kind of functionality? So basically, you have zero-downtime deployments with MRSK versus Capistrano.
Valentino_Stoll:
I see. So, I mean, it sounds good. I do like that it's kind of zero dependency, or one single dependency, right? You need Docker to run it. So that aspect of it I do like. What is the experience like? Where do you even get started? How much effort is it to start fresh or convert an existing app? Have you had any experience with this?
Dave_Kimura:
Yeah, so I have recorded a couple of Drifting Ruby episodes on the topic specifically, and one of them is a free episode that you can go and watch today. And I'm pretty impressed with it. At first it was a bit, you know, like there's too much magic going on in the background, and that's always worrisome. But as you start digging through the logs and actually reading what MRSK is doing, it makes a lot more sense, and it's a lot less magic than you would think. The biggest magic piece is that you are compiling your application, or not compiling, you're building your application image, which is a Docker image that gets pushed up to the registry, and then your production environment pulls it down. And the nice thing about this is that I was able to destroy all of the example environments that I had running in preparation for the episode, because I wanted to make sure that the steps I'm doing are correct and have a nice presentation to show. And I was able to rerun the MRSK deployment, initializing the servers and everything, without having to change anything but the IP addresses that they were pointed to. And it was able to recreate basically everything. And so I think it is very powerful, because if you have a bare metal server, or if you're using DigitalOcean droplets or EC2 instances, then you're still able to be on the cloud using MRSK. But the idea is you don't even need to SSH into that virtual machine to get everything up and running. As long as MRSK is set up on the environment that you're running it from, whether that is a CI/CD pipeline, your actual computer, or something like that, then it's going to provision that machine entirely: it's going to install Docker and get that up and running, it's going to install the Traefik load balancer, which is just a Docker container, and it's going to deploy your application image. So you don't need to have an extensive amount of DevOps knowledge or Linux knowledge in order to get up and running with MRSK. And I say that with a huge asterisk that we'll talk about. All right,
Valentino_Stoll:
Yeah,
Dave_Kimura:
so.
Valentino_Stoll:
I was gonna say, I mean, how do you even start to think about security in this way?
Dave_Kimura:
And there's the asterisk.
Valentino_Stoll:
So I have so many questions. Like, how does the bootstrapping process work? If you just have a bare server, let's stick with, like, you know, a DigitalOcean droplet or something like that, right? And you just provision an Ubuntu box or something, right? That's just blank. Is that the process?
Dave_Kimura:
Yeah?
Valentino_Stoll:
And
Dave_Kimura:
Mm-hmm.
Valentino_Stoll:
then you'd what?
Dave_Kimura:
When you, in the MRSK configuration, there's a deployment YAML file where you basically give it the IP address, the public-facing IP address, or if it's all happening within a local network, the non-routable local IP. You give it that in the deployment YAML file, and it's an array, so you can have multiple servers that you're deploying to. Then it will SSH into there, install Docker if it's not already installed, get the Traefik load balancer up, and deploy the application. So there's very little that it actually does for us outside of the deployment. It doesn't do any server hardening. So if you created a droplet, and that droplet is just exposed to the world, and from there you just deploy your application and never think about it again, well, you're potentially leaving open a lot of security holes. So in the episode, I do go through and talk about some of the minimal things that I would do on those kinds of environments. One being installing and enabling UFW if you're on Ubuntu, which is a firewall program, so you can basically block all traffic and only allow certain traffic in through the network: traffic on port 80, which is what Traefik listens on, and traffic on maybe your SSH port. You want to make sure that you are not doing password authentication on SSH, that you are using some kind of RSA key, so, you know, nobody is going to be able to brute force that, really. And just a few other things, like running the security updates on the underlying OS and stuff.
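A rough sketch of the minimal Ubuntu hardening Dave describes; MRSK does none of this itself, and the exact steps will vary by environment:

```bash
# Hedged example of the baseline hardening discussed above (Ubuntu assumed).
sudo apt update && sudo apt upgrade -y   # run OS security updates

sudo ufw default deny incoming           # block all inbound traffic...
sudo ufw allow OpenSSH                   # ...except SSH
sudo ufw allow 80/tcp                    # ...and port 80, where Traefik listens
sudo ufw enable

# Key-based SSH only, so there are no passwords to brute force.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```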
Valentino_Stoll:
I see, so it's not exactly turnkey.
Dave_Kimura:
No, no. And I think that this is where maybe DHH has never had this need, because when you're deploying to a cloud virtual machine versus having your own hardware within your own data center or local network, there is an inherently different level of surface exposure for attack. Because on your own hardware you already have a firewall in place, and the world's not going to be able to access those machines. You would have to open up a pinhole, or a port forward, or a NAT translation from the world at your firewall to that machine, to then expose port 80 or whatever. So I do think that there are some things needed within MRSK to really make this a full-fledged deployment utility. But at the same time, I think it's also worth noting: if you were to go the Kubernetes route, is that really any different as far as, you know, the initial provisioning and server hardening? Because you again are creating EC2 instances, and those EC2 instances need some kind of access. So are those secure out of the box? Or is it a very similar problem?
Valentino_Stoll:
Yeah, I mean, I think the difference there is that there's tooling already set up to make those things easier to do. Right? I haven't experienced Kubernetes enough, but, you know, there's the AWS command line, and Kubernetes has its own kubectl that lets you make your own configurations and setup. So you could automate a lot of that lockdown, which I know is just a matter of time for MRSK to get there. But I see those as being advantageous, right? Where, you know, if you're a team that's already using Kubernetes, it's still hard to tell what the advantage of using MRSK would be. And maybe that's not the target audience, right?
Dave_Kimura:
Yeah, I think it's for if you want a simpler infrastructure. Because I do think that there's a lot of overhead with Kubernetes. You know, if you were to install Kubernetes on, let's say, a cluster, so you have three different virtual machines, there's a lot of network traffic talking between them, and you also have the resources of Kubernetes itself running on there as well. And, you know, it might not be worth it. Especially after one year, when your certificate expires, and then you're unable to deploy because you can't, you know, kubectl into there to make a deployment, because the certificate is expired. You know, how many people have encountered that and immediately knew, oh, I need to go in here, run the kubectl command to update the local certificates, pull down that profile or that cert on my local machine, and rerun the deployment? How many people are going to be able to do that without having to look it up on Google, do research, or refer back to documentation? So I do think that there is a lot of added complexity in things like Kubernetes. And I run Kubernetes at home, so don't get me wrong, I don't hate it. But I don't think it's the end-all answer to everything, especially for smaller organizations that maybe don't have a full DevOps and IT team to manage that kind of stuff.
Valentino_Stoll:
Yeah, I mean, you make a good point. There's a reason why Fly.io and Heroku and all these companies were so successful in the Ruby community: it's hard to deploy an application for Rails or Sinatra or something like that, where, you know, Rack needs some extra setup and configuration with most web servers to get it set up and taking requests. So it makes sense that Rails should have its own tooling to make that easier. When I first saw this, I was almost hopeful that it was Pow.
Dave_Kimura:
Ha!
Valentino_Stoll:
I don't know if you remember Pow, but
Dave_Kimura:
Yep.
Valentino_Stoll:
the former local deployment tool,
Dave_Kimura:
Mm-hmm
Valentino_Stoll:
a way to basically just get your application served on your local network more easily, through custom domain names. I was hopeful that that was what this would be: I take this, I execute it, and then suddenly I could just say, OK, I'm going to use my local machine and temporarily have this deployed and accessible.
Dave_Kimura:
Yeah. Puma-dev replaced Pow, in my opinion. If you've not heard of Puma-dev, it's
Valentino_Stoll:
No,
Dave_Kimura:
very
Valentino_Stoll:
what's
Dave_Kimura:
similar
Valentino_Stoll:
that?
Dave_Kimura:
to Pow. It's made by the Puma team, I believe, and it allows you to do very much the same stuff. But what you get out of the box with Puma-dev is SSL, and you also get support for WebSockets. So if you do anything with ActionCable or anything like that, it works out of the box. I've kind of since moved on from Puma-dev once I started using Docker, because I don't think they play well together. But one thing that I loved about Puma-dev is that if I go to that domain name, whatever it gives me, like, you know, example.local, then it'll automatically spin up that application, starting the Rails server and stuff. I don't have to do that manually. So that's a feature it had back in the day, but right now my normal development workflow is with Docker and a Docker Compose file. And my production deployment mechanism for a lot of my hobby-ish apps is actually Docker Swarm. The reason I like Docker Swarm is because it uses a Docker Compose file. So I'll have a Compose file that looks very similar to what I'm doing in development, and a Dockerfile that looks very similar to what I'm doing in development, but with some environment variable changes and that kind of stuff. I'm basically able to have a development environment that's as close as possible to the production environment I'm deploying to. That's why I like that setup, and I actually have Rails templates that generate the Dockerfile and Docker Compose files for me whenever I'm creating a new app. That's my preferred way. But if I did want to branch each application out into its own infrastructure, then I think I would definitely go the MRSK route, because I could just create the virtual machines that I need, not even need to log into them, just grab their IP addresses, make sure that I can SSH into them, and then run the MRSK deploy against them, and the application would be up and running in just a few steps.
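For context, the Docker Swarm workflow Dave mentions looks roughly like this; the file and stack names are hypothetical:

```bash
docker swarm init                                     # once, on the manager node
docker stack deploy -c docker-compose.production.yml myapp
docker stack services myapp                           # verify the running services
```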
Valentino_Stoll:
Yeah, that sounds pretty cool. How easy is it then to just deploy a local, you know, a one-off project? What is that process like? And I was reading that MRSK is not really compatible with that Docker Swarm setup. How do you feel about the differentiation there?
Dave_Kimura:
And I don't know if I would want it to be, you know, simply because MRSK makes a few assumptions. It assumes that you do have some networking and IT experience, and that you're going to be able to create your own infrastructure. What it basically doesn't want you to have to worry about is the actual application deployment process. You still have to worry about the other things: provisioning and hardening a virtual machine, getting your database up and running, being able to handle the networking and the NAT translations, the DNS stuff. You still have to do all of that. So it's not a free DevOps person, like someone such as Render or Fly.io would offer. There is still a bit of a knowledge requirement. But I will say it is sure as heck a lot easier than trying to get this all set up and deployed within AWS EC2 instances by yourself. And it's going to be a lot more repeatable. Like I said earlier, I was able to destroy the entire application environment, the virtual machines; I got a new load balancer, two new virtual machines, and a database server reprovisioned without even SSHing into those virtual machines. I was then able to just run my MRSK deploy again against those two new IP addresses, and it deployed everything and the application was live again. I do show those steps fully in the video on being able to set all that up, and it's really not that much. There was a recent article from someone at Fly.io that basically ripped into MRSK, and I think they've since retracted it because the community kind of called out that they work for Fly.io. But I think that's kind of the thing where this utility came out and it scared them enough to say, here's why you shouldn't use MRSK. And to me that kind of gives it more legitimacy, especially if you then retract the article after enough people saw it. It tells me, well, yeah, MRSK is something to be afraid of if you're in that deployment space. But what they should have focused on, instead of how many steps it took to get MRSK up and running versus Fly.io or any of that stuff, is: we're handling all the DevOps for you. You don't have to worry about networking. You don't have to worry about database provisioning. You don't have to worry about server hardening. We take care of all that for you. All MRSK takes care of for you is application deployment; not server provisioning, networking, or hardening. None of that. That's all still up to you.
Valentino_Stoll:
Yeah, that's an important distinction to make. That's one thing I did like about it: its encapsulation, right? As an application ecosystem, you can move it where you want to. It definitely makes a lot of sense. I guess I'm still trying to find its advantages over other deployment packaging mechanisms, like Paketo or something like that, that kind of make this packaging of application deployments easier. Where do you see that? What are some advantages of using MRSK over, you know, something like a build platform or Kubernetes? I mean, I know that they package differently, but what is advantageous specifically about MRSK over packaging up the application deployment process?
Dave_Kimura:
Yeah. Before I answer that, I do want to note that they did not retract the article on Fly.io. I did link to it here in the show notes. But the Reddit post that was created around it, I think they did remove. So I just want to clear the air there. It's an interesting article, and it's good to know what's out there and stuff. But back to your buildpack question. So, on infrastructure as a service, I think many have tried to recreate what Heroku has offered. And one of the things that they had that's been very successful is the buildpack. And the buildpack is, and keep me honest here, Valentino, because I don't know too much about buildpacks, but it's basically a set of instructions where, for most Ruby on Rails applications and these kinds of things, it's going to check to see if you have Yarn installed or a package.json or whatever, and then it'll install Node, and it'll install a lot of these things in a Docker image for you so you don't have to worry about it. And I think for the most part that kind of stuff works pretty well, until something deviates. So what if, for your particular application, you're doing something with videos and you need to have that library on there? How easy, with Fly.io, with Render, with Heroku, with any of these platform-as-a-service offerings, is it going to be to add in those kinds of dependencies? Maybe you're able to get it to work on a specific version of a Docker image. But maybe you need to do some hardware acceleration. How the heck is that going to work on Heroku? So I think MRSK is allowing you to make these decisions. Especially with Rails 7.1, where we're going to get a production Dockerfile by default for new applications, one that's been tailored with best practices and to work with MRSK, we're going to get all of this out of the box, plus the ability to make our own decisions and to configure things how we need, when we need it, instead of trying to figure out the existing toolset, deviating from it, and trying to get things to work. You're muted, Valentino.
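As a hypothetical example of the kind of customization Dave is pointing at, adding an OS library (say, FFmpeg for video work) is a few lines in a Dockerfile you control, where a buildpack would have to be taught about it:

```bash
# Hypothetical Dockerfile addition; the package choice is illustrative only.
cat >> Dockerfile <<'EOF'
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y ffmpeg && \
    rm -rf /var/lib/apt/lists/*
EOF
```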
Valentino_Stoll:
I was going to say that it makes a lot of sense. But to go back to your point of build versus ongoing running, which I think is kind of the crux of why this was created: it's trying to get away from the cloud process of clone, swap, and destroy, where you spin up a whole new node alongside all of the existing ones, add it, switch traffic over to the new application, and then once everything is settled, drop the other one.
Dave_Kimura:
Yeah.
Valentino_Stoll:
To me, it doesn't make any sense, but it solves the problem. It does make it easier to do those things, but it's like, why are you wasting all of those resources when you don't necessarily need to? So I'll go back to MRSK and a couple of points that I saw as being kind of a huge benefit. One of the ones that stuck out to me was rolling back, you know, deploys,
Dave_Kimura:
Mm-hmm
Valentino_Stoll:
where that kind of just comes out of the box, where you can just run mrsk rollback on the command line, and it will automatically revert your application to the previous working state.
Dave_Kimura:
Yeah. And it's pretty interesting the way it does that. It looks at the Git log. Every Git commit is going to have a SHA attached to it, and MRSK will keep the previous few images with those different SHAs. And over time, it's going to start cleaning up the old ones, which is really great for not having to maintain that server. Because I've run into this, where on a container environment using Docker Swarm, I did so many deployments that the Docker images just ate up all the available disk space. So MRSK is taking that into account: it's going to clean up the old ones, but keep the few recent ones so that you can do a rollback if you need to.
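The rollback flow Dave describes looks roughly like this, assuming the version you pass is a git SHA whose image MRSK still has on the host:

```bash
git log --oneline -n 5    # find the SHA of the release to return to
mrsk rollback e5d9d7c     # hypothetical SHA; restarts the app from that image
```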
Valentino_Stoll:
So, one of my biggest concerns is kind of extensibility, right? Like, if you have some external service that you're trying to connect into your MRSK setup, how easy is that to extend and integrate with?
Dave_Kimura:
Do you have an example service? Or do you mean something like S3 or PostgreSQL?
Valentino_Stoll:
Sure, I mean, those are good examples. Or, I'm trying to think of one that's not as popular, right? Let's say you have, I don't know, an email client or something like that, or a third-party application that has a bunch of dependencies that are separate to it that you want to coexist, right? How easy is it to make that symbiotic relationship? I'm thinking more of, like, in a multi-Docker setup, you can configure that with a lot of configuration tools to connect the two and make it easier to host everything all in one kind of ecosystem. How hard is that? Is there a hard divide with MRSK versus some of the other options that are out there? Like if I went and I saw, oh, I could set up this Discord bot or whatever it may be, and it has its own deployment setup and configuration options, with some of the other deployment tools, it's kind of easy to just say, okay, well, use their setup. Here's their Dockerfile or whatever it may be. And it'll deploy smoothly as long as you use kubectl or, you know, Docker Swarm or things like that that are already set up to lock in their setup for you.
Dave_Kimura:
Yeah, I really think it's a non-issue. And I say that simply because, let's say you do have your own infrastructure, whether in a data center, in your office, at your home, or wherever, and you were doing your deployment with MRSK. It is no different than having your existing infrastructure the way it is. You have these virtual machines, and the only layer of complexity that you're really adding here is a Docker layer. So on these virtual machines, instead of deploying the application directly to those virtual machines, which is something like what Capistrano would do, you're deploying Docker and then having Docker run the image. And if you have an application that, let's say, is taking in requests on some non-standard ports, then it could get a little bit complicated, because then you do need to have those ports open for the Docker environment to listen on, and you have to forward them within the Docker network to the appropriate running container. But I think that's really the only complexity, compared to a bare metal deployment, that you have with MRSK. And I think it's a lot less complex than a lot of other deployment mechanisms. And I say that from the perspective of: when something goes wrong, and things aren't building right or deploying right, you're going to have a lot more to try to figure out with those.
Valentino_Stoll:
Yeah, so I guess my ultimate question is: at what point does MRSK stop making it easier for you to manage the things that you're working on?
Dave_Kimura:
I think the illusion of MRSK being the one-stop deployment and server management utility, I think that's the main problem. Because again, it's not going to do server hardening or any of those best practices. It only handles the application deployment. And if you are not concerned about all of those other things, then you really should be using a platform-as-a-service tool, because at that point you're not going to take care of your servers. Actually, are you familiar with the whole pets-versus-cattle deal with servers, and being able to reprovision them and all that stuff?
Valentino_Stoll:
No.
Dave_Kimura:
So
Valentino_Stoll:
What is that?
Dave_Kimura:
a pet is something that, you know, it's like your dog. You love that dog, you take care of it, you groom it. Versus cattle, which serves a purpose, and that is ultimately to go to the slaughter and stuff. So in the server analogy, having a bare metal machine is going to be like a pet. You have to take care of that thing. You have to do maintenance on it. You have to feed it what it needs, essentially. And then a cattle virtual machine, or a cattle infrastructure, would be more like something like AWS Elastic Beanstalk, where at any given point in time you can destroy the virtual machine that it's running on, and then it'll automatically reprovision another one. The old one is out, you have a new one, and it doesn't matter what happens to that machine; nothing important is stored on it. All your logs are shipped off, you have all of your uploaded files up on S3. That virtual machine serves no purpose other than to serve traffic. So that would be more of a cattle situation. And MRSK basically is a hybrid of both, because you have this bare metal server or virtual machine that is your pet, but then the application deployed to it is more like the cattle, because you can destroy that Docker container in its entirety and your application will still work fine. You just redeploy, it provisions a new one, and then your application's up and running there. So in that sense, it's kind of like a hybrid pet-cattle, or cattle-pet. And you don't,
Valentino_Stoll:
I see.
Dave_Kimura:
yeah, you don't have to worry about the virtual machines or bare metal with a platform as a service. So with Heroku, you don't have to worry about the machines that they are running underneath to have that container up and running for you. All you have to worry about is that running container. And I think the basic idea is, it really depends on your needs and what you're willing to take on. If you don't mind doing the networking, if you don't mind having some pets around that you have to maintain, the bare metal or underlying virtual machines that MRSK is going to use, then that's going to be a pretty good route, because you can deploy that anywhere. And I think there's something else that is important. Companies like Render and Fly are good about writing migration documents. They do have documentation on, if you want to move off of Heroku onto our environment, here's how you do it. But then how do you move off of that environment and go to something like Heroku or your own servers? And I think that's where my biggest issue with a lot of these platform-as-a-service offerings is: it's not easy to migrate. You are getting yourself into vendor lock-in here. Not completely, because you can always take your environment and migrate it somewhere else, but you're not going to be able to do it as easily as you would with MRSK, because MRSK doesn't care. It just handles the application deployment. So there is no vendor lock-in. It just needs SSH access into that machine or the virtual machine.
Valentino_Stoll:
Yeah, that's pretty cool. I mean, I'm starting to come around to it a little. I guess it's hard to get around the multi-container aspect of it, right? Like when you have multiple services, like your MySQL, your Redis, and things like that, that MRSK is managing.
Dave_Kimura:
I would like to say something about that.
Valentino_Stoll:
Sure, yeah, go for it.
Dave_Kimura:
That is so dangerous.
Valentino_Stoll:
Ha ha
Dave_Kimura:
I was
Valentino_Stoll:
ha
Dave_Kimura:
playing
Valentino_Stoll:
ha
Dave_Kimura:
around
Valentino_Stoll:
ha ha!
Dave_Kimura:
with DigitalOcean on this, because I thought, what if I just did the entire deployment like this? MRSK has something called accessories, which are a way for you to have another virtual machine up and running that you can deploy Postgres, MySQL, or Redis to. And much like many other cloud providers that give you access to a virtual machine, there is no hardening done. If you use MRSK to deploy MySQL or PostgreSQL, it will not harden that server. What that means is that the entire world can access your database. The only thing protecting you is the username and password that you have for that database. It's very dangerous. But at the same time, if you're running your own bare metal servers, you already have a firewall in place, which means the world cannot access your database server. And in that particular case, running Postgres as an accessory is wonderful, because you're not exposing anything. But if you're deploying this to a cloud environment, please use the managed databases and managed services that that particular cloud offers, while making sure that you're not getting yourself into a vendor lock-in situation. They're going to take much better server hardening practices than what MRSK is able to do, because again, MRSK only deploys the application, or in this case, the accessories as well. So just something to take note of.
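A hedged sketch of the accessory setup Dave is warning about, following MRSK's documented accessory format; the host, image, and names are placeholders:

```bash
cat >> config/deploy.yml <<'EOF'
accessories:
  db:
    image: postgres:15
    host: 165.22.10.12        # hypothetical accessory VM
    port: 5432                # published publicly unless a firewall blocks it
    env:
      secret:
        - POSTGRES_PASSWORD   # the only barrier if the port is world-reachable
    directories:
      - data:/var/lib/postgresql/data
EOF

mrsk accessory boot db        # start the accessory on its host
```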
Valentino_Stoll:
Yeah, I mean, that brings me to another point, or question. How easy is it now to deploy with MRSK to some of these service providers? Is that a straightforward process? Because they do offer their own databases. If you want to use Heroku's hobby plan still, I imagine that you probably wouldn't use MRSK anyway. But if you had DigitalOcean or something, is that an easy configuration option? Is it still just Rails? You know, how easy is it to continue to use these other services?
Dave_Kimura:
It's very easy, because in your Rails application, in your secrets or wherever you're putting it, you're just going to have that database URL, and that connects to the appropriate database server. There's nothing else to do there. I mean, it's really that simple. And setting up the infrastructure, if you were to deploy to Heroku, or I'm sorry, if you were to deploy to DigitalOcean. And I like DigitalOcean a lot, simply because it's simple. There's not a lot of guesswork for your databases. And this is in comparison to what I know. So with AWS, if you want to lock down your database server so only certain virtual machines of yours can access it, it's a pain. There is no clear-cut way to do that. It's possible, and I mean, you have to do that if you're deploying a cloud-managed database, but it's not as simple and straightforward as DigitalOcean. Because DigitalOcean says: your database is wide open to the world; here, click this, add some of your virtual machines so they can access it, and it also gives you an option to add your own public IP address. It's that simple to lock it down. And so from that perspective, I would recommend DigitalOcean, because they make it that simple. If AWS or someone else were that simple, then they'd be a good contender as well. But you have to know a lot more of the AWS lingo: security groups, IAM profiles, and all that stuff, in order to have it properly locked down.
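Concretely, the wiring Dave describes can be as small as this: the app reads DATABASE_URL, and MRSK injects it from the deployer's environment via the env/secret list. A minimal sketch:

```bash
cat >> config/deploy.yml <<'EOF'
env:
  secret:
    - DATABASE_URL   # e.g. a DigitalOcean managed Postgres URL, kept out of the repo
EOF
```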
Valentino_Stoll:
Yeah, that's one concern I have with this whole idea, right? It's trying to say, OK, you don't need the cloud, you can just use this tool and it'll help you get set up. But there is still that you-need-to-lock-it-down aspect of it. Like,
Dave_Kimura:
Yeah.
Valentino_Stoll:
even if you have your own bare metal server, like say you have a rack in your basement, you can't just use MRSK and then be like, all right, I'm good to go, let's plug this into the, you know, public internet.
Dave_Kimura:
Yeah.
Valentino_Stoll:
It's got that similar Docker vibe to it, right? Where people switch to Docker thinking, okay, it's all self-contained, I don't have to worry about security, because the services only talk to each other unless they're exposed directly. But that's not exactly true.
Dave_Kimura:
Yeah.
Valentino_Stoll:
Right? So, I mean, what do you see, is there any feedback you've seen from the Rails community having the same concern? Or is it kind of just, well, you should know what you're doing?
Dave_Kimura:
I would like to see, on the MRSK README, big disclaimers of what it is meant to do and what it is not meant to do. I think it needs that disclaimer, because otherwise it can be very dangerous. And then who's responsible for that? I mean, ultimately, the person deploying is going to be the person responsible. But just at a glance of the documentation, you really don't see that; it's just how to use it. And I think that having some disclosure is important, to say: we are only handling application deployment; we are not doing server hardening, networking, anything. So.
Valentino_Stoll:
Yeah, and maybe there's another tool coming, right? Which I'm hopeful for. But if the whole point is to solve making deployments, Rails deployments, easier, it seems like we're not quite there yet. Yeah.
Dave_Kimura:
But aren't we, though? Because, you know, you just said Rails deployments. That's the application deployment. MRSK is handling that. It's not
Valentino_Stoll:
Well...
Dave_Kimura:
handling the server hardening and provisioning and that stuff, but...
Valentino_Stoll:
Yeah, I mean, to me, I would include that.
Dave_Kimura:
Yeah, you're
Valentino_Stoll:
Right, like
Dave_Kimura:
wanting
Valentino_Stoll:
if
Dave_Kimura:
a
Valentino_Stoll:
I'm
Dave_Kimura:
Dokku.
Valentino_Stoll:
renting a server from somebody, like let's say I have a Linode or a droplet or whatever, and I just create it. If I'm deploying to it with Capistrano, even the base recipes lock it down, right? There are plenty of plugins and stuff for Capistrano where I could have all of that stuff hardened and a Rails deployment set up out of the box. I was kind of hoping this was gonna get there. And maybe they made the announcement a little prematurely, but it seems like there's still that missing turnkey step, right? Like,
Dave_Kimura:
Yeah.
Valentino_Stoll:
there are so many other platforms and things that you can use in other languages and frameworks that have this built in, you know? Rails should be there, you know.
Dave_Kimura:
Yeah. And I do understand that perspective. And I think the biggest thing there is that because we are deploying to basically any kind of virtual machine or bare metal machine, which can be running any operating system, hardening it is going to take a lot of assumptions. And that could be just as dangerous as well, where you basically brick that machine and have to completely wipe it again. So what's the give and take there, I think? But.
Valentino_Stoll:
Yeah, I mean, it makes sense why they decided not to do it. Because you're right, it is such a wide net to cast to try and get everything to work. But it's definitely the missing, it's introducing a piece that solves a lot of things, with a missing piece.
Dave_Kimura:
Yeah.
Valentino_Stoll:
And I feel like there are so many other tools out there that already have all these pieces fit. They may be a little more complicated up front, right, but they'll get you there. And they're open source and they have communities. And so I'm hopeful for this to gain traction and expand, but it's still kind of a hard sell for me.
Dave_Kimura:
Yeah. And I think that if you were deploying this to bare metal, again, you already have a firewall in place. So let's say you do have a rack in your basement that you're experimenting with, and you install Proxmox on a few machines to play around with. That's just a bare metal hypervisor, for those who don't know. So you create a few virtual machines that you want to deploy to with MRSK, with the accessories, so you have your Postgres and your Redis as accessories. You create those virtual machines for it to then use and provision. Then MRSK is going to be great out of the box, as is. Because let's say you have a separate VLAN for this network, so none of your home traffic can interact with that other traffic; you have to either SSH into a machine or, you know, connect to a VPN into your own local network on the server side. But you would then naturally have a firewall in place in front of all this. And then you would have some load balancer, HAProxy, an nginx proxy, or something, that would load-balance the traffic and be your SSL termination point. And then, you know, you're not exposing anything. You don't really need that much hardening then. It's still always good practice to harden it. But by default, if you're only really allowing a VPN connection, and if you're only allowing port 80 and 443 traffic in, then you're not really exposing anything by default. And the need for hardening is much less important, but still equally important, if you know what I mean.
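A minimal sketch of the front-of-rack proxy Dave describes, with nginx as the SSL termination point; IPs, hostnames, and cert paths are hypothetical:

```bash
cat > /etc/nginx/conf.d/myapp.conf <<'EOF'
upstream myapp_backends {
  server 10.0.10.11:80;      # app VMs on the isolated VLAN
  server 10.0.10.12:80;
}
server {
  listen 443 ssl;
  server_name myapp.example.com;
  ssl_certificate     /etc/ssl/certs/myapp.pem;
  ssl_certificate_key /etc/ssl/private/myapp.key;
  location / {
    proxy_pass http://myapp_backends;
    proxy_set_header Host $host;
  }
}
EOF
sudo nginx -s reload
```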
Valentino_Stoll:
Yeah, I guess what I was getting at is I was hoping for something more, you know, packaged, where I could just install this on a bare machine and it would work. You know, some kind of package where you just execute it and it has all the context it needs to build itself up.
Dave_Kimura:
Damn.
Valentino_Stoll:
Which kind of seems like... not really what it's meant to do.
Dave_Kimura:
Yeah.
Valentino_Stoll:
Not that it's not useful. Like all the reasons you mentioned, it definitely is very useful for deploying applications. But yeah, if I just have a computer sitting around, there's a lot of steps and things I need to know in advance that if I don't do,
Dave_Kimura:
Yeah.
Valentino_Stoll:
and I just run this on, and if I'm not familiar with it, I wouldn't know it, right?
Dave_Kimura:
Mm-hmm
Valentino_Stoll:
Which just doesn't seem like a tool targeted at maybe beginner Rails people. But if the goal is active deployment, you know,
Dave_Kimura:
Yeah.
Valentino_Stoll:
maybe it's better that it's called something that's hard to pronounce.
Dave_Kimura:
Yeah.
Valentino_Stoll:
So where do you see the future of this going? I mean, it's incredible how many features it already has out of the box, and we haven't even touched on the cron job scheduling and the callbacks for deployments and things like that. Where do you see this expanding to? Or do you see it tapering off and rounding out into a solid version of what it is?
Dave_Kimura:
I'm not sure. I think the way it stands today, it has a lot of potential. And especially if they could at least get some documentation out there on how to manage your server beyond MRSK, I think that would be really great, because then we're taking a lot of that guesswork out of it. And we are then working towards getting this to be a full DevOps-suite utility, where it's going to manage the server, it's going to handle database backups. Because that's something that we didn't talk about either: what disaster recovery do we have? If you're not using a managed database that's taking periodic snapshots, you're screwed if something happens. So having something more hardened for a disaster-recovery scenario, I think, would be good, or snapshots that are taken of the database before an MRSK deployment is done, that kind of thing. Because the rollbacks that you mentioned earlier are only for the application code. It's not rolling back databases; it's just putting the old code back up there. So if you dropped a table in a particular release and then you need to roll back, well, that table is gone. Your application is still not going to work right. So we need to be mindful of those kinds of things. And for the future, I would like to see, when using accessories, that some care is taken around database servers and backups and snapshots or replication and that kind of stuff. Because I think, especially if you're deploying to bare metal and not using a managed database, that kind of stuff is very difficult and takes a special skill set of its own, beyond the networking, beyond the security hardening and that stuff.
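This isn't an MRSK feature, but a low-tech safeguard along the lines Dave suggests is to snapshot the database right before each deploy, so a code rollback can be paired with a restore:

```bash
mkdir -p backups
# Tag the dump with the git SHA so it pairs with the release being replaced.
pg_dump "$DATABASE_URL" | gzip > "backups/pre-deploy-$(git rev-parse --short HEAD).sql.gz"
mrsk deploy
```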
Valentino_Stoll:
Yeah, I mean, time will tell. You know, I'm hoping that it rolls into something a little more solid, like you said, with the documentation on how, you know, progressive enhancements to your application and its management work, because that definitely is a huge piece missing. Because I would love to get started, but I worry that I would get there and then be like, how do I upgrade this thing?
Dave_Kimura:
Mm-hmm. And right now it is going through a lot of changes. So, you know, things are changing quite rapidly with it. It's a new package, a new library, so no one should expect anything less. Just be cautious of that as well, because if you are an early adopter, the whole world of MRSK could change overnight, you know, and you'll have to figure out a new way of doing it and stuff. Something else that has been in the works recently, but maybe not fully implemented, or at least not released yet, is the ability to specify environments that you're deploying to. So, having a staging environment and stuff. Right now it's kind of all taken as one context: I want to deploy the application code to production right now. But I think they are working on that support right now.
Valentino_Stoll:
Well, Dave, is there anything else you wanted to touch on or should we move into pics?
Dave_Kimura:
Let's move into picks. I think we've MRSK'd out all that we can.
Valentino_Stoll:
Oh, that's funny. Ah, yeah, what do you got?
Dave_Kimura:
Ah, picks. I don't know, man. I've not done much this week. I've just been working on my AI stuff, and that stuff's crazy. It's so much fun. I was able to finally, successfully get a PyTorch ASR model trained, and then also transcribe from it. So I fed it one of my Drifting Ruby videos, and I would say it probably had like a five percent accuracy, grammar
Valentino_Stoll:
Wow.
Dave_Kimura:
including. So I've added a lot of my manually transcribed videos, just so it could learn my voice a bit. And I'm just super excited about that. So, I guess my pick, because I did use OpenAI's base Whisper model, which is an open source thing that they have, is this website, huggingface.co. It is a big, big AI community of pre-trained models and datasets that you're able to use and consume with whatever AI work you're doing. So whether you're doing natural language processing, speech recognition, or anything else, it's going to have some sort of model or something there that you can kind of get inspiration from.
Valentino_Stoll:
Yeah, I also recommend Hugging Face. That's where I originally started looking into all these large language models. I think a lot of people have. But yeah, they have some great tools and open source stuff. Yeah, I'm in a similar boat, you know? This AI stuff is so much fun. My pick is actually also large language model related. I saw that LLaMA came out, if you're not familiar. LLaMA is kind of like an open source GPT-3, and basically you can get it up and running on your own hardware and get similar kinds of results out of it. And Stanford released their own kind of version of it, called Alpaca, kind of like the subsequent LLaMA, which got it working on even more kinds of hardware. And I plan to try that out. I have some plans for over the weekend to get that running on a Raspberry Pi that I have, and see if I can get it to, you know, perform some calculations. From what I've seen, it's pretty slow; it'll take a while on that particular hardware. I don't quite have the GPUs you have, Dave, but.
Dave_Kimura:
Me neither.
Valentino_Stoll:
It's pretty wild. You can use these things on your own hardware. So
Dave_Kimura:
Mm-hmm.
Valentino_Stoll:
we'll see how it goes. I mean, I have a feeling I'll just get it and be like, it won't be useful because it'll take 10 minutes to run a question.
Dave_Kimura:
Yeah.
Valentino_Stoll:
But I do like it for the training. You can use PyTorch and many of the Hugging Face libraries. I'm hoping to snap them in and use some of the other models that aren't just, you know, more blanket for certain things. So.
Dave_Kimura:
Here's what you should do, Valentino. Here's
Valentino_Stoll:
Go for
Dave_Kimura:
a
Valentino_Stoll:
it.
Dave_Kimura:
free multi-million dollar idea, a project for you: use LLaMA to generate prompts for DALL-E, and you can call it the DALL-E LLaMA.
Valentino_Stoll:
Oh my gosh, that's great. If
Dave_Kimura:
Unless
Valentino_Stoll:
you're listening
Dave_Kimura:
if that name's already
Valentino_Stoll:
and
Dave_Kimura:
taken, you know.
Valentino_Stoll:
I mean, hey, if you're working on it, shout out to us, let us know. Because I would love to use it, or just explore it and promote you. Promote
Dave_Kimura:
Yeah.
Valentino_Stoll:
the Dalai Lama. You know? Oh, that's great. Yeah, that's it for me too. It was great talking about this stuff. It's exciting to see Rails tackling the deployment challenge. It's been bogged down for so long with external tools that have kind of worked, hobbling
Dave_Kimura:
Mm-hmm.
Valentino_Stoll:
along, and it's good to see some definitive progress being pointed at it, right? Identifying it as a problem, and something that can be solved and consolidated, right? I think that's the ultimate goal of the framework: that conceptual compression, making things easier to do out of the box, right? So, well, I'm excited to see where it goes.
Dave_Kimura:
Yep, me too, man.
Valentino_Stoll:
Alright, well until next time everyone, Valentino out!
Dave_Kimura:
Alright, talk to you later. Bye.