ALLEN_WYMA:
Hello, everybody. So it's just the one host today, Allen Wyma, for Elixir Mix. And today we have, as we always have, a special guest. This one is even more special, Richard Taylor. He is an Elixir Phoenix developer, and he worked on this article that we mentioned recently. We were just catching up before we started to record. MRSK, I think it's called. And we mentioned your article before, and I was confused, because I thought we had you on. But we just mentioned your article. Now we have the original author on. So why don't you go ahead and say hello and maybe do a quick intro about yourself, Richard.
RICHARD_TAYLOR:
Yeah, sure. Thanks for having me. I'm Richard. I've been working as a software engineer for the past 25 years. Worked with a lot of different languages, but my main journey has been quite a common one, I think. So I went from Java to Ruby to Elixir. And I've been doing Elixir for the best part of the last five years.
ALLEN_WYMA:
Going from Java to Ruby, I mean, that must have been an interesting change. Something like going from the classic OOP language to a really OOP language, I think. Because in Ruby, everything is an object, right?
RICHARD_TAYLOR:
Yeah, that's right. I think it was quite a common move though at the time. I think a lot of Java developers were a bit jaded, and when Ruby came along with the promise of developer productivity, it was quite a common move for a lot of people to make. So I was mainly doing Java in enterprise, corporate environments, and the first time I started to do Ruby was for a startup. So it was a huge shift for me to go from lots of red tape and slow development processes to the speed and productivity of Ruby. It was really good.
ALLEN_WYMA:
And with the way Java kind of runs, I think you're also used to being told how to lay out your files too, right? You have to lay out packages, all these nested folders with nothing inside but another folder, all the way down to one file, right?
RICHARD_TAYLOR:
Yeah, yeah, definitely. And reams of XML as well. It was nice to leave that behind.
ALLEN_WYMA:
Oh, yeah. So yeah, I don't know. I mean, I started doing Rails, I think, when it was 2.1 or something. It's been a long time ago. And I don't know, has there ever been XML config in Rails? I mean, you could always generate XML, but there was never any XML configuration. I think it was just YAML, right?
RICHARD_TAYLOR:
Yeah, I think so. I think it kind of popularized YAML a bit as well at the time.
ALLEN_WYMA:
Yeah. And then, so you said you went from Java to Ruby. What was the next one after Ruby? Was it straight to
RICHARD_TAYLOR:
Uh,
ALLEN_WYMA:
Elixir, or no?
RICHARD_TAYLOR:
you know, like I say, yeah, yeah.
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
So probably just less than five years ago, I was kind of dabbling with Elixir on some side projects and then managed to get a job where it was full-time Elixir, which was great. That basically accelerated my knowledge of Elixir and Phoenix from there.
ALLEN_WYMA:
Yeah, I'm kind of curious about like, how did you first hear about Elixir to begin with?
RICHARD_TAYLOR:
Oh, that's a good question. I can't recall any specific instance. I think it was, again, quite popularized in the Ruby community, so a lot of Rubyists were chattering about Elixir, obviously because of José's background with Ruby. And so it was all the promises of performance and low memory footprint and things like that, the problems that you kind of hit with Ruby sometimes at scale. And so, yeah, I kind of dabbled with it and wanted to build a couple of small projects on the side. But it wasn't until I got a full-time job doing it that it really hit home how productive it can be as well.
ALLEN_WYMA:
I mean, that's kind of a good question, right? Where do you start to hit the limits? At the beginning, you had the productivity of Ruby, a language that, at least to me, kind of reads like you're writing English. And then you start hitting some weird issues, right? The scaling issues. Is that kind of what you were running into?
RICHARD_TAYLOR:
Yeah, I don't think I ever really hit much that you couldn't work around, but you had to throw memory at it, basically. So it would use a lot of memory, and going from Ruby to Elixir, it was just really noticeable that you could run it on a virtual server with 256 megabytes of RAM. And there's no way you could run a proper full-blown Ruby on Rails app with that. So yeah, it was just a huge difference, a lot fewer resources needed. But I don't think I ever really hit anything with Ruby that you couldn't work around. You could just chuck more resources at it and it would kind of work.
ALLEN_WYMA:
What about, I mean, the reason I'm asking is, did you ever deploy with Capistrano?
RICHARD_TAYLOR:
I did, yeah, yeah, way back in
ALLEN_WYMA:
Yeah
RICHARD_TAYLOR:
the day. Yeah, I used to use it quite a lot actually, but one of the downsides, which MRSK, which we'll get into, kind of helps out with, is that with Capistrano, you kind of have to set the server up ready to run everything before you can deploy your app to it. So there was that whole bootstrapping process for the actual virtual server as well. You'd have to install all your software, compile all the dependencies, that kind of thing. But yeah, I guess we'll get into MRSK in a bit.
ALLEN_WYMA:
Yeah, I think at the beginning when you first started using Capistrano, it was like that. You had to kind of get everything ready. And I think later on they started adding more flexibility. And now you can use Capistrano for like anything.
RICHARD_TAYLOR:
Okay.
ALLEN_WYMA:
Yeah, that's my interpretation anyway. Actually, I had a project a couple of years ago, which was a PHP app that was actually using Capistrano to deploy, which would be weird if you compared what Capistrano was many years ago with what it is now in terms of what you would do with it, right? Because I think it was strictly for Rails application deployment at the time when it first came out. Or at
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
least Ruby
RICHARD_TAYLOR:
yeah, definitely
ALLEN_WYMA:
only.
RICHARD_TAYLOR:
when it first came out. I think, I think basically all it's ever really been is a command runner over SSH. So I guess it's always been capable of doing other things, but it was, it was pretty much tied to Rails I think in the early days.
ALLEN_WYMA:
Yeah. Well, I mean, obviously you know why I brought it up. Because to me, that's kind of like the predecessor to what you're going to be talking about too. I mean, when you started deploying Phoenix applications, I mean, you don't have something like Capistrano. So how were you kind of doing your deployments then?
RICHARD_TAYLOR:
So I've done it a few different ways. I've done it just to Heroku, where you basically push the whole project as it is with Mix included. I've deployed to servers where I've built releases first and then deployed the release. And obviously I've used Fly quite a lot as well, where you basically have a Dockerfile which builds the release, and then you deploy the Docker container to production.
ALLEN_WYMA:
Yeah, I remember the first time when I started to deploy apps, I kept using Mix all the time. Because I felt like, I don't know, I think this was before even... do you know when Distillery came out? I mean, it wasn't out from the beginning, right? It kind of came out later on.
RICHARD_TAYLOR:
I think it was around when I started doing Elixir. So I used Distillery quite a lot to do releases before. I think it was Elixir 1.9 that introduced releases themselves.
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
So I used Distillery to create releases. I used to have a build pipeline that built a release with Distillery, published it to GitHub, and then my deployment process would basically fetch the latest built release from GitHub and deploy that to the server.
ALLEN_WYMA:
I mean, because before Distillery, I guess we had exrm, or something like that. But I felt like there was no clear way to actually create a release, or maybe I missed something. So that's why I was always like, what do I do? And, okay, I'll just run Mix. And then they said, well, you shouldn't be running Mix for so many reasons, and you should be doing a release, but I don't think there was ever really a way to do it besides Distillery. Obviously, now things are different. But before Distillery, I don't really know what there was. Like I said, I think there was a predecessor called exrm or something like that. I don't
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
really know what it was like before, but I don't think I've seen any guides on how to deploy before Distillery, you know.
RICHARD_TAYLOR:
No, I think there was another one that was a bit like Capistrano as well, I think, for Elixir, that was quite
ALLEN_WYMA:
Oh,
RICHARD_TAYLOR:
popular.
ALLEN_WYMA:
edeliver, right?
RICHARD_TAYLOR:
Yeah, that's the one, yeah.
ALLEN_WYMA:
Yeah, but that one, I mean, when I started using it, that one also relied upon Distillery too.
RICHARD_TAYLOR:
Okay.
ALLEN_WYMA:
But. Yeah, that's a good point. I mean, that's what I leaned on later on was e-deliver. But then at that time I only had one client with a VPS. It seems like everybody's kind of moving over to containers since then.
RICHARD_TAYLOR:
Yeah, it's moved on a lot. I never used edeliver, but I do remember it was quite confusing, I think, originally. Like you say, the lack of Mix in a release is really confusing to developers when they're deploying to production with releases for the first time. So yeah.
ALLEN_WYMA:
Well, now I think it's much better, right? Because we also removed Mix.Config. You just have Config, right? Which makes it much clearer, and I think it's a little bit more flexible, especially with the runtime config.
RICHARD_TAYLOR:
Yeah, definitely. And they've kind of baked in the ability to generate releases automatically as well, and it generates a Dockerfile and things like that with Phoenix now, so I think things
ALLEN_WYMA:
Oh,
RICHARD_TAYLOR:
are,
ALLEN_WYMA:
yeah.
RICHARD_TAYLOR:
are a lot easier than they used to be.
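For reference, the runtime configuration being described lives in config/runtime.exs, which the release evaluates at boot rather than at compile time. A minimal sketch, with the app name and variable names as placeholders:

```elixir
# config/runtime.exs — read secrets and connection details from the environment at boot
import Config

if config_env() == :prod do
  config :hello, Hello.Repo,
    url: System.get_env("DATABASE_URL") || raise("DATABASE_URL is missing"),
    pool_size: 10

  config :hello, HelloWeb.Endpoint,
    http: [ip: {0, 0, 0, 0}, port: String.to_integer(System.get_env("PORT") || "4000")],
    secret_key_base: System.get_env("SECRET_KEY_BASE") || raise("SECRET_KEY_BASE is missing")
end
```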
ALLEN_WYMA:
That's, yeah, that's been a lifesaver. I love that Docker generator. Or it's a flag you actually have to add, right? But otherwise, it's been a lifesaver. And it's cool because it also reads what version of Erlang you have, what version of Elixir you have, and it picks those up. So it's perfect.
RICHARD_TAYLOR:
Yeah, very handy.
ALLEN_WYMA:
Yeah, but now we have this new tool, right? Maybe you can give the intro about it, because what I like about it the most is, I don't know, I just despise AWS because of how complicated it is and how annoyed I am at the thing. But I mean, obviously you know the background about what basically happened at 37signals, right?
RICHARD_TAYLOR:
Yeah, exactly. So, you know, 37signals have grown, and they kind of chose to go to the cloud originally. But they've seen over the years their cloud bills escalating to ridiculous levels, and they decided, right, that's enough. They're going to invest in servers, in bare metal, and deploy to that instead. And I guess because of that, they needed a tool to replace their current deployment tools, and decided to build the tool which is now MRSK. I think the name is a play on the container shipping company Maersk, I think that's where it comes from. So yeah, they basically built out MRSK, and I first found out about it through a GitHub notification just saying a new release of something had been made, and I thought, oh, that looks interesting. This was before the big announcement. And so I started dabbling with it then just to understand it, and immediately I thought, oh, wow, okay, I wonder if we could use this to deploy Elixir apps. And so that was my first introduction to it. So basically, in a nutshell, they say it's like Capistrano for deploying containers. The beauty of it is that you are just deploying Docker containers to a host, and MRSK takes care of setting up the host. So to start with, you literally just need to boot a VPS somewhere, or a server anywhere with an IP address and SSH installed. You stick the IP address into MRSK and run deploy, and it will basically SSH in, check if Docker is installed, and install it if it isn't. So it literally bootstraps everything from scratch for you, then builds your app and deploys it to the host. For the actual build process, you can build that on a remote host as well. So you can designate a host as your build server, and it'll send your Docker context up to that host, build your Docker container, then push it to a registry somewhere. You can use one of a number of registries, Docker Hub or GitHub's own container registry. And then it pulls that image down on your deployment hosts and deploys it. And you can actually run all of that on the same host if you want. You can just have a single host that has all these different responsibilities, or you could basically say, oh, I want a dedicated beefy build server. And one of the benefits of that is that you get caching layers on the build server, so the next time you deploy, it doesn't have to rebuild everything from scratch, it just rebuilds the bits that changed. So, yeah, it's very handy.
ALLEN_WYMA:
Yeah, I mean, I'm trying to follow how this thing works, because I watched the video and at a high level it all made sense, but I was kind of curious, because, OK, let me just go back to your article. Maybe you can clear this stuff up. So for one thing, you have to run on port 3000. That's the default port, right? And in your article, you only say EXPOSE 3000. But I think you also have to change your... I know you can just set the port, right? I think there's a default port, but you don't do that in here. At least I don't see it pulled out separately.
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
So
RICHARD_TAYLOR:
you're right.
ALLEN_WYMA:
you can set
RICHARD_TAYLOR:
So
ALLEN_WYMA:
the port. Yeah.
RICHARD_TAYLOR:
the MRSK basically defaults to port 3000. So in my
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
article, I just told the app to also deploy to port 3000, but you can override that. You can basically choose whichever port you want.
ALLEN_WYMA:
And now do you have to deploy as root or can you just create a deployment user?
RICHARD_TAYLOR:
You can create a deployment user as well. Yeah, it doesn't have to
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
be root. Yeah.
ALLEN_WYMA:
So OK, so the init command will create all the stuff you need in your project. You have this config deploy file, and this is the one that comes after that, right, with the service hello and what image you have. So I see this style with the angle-bracket placeholders. Now, is that actually what's there, or is that going to read from a config file and swap it out for you? Because I see that used a couple of times, like with this root at IP address for the builder.
RICHARD_TAYLOR:
Oh yeah.
ALLEN_WYMA:
Those
RICHARD_TAYLOR:
So,
ALLEN_WYMA:
are
RICHARD_TAYLOR:
uh,
ALLEN_WYMA:
just variables that you would change yourself, right?
RICHARD_TAYLOR:
yeah, exactly.
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
That is basically what you have to change in that config
ALLEN_WYMA:
Gotcha.
RICHARD_TAYLOR:
file. Fill it in with your things, so your GitHub username or the IP addresses of the servers that you're deploying to.
ALLEN_WYMA:
Now, this one I have a question about. The password over here, right? You have an MRSK registry password. Now, is there some kind of configuration file for this that has all the passwords in plain text or something, or how do the passwords get passed around?
RICHARD_TAYLOR:
Yeah, so the way that it works is it will read from a local .env file by default. So
ALLEN_WYMA:
Oh.
RICHARD_TAYLOR:
you have your .env file locally with all your secrets in it.
ALLEN_WYMA:
I see.
RICHARD_TAYLOR:
And then the config file itself doesn't need to have those. You just refer to the variable from your .env file.
ALLEN_WYMA:
Now could you also just use environment variables yourself, or do you have to use a .env file?
RICHARD_TAYLOR:
No, so you can just use them as well. So
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
I can't remember if there's an example there. Yeah, yeah, you can.
ALLEN_WYMA:
But the database URL is another one, too, right? Because I see that they have this command. It's like, mrsk... was it an accessory command for Postgres or something? Yeah, mrsk accessory boot postgres. So that's going to actually install Postgres. Because I see over here, you have the config file with the Postgres password, Postgres DB. So you have to already say which IP address you're going to host the DB server on, right?
RICHARD_TAYLOR:
Yeah, so I mean, you don't have to use the accessories. That is like an extra thing that they provide,
ALLEN_WYMA:
Mm.
RICHARD_TAYLOR:
but it kind of gives you. So you might want to have like four hosts for your web application, but you would only
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
want your database to run on one. So it allows you to have that separation. So in this example, I wanted to deploy a fully blown Phoenix app, so I wanted to have a Postgres database, and I can specify that. And again, you just specify the image, and it will run on the same host as the app in this example.
ALLEN_WYMA:
OK. Now, the other thing I see over here, too, is that, I'm happy you put it down here, because it's hit me a couple of times: the check_origin, right?
RICHARD_TAYLOR:
Oh yeah.
ALLEN_WYMA:
So how does that one work? Because usually, if we want to deploy this with a domain name, I mean, I guess you could just put the domain name on there. It's not going to be a big issue. But over here you put the IP address. Now, is there a way that you can configure that? Because you have just the one runtime.exs file, right? Is there a way you can just pull the IP address from the environment so that it gets automatically set? Because it wasn't so clear from here how this would work with just an IP address if you have multiple hosts.
RICHARD_TAYLOR:
Yeah, so I think in that, in that part of the example, it is just all deployed into one host. So I just wanted to
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
make it... I wanted to basically have like a beginner's guide, almost, to deploying
ALLEN_WYMA:
Gotcha.
RICHARD_TAYLOR:
a Phoenix app where
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
you've just deployed it and you've only got one host, so you've got an IP address. So if you put that in check_origin, then your LiveViews will work nicely.
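For reference, the check_origin setting being described lives in config/runtime.exs; with a single host reached by IP it might look roughly like this (the IP, domain, and app name below are placeholders):

```elixir
# config/runtime.exs — allow LiveView/channel websocket connections only from
# the origins you actually serve on; the values below are placeholders.
config :hello, HelloWeb.Endpoint,
  url: [host: System.get_env("PHX_HOST") || "203.0.113.10", port: 80],
  check_origin: [
    # the "//host" form matches any scheme and port on that host
    "//203.0.113.10",
    "https://example.com"
  ]
```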
ALLEN_WYMA:
Gotcha. All right. Because I was thinking, the article is all about multi-cloud deployment. But if you only have one IP address, it doesn't, you know, do the
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
multi-cloud. But I guess it would work for multiple if you configured it properly.
RICHARD_TAYLOR:
Yeah. So
ALLEN_WYMA:
Um.
RICHARD_TAYLOR:
actually the blog post kind of, I kind of started out by saying this is, you know, even though we're going to go multi-cloud, this first step is just single host,
ALLEN_WYMA:
Which is a single host, OK.
RICHARD_TAYLOR:
understand what MRSK does and all the rest of it. And then the next section in the blog post actually goes into, you know, multi-cloud,
ALLEN_WYMA:
from the
RICHARD_TAYLOR:
so
ALLEN_WYMA:
cloud.
RICHARD_TAYLOR:
running on different clouds and different hosts.
ALLEN_WYMA:
Yeah, this is the interesting part. This thing called Tailscale. I think they do talk about Tailscale in the 20-minute video. Am I wrong? I thought he talks about something like there's a VPN, WireGuard or something, happening. Maybe
RICHARD_TAYLOR:
Um,
ALLEN_WYMA:
I remember wrong.
RICHARD_TAYLOR:
yeah, I don't think so. I don't remember seeing that. This is basically nothing to do with MRSK. This is just
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
Elixir. And basically I got to this point where I was deploying a Phoenix app and I thought, well, that's okay. It's just as good as anything else you could deploy out there, there are some conveniences. But I deploy some of my stuff to fly.io, and I was thinking, well, how far could we get towards the feature set that fly.io gives you? And if you're deploying to multiple clouds, or even just different data centers for the same cloud, obviously with Elixir you're going to want to cluster that at some point, probably. So I started looking around and trying to figure out, okay, if we were going to do that and provide this private network that's secure and allows you to cluster your nodes together, like Fly does basically, then what could we use? And I'd dabbled a little bit with Tailscale before, so I kind of knew it would probably be possible, but it took quite a bit of work actually. I actually wrote another blog post that was just dedicated to using Tailscale with Elixir, because this blog post was growing so big at that time. I thought that would be a good post in its own right: how would you cluster nodes across a private network for Elixir applications?
ALLEN_WYMA:
Maybe can you talk a little bit more about Tailscale? Because I've actually never heard of it before. Is it built on top of WireGuard, which is like a standard protocol for VPNs? Or
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
OK.
RICHARD_TAYLOR:
exactly. So yeah, WireGuard is like a VPN technology. It's a lot easier to use than OpenVPN and a lot of its predecessors, and then Tailscale basically sits on top of that and just makes it easier to manage. So you
ALLEN_WYMA:
OK.
RICHARD_TAYLOR:
kind of... Tailscale provides a central hub where you can view all of the nodes that are connected, and they'll automatically discover each other, and all of the connected nodes can talk to each other. So basically you run Tailscale on one machine, you run it on another machine, and then those two machines now have an IP address that they can talk to each other on, over the secure VPN.
ALLEN_WYMA:
Well, I like the fact that you also, I mean, you have to have SSL running over here, according to this. And I like the fact that you show people, okay, you have to add it to extra_applications. And also for the migration, because that caught me for a while on a recent app. One of the hosts, I forget which, one of the popular cloud providers, they make you use SSL to Postgres, which I don't mind, I like secure stuff. But it took me a while to figure out how to do the database migrations, because SSL doesn't get started up in the default migration task.
RICHARD_TAYLOR:
Yeah, it's really annoying actually.
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
I've hit it a couple of times on different projects as well. I think we've done some stuff where we used SSL to connect to RDS Postgres. And in this article, I basically use Crunchy Bridge, which is another Postgres provider, a hosted
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
Postgres provider. And so yeah, to be able to connect to that, you need to connect over SSL. But one of the advantages of Crunchy Bridge is that they also give you the option to connect to your Tailscale network, which is really cool. So basically you can put your Tailscale key into Crunchy Bridge, it'll automatically connect to your Tailscale network, and then the app servers that are connected to Tailscale can talk to the database over the private VPN as well, which is pretty cool.
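The SSL gotcha they're describing usually shows up in two places: the Repo config and the release migration task. A minimal sketch, assuming an app called Hello (all names here are placeholders):

```elixir
# config/runtime.exs — hosted Postgres (Crunchy Bridge, RDS, etc.) usually requires SSL
config :hello, Hello.Repo,
  url: System.get_env("DATABASE_URL"),
  ssl: true,
  pool_size: 10

# lib/hello/release.ex — the usual release migration module; the catch is that the
# :ssl application isn't running yet when this is invoked via `bin/hello eval`,
# so start it explicitly (or add :ssl to extra_applications in mix.exs).
defmodule Hello.Release do
  @app :hello

  def migrate do
    {:ok, _} = Application.ensure_all_started(:ssl)
    Application.load(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end
```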
ALLEN_WYMA:
Now, is Tailscale something you have to pay for, or is there a free tier?
RICHARD_TAYLOR:
So it's a service you need to pay for, but they have a very generous free tier. I think you can connect like a hundred servers for free. So yeah, I
ALLEN_WYMA:
That's
RICHARD_TAYLOR:
haven't
ALLEN_WYMA:
nice.
RICHARD_TAYLOR:
had to pay for it yet.
ALLEN_WYMA:
Yeah, OK, that's cool. Because I was always interested in doing that multi cloud deployment just because I don't know, I always have issues with AWS. And I'd rather have another cloud provider just in case something happens.
RICHARD_TAYLOR:
Yeah. So again, one of the benefits of MRSK, one of their kind of core principles, is that it is completely cloud and/or bare metal agnostic. It just needs an IP address to SSH into, and then it can run on that. So you could literally build up your entire application stack somewhere like DigitalOcean, and then you could just boot a bunch of servers in AWS or some other cloud provider, Linode or something, change your IP addresses, and run deploy. And then
ALLEN_WYMA:
Thank
RICHARD_TAYLOR:
basically
ALLEN_WYMA:
you.
RICHARD_TAYLOR:
you'd have moved clouds. You could like literally have your entire stack running in a different cloud somewhere, update your DNS and you're basically done.
ALLEN_WYMA:
Yeah, I was looking for it, because actually MRSK also relies on, I mean, it's built upon a couple of different services put together. I think one was Traefik. Is that right? So it can help to, because everything's running with Docker, right? It runs Docker locally, and then it uses Traefik to help move the traffic from one container to the next, if I remember
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
correctly.
RICHARD_TAYLOR:
exactly. So the IP address that you're hitting, the port you're hitting, is Traefik, and then it load balances over the actual application. So it allows it to do zero-downtime deployments by booting up your new Docker container, running some light health checks to make sure it's running, and then just telling Traefik to switch over from the old version to the new version if your health checks pass. So that's another thing you get, actually, which is quite hard to do on your own if you're just deploying to your own server: zero-downtime deployments. It's quite nice.
ALLEN_WYMA:
So you've got Traefik, Docker, and I think it's just straight shell commands over SSH or something. Is there anything else that's part of the stack that comes from MRSK?
RICHARD_TAYLOR:
No, that's it literally. Yep.
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
And then MRSK has loads of commands that you can run as well, to access all the log files from all of the Docker containers from your local machine, which is quite nice. So you can stream all of the logs back to your development machine, which is quite handy.
ALLEN_WYMA:
Now, if there's a way that you can easily, what do you call that, remotely connect to your machines, in terms of, not just SSH, but with the observer, that'd be really nice. I guess there must be a way to do something like that.
RICHARD_TAYLOR:
Yeah, you can, because if you set it up as a cluster, it's easy to do, because you can get your local development machine to also connect to the same Tailscale network.
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
And then you can just automatically connect to the same Elixir cluster from your development machine, if you want, which
ALLEN_WYMA:
Oh
RICHARD_TAYLOR:
is brilliant.
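Richard's point about joining the cluster from a dev machine and inspecting it with Observer might look roughly like this, assuming the dev machine is already on the same tailnet and knows the release cookie (the node names, cookie, and IP below are placeholders):

```elixir
# Start a named, distributed IEx session on the dev machine first, e.g.:
#   iex --name me@100.100.1.50 --cookie my_release_cookie -S mix
# The cookie and node name must match the release's RELEASE_COOKIE and the
# production node's Tailscale IP.

Node.set_cookie(:my_release_cookie)

# Connect to a production node over the tailnet
true = Node.connect(:"hello@100.101.102.103")

# The remote node should now appear in the node list, and Observer can inspect it
Node.list()
:observer.start()
```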
ALLEN_WYMA:
yeah, that's true. Yeah, that might be useful. That might be useful.
RICHARD_TAYLOR:
Yeah, yeah definitely.
ALLEN_WYMA:
OK, yeah, I mean, going on to your article, I mean, it's cool. So you have to add Tailscale to your Dockerfile in order to use it.
RICHARD_TAYLOR:
Yeah, exactly. So basically you want each node of your app to connect to Tailscale, so not the host that's running it, you want your app itself to be able to connect to it. And when you connect your node to Tailscale, you give it a name, and in the Tailscale dashboard each of those names is then shown uniquely. So it'll say host-1, host-2, host-3, even if you connect each of them as "host". And so I was sitting there trying to figure out, well, if I want all of these different nodes that are connected to Tailscale to cluster together, how is it going to know which of those nodes on the Tailscale network should cluster together? Because if you were deploying a new version, you wouldn't want that new version to connect to the same cluster as the previous version, because your code might have changed and, you know, it might break. And so I was playing with the Tailscale API and figured out that the API tells you what the node name was that you tried to connect with. So using the API, I created libcluster_tailscale, which is a libcluster strategy for Elixir. And what that does is it knows the name of the application and the version that's currently running, and then it uses the Tailscale API to find any other hosts in that tailnet with the same name and version, and it automatically clusters them together. So you can boot up another server and it'll automatically discover it and connect it to the cluster, which is quite cool.
ALLEN_WYMA:
Now you said based on the version, right? You're talking about within the mix file, right?
RICHARD_TAYLOR:
So no, MRSK basically has... well, actually I had to add support for this in MRSK. So when MRSK deploys your app, it gives it a unique version. I think it uses the git SHA to try and figure out what version of your
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
app is currently running. And so that's how it makes sure that it knows which version is currently running and which version is the new version, and then switches them over. And so I added an upstream change to MRSK that adds an environment variable when booting your container, so that you can read it inside the container to find out what the unique version of the app you're currently running is. And then I use that when connecting to Tailscale. So when we use the API, we can actually find all of the nodes of this app at that version that are currently on Tailscale and just make sure all of those are connected together.
ALLEN_WYMA:
OK, yeah, that's obviously extremely useful. OK. And it seems like that's just a straight binary that gets run. And then I'm guessing libcluster knows where that is and can just call that binary to help communicate. That's what it looks like over here. Because,
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
I mean,
RICHARD_TAYLOR:
yeah,
ALLEN_WYMA:
you
RICHARD_TAYLOR:
exactly.
ALLEN_WYMA:
are, yeah.
RICHARD_TAYLOR:
So tailscaled is a binary which will connect to your own... I think they call it a tailnet, which is like your own private network. And then libcluster_tailscale is a libcluster strategy which starts up inside your Elixir app, uses the Tailscale API to discover all nodes with the same version of the app running, and then basically clusters them together.
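Wiring a strategy like that into libcluster is just the usual topology config plus the cluster supervisor. This is a hypothetical sketch: the strategy module name, the option keys, and the MRSK_VERSION variable name are assumptions for illustration, so check the libcluster_tailscale README for the real API.

```elixir
# config/runtime.exs — a hypothetical Tailscale-based libcluster topology
config :libcluster,
  topologies: [
    tailscale: [
      strategy: Cluster.Strategy.Tailscale,
      config: [
        # Tailscale API credentials and tailnet name (assumed option names)
        api_key: System.get_env("TAILSCALE_API_KEY"),
        tailnet: System.get_env("TAILSCALE_TAILNET"),
        # only nodes registered under the same app-plus-version name cluster together;
        # the env var MRSK injects is assumed to be called MRSK_VERSION here
        hostname: "hello-#{System.get_env("MRSK_VERSION")}"
      ]
    ]
  ]
```

The topology is then handed to libcluster's supervisor in the application's supervision tree, e.g. `{Cluster.Supervisor, [Application.get_env(:libcluster, :topologies, []), [name: Hello.ClusterSupervisor]]}`.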
ALLEN_WYMA:
Okay, and then you have a script file that includes the health check and also just runs Tailscale to start
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
the process.
RICHARD_TAYLOR:
so the health check stuff is again, it's all MRSK stuff. So
ALLEN_WYMA:
Mm.
RICHARD_TAYLOR:
when you do a new deployment, MRSK boots up your Docker image after it's built it, and then it does an HTTP check to make sure that the app is responding. And if it is, it tears it down, boots up the production version of it, and switches the traffic over. So in the script here in the blog post, basically, I'm checking to see if it is the health check container, because when it boots this temporary health check container up, I don't want that to connect to the Tailscale network,
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
because I don't want it to join with anybody else, and there would be nothing else for it to join with anyway, so it's basically a waste of time. So in those scripts, it's literally just saying, do this stuff if it isn't the health check container. And it checks that by looking at the MRSK container name, which contains the word healthcheck on the health check containers when it's
ALLEN_WYMA:
Gotcha.
RICHARD_TAYLOR:
starting out so you can kind of
ALLEN_WYMA:
OK,
RICHARD_TAYLOR:
just yeah.
ALLEN_WYMA:
I see. Yeah. It's been a while since I've seen EEx. I'm so used to seeing HEEx all the time. Yeah,
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
OK. Yeah, but then for the health check one, can you actually define an endpoint for the health check? Or does it always go for the root path?
RICHARD_TAYLOR:
No, you can define it, I believe. Sorry, just checking. Oh yeah, so you can. In the MRSK config, basically, there is a healthcheck section, and I've got just the path slash at the moment, but you can change the path to whatever you want. And I think there might be some other options in there as well for the different types of health check that you want to do.
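If you'd rather not health-check the root path, a dedicated endpoint on the Phoenix side is only a few lines; the route and controller names here are placeholders, and the path just needs to match whatever the MRSK healthcheck config points at.

```elixir
# router.ex — a lightweight health endpoint for the proxy / MRSK probe to hit
scope "/", HelloWeb do
  get "/health", HealthController, :index
end

# health_controller.ex — returns 200 without touching the database
defmodule HelloWeb.HealthController do
  use HelloWeb, :controller

  def index(conn, _params) do
    send_resp(conn, 200, "ok")
  end
end
```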
ALLEN_WYMA:
OK, I mean, it seems pretty straightforward.
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
the hardest part of all of it was literally trying to figure out how to get the cluster working, because I deployed it and thought, well, it's not very exciting if you can't actually cluster these nodes together. So the vast majority of the time I actually spent figuring out Tailscale. I had loads of issues getting Tailscale running inside the Docker container. I had to provide some upstream fixes to MRSK to allow some extra flags so that we could get that working as well. So yeah, it was quite a lot of work. And then trying to figure out how to auto-discover them and things like that. At one point I thought it was probably not going to be possible, but thankfully, via the Tailscale API, it was possible to determine that as well.
ALLEN_WYMA:
I'm having just a little bit of difficulty understanding. So what I see is you have two bash scripts in here. You have an env.sh, and you also have the server one. So that means you always source the env.sh, and then you're going to run the start server script. Is that how this works?
RICHARD_TAYLOR:
Yeah, so the env.sh is part of the releases, it's part of the Elixir releases. And so that is something that gets invoked when your release starts by Elixir.
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
And so inside there, it's got basically the RELEASE_NAME, the RELEASE_NODE and the RELEASE_DISTRIBUTION. And so those are...
ALLEN_WYMA:
Oh, OK.
RICHARD_TAYLOR:
Those are pretty standard clustering environment variables that you need to set whenever you're clustering. They get added in when you're deploying to Fly, for example, as well. And inside there, what we needed to do is make sure that the node name in the cluster contains the IP address that that node has in Tailscale, because otherwise they wouldn't be able to talk to each other. So this script basically just gets the Tailscale IP address that that node currently has, and then uses that IP address in the node name when starting the Erlang node. And then all of those nodes can talk to each other, because they're all on the same private network and they can all see each other on their Tailscale IP addresses. And the second script is the one that runs when the container's booted and starts the server. And in this one, basically, it just says that if it's not the health check container and it is supposed to be started, make sure that Tailscale starts first. So it boots Tailscale and then it starts the Phoenix application.
ALLEN_WYMA:
Wait, is tailscale ip --4
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
for the env.sh file?
RICHARD_TAYLOR:
Yeah, so that is the IPv4 address. So I think you get an
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
IPv6 address as well from Tailscale.
ALLEN_WYMA:
Okay. Gotcha. I get it now. Okay. So, going back to the start server script, you have to boot up tailscaled and then use tailscale up. I guess it's going to talk back to the daemon, right, and say, okay, do this. It seems a bit weird to have two binaries, but okay.
RICHARD_TAYLOR:
Yeah, so tailscaled is like the networking stack. So
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
it basically runs everything. And then tailscale up is just that node connecting to the Tailscale network.
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
And so when you do tailscale up, you have to pass in an auth key. And so it then knows that it's connecting to your tailnet because that auth key basically defines
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
which tailnet it's connecting to.
ALLEN_WYMA:
You know, I never look at this stuff since I have that command to generate the Docker stuff.
RICHARD_TAYLOR:
I'm sorry.
ALLEN_WYMA:
So I just learned a lot today just from this. I'm like, what is this? Where the heck is this coming from? And then just looking at it as you're talking, oh, OK. I didn't know I could do this.
RICHARD_TAYLOR:
Mm-hmm. So yeah, a lot of this stuff kind of gets generated automatically by Fly when you're deploying to Fly as well, if you use their generator to generate the
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
config. A lot of this stuff gets added automatically. So I was using that quite a lot as a resource to try and figure out some of this.
ALLEN_WYMA:
OK, yeah, I mean, this is really nice.
RICHARD_TAYLOR:
So there's quite a lot in there, but I think it's one of those that
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
once you've got it set up and you've deployed it, then you'd pretty much never have to touch it again. And, you know, in the back of my mind, when I was going down this rabbit hole, I was thinking to myself, well, why wouldn't I just deploy to Fly? You know, it's a lot easier and you get the same kind of thing. But one of the main reasons is that it's cloud agnostic. It allows you to have the same kind of feature set that you get there, but not be tied into a single vendor. But the Tailscale blog post got picked up on Hacker News, and the first question someone asked was, why wouldn't you just use Fly? Great.
ALLEN_WYMA:
Well, I actually didn't have a very good experience with Fly the first time I used it, and I haven't really tried it out again. But yeah, I mean, I can see how it is easy to use. The one thing I found difficult is that it wasn't straightforward to create another environment. Because of course, you have your test environment, you have your production. So that wasn't so clear to me.
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
But yeah, I mean, other than that, if you just have one environment, you just push, it just runs, and it's nice. But once you start doing things besides just deploying, I think sometimes there can be some issues there, which is actually what I ran into.
RICHARD_TAYLOR:
Yeah, so I've used it quite a lot and I've had issues and they've obviously had a few issues lately. They've
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
kind of built a new v2 stack which they're migrating everybody over to now and apparently that's a lot more stable than the original
ALLEN_WYMA:
That's what they
RICHARD_TAYLOR:
stack.
ALLEN_WYMA:
said, yeah.
RICHARD_TAYLOR:
But with multiple environments basically when your config file is generated it kind of has the app name in it but
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
if you delete that from your config file and then just provide it at the command line every time you deploy, you can do dash a and then the name of the app.
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
Then you can basically deploy to multiple environments then just by changing the name, which is quite easy.
ALLEN_WYMA:
What I ran into before, and I think I mentioned on the podcast, is I transferred the app from my own personal to an org. And
RICHARD_TAYLOR:
Okay.
ALLEN_WYMA:
then it broke. Yeah. And somebody today was asking this question on Reddit. And I tried to give honest feedback, like, well, this happened to me, and I did this. And then somebody replied back saying, well, that's what happens when you're too cheap. I was like, what are you talking about? I literally ended up paying for the service. And then it took three days for them to reply back over email for support. Like, I
RICHARD_TAYLOR:
Oh,
ALLEN_WYMA:
don't know what you were.
RICHARD_TAYLOR:
yeah.
ALLEN_WYMA:
How was that cheap? I paid $30. What do you want from me?
RICHARD_TAYLOR:
Yeah, no, it's not good.
ALLEN_WYMA:
Yeah, like the database became detached, and I couldn't reattach it. And I tried a command I found, and it didn't work, because they want you to pay in order to make it work. I was like, that doesn't make sense. Why would you let me change orgs and then ask me to pay just to fix something? When you move the app, move the app along with the database; why would I want to move just the app and not the database? It doesn't really make sense. Anyways, I'm going off on a tangent. But yeah, I mean, it works pretty well, but when you start doing things beyond just deployment, once you've deployed it, then I think it's not so clear. It sounds like you had some issues yourself too. So there's no perfect service.
RICHARD_TAYLOR:
No, no, exactly. You're right. Nowhere is perfect, is it? But I guess if it's not under your own control, it's
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
the lack of control, isn't it? So if you're using a third party and it's broken and they won't respond, then there's nothing you can do about it. Whereas if you're deploying yourself, you know, if you can't access your host or whatever, you can just boot up a new host and run deploy again, and this will kind of fix itself. So it's quite nice.
ALLEN_WYMA:
I'm happy I had a patient client who understood. But
RICHARD_TAYLOR:
Sounds like a really great client as well.
ALLEN_WYMA:
yeah, actually, the one thing I do have a question about is, of course, there are different versions of Linux, and everyone has their own package manager. So what I understand is that you're actually able to run MRSK against a brand new host with nothing installed. But how does it know how to add the things that you need? That part is still a little bit eluding me.
RICHARD_TAYLOR:
The only thing it really needs is Docker.
ALLEN_WYMA:
Okay, so you have to install Docker first, and then you're ready to go?
RICHARD_TAYLOR:
And it will install Docker for you as well. Yeah, it might actually
ALLEN_WYMA:
OK.
RICHARD_TAYLOR:
be, it might have to be Ubuntu actually, I can't remember if they've got a restriction
ALLEN_WYMA:
Okay, because
RICHARD_TAYLOR:
on exactly
ALLEN_WYMA:
that was the one thing in my head. It's like if you're using like, SUSE over here and Ubuntu over there and Debian over there, can it actually figure out all this stuff?
RICHARD_TAYLOR:
Yeah, I think it might, I think they may say that you have to use Ubuntu. But if you run it on a, on a clean host, it will literally download and install Docker for you as well before starting.
ALLEN_WYMA:
Yeah, I'm just looking at it. It says, installation connects to the server over SSH, installs Docker on any server that might be missing it, using apt-get. So I think that covers both Debian and Debian-based, which would include Ubuntu. Does that make sense? Yeah, that was just the one thing, because my understanding was, you just have a brand new host, you run setup, and it just does everything. And I'm like, how does it know how to install what it needs then? That didn't really make sense.
RICHARD_TAYLOR:
And so
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
the kind of final part of this whole thing as well was the ability to deploy to different cloud providers at the same time, which was quite interesting. So you can have one host running on a Linode, one host running on DigitalOcean, both in different continents if you want. And so yeah, I mean, I can't think of many real-life use cases where you might want to do that, but it was still interesting to do.
ALLEN_WYMA:
Well, I don't know much about Traefik, though. Now I'm thinking in my head, how would you handle this? Does Traefik also handle SSL?
RICHARD_TAYLOR:
I don't... I think you probably can get it to work with SSL, but I think in this setup the idea might be to run this behind Cloudflare or something like that. So your
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
app would actually, or your SSL termination would happen at Cloudflare.
ALLEN_WYMA:
Cloudflare. So that's how they recommend you do it is over Cloudflare.
RICHARD_TAYLOR:
That is one of the examples they give, yeah, but I think they say any kind of front-end proxy would be fine. But I think Cloudflare is one of the examples they give.
ALLEN_WYMA:
So let me take a look at that. That's the only thing I think it's missing, other than, I mean, I think there are S3-like things that you can run through Docker. So that would be the only other thing I would think people really need.
RICHARD_TAYLOR:
Yeah, yeah. I kind of think you could probably set it up so that it does the SSL termination in Elixir as well. I think Saša,
ALLEN_WYMA:
Oh yeah, yeah, I forgot,
RICHARD_TAYLOR:
Saša Jurić's
ALLEN_WYMA:
yeah.
RICHARD_TAYLOR:
got a library I think
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
that does Let's Encrypt SSL certificates automatically. So I was
ALLEN_WYMA:
True.
RICHARD_TAYLOR:
thinking, yeah, if you just wanted like a one server setup with no fuss, then that's one way of going about it.
ALLEN_WYMA:
Or if somehow you can point it to Nginx or something like that, and then that would route the traffic around. But I think the solution that you propose is probably a really good one. The only other issue with that one is that you'd have to have the SSL certificate stored in a place that can be reused, you know what I'm saying? The key plus all that stuff. I don't know, I'd have to go back and see how it works for his thing. Because you wouldn't want everyone to keep retrying and getting new keys. Does that make sense? I'm trying to
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
think of how that would actually work.
RICHARD_TAYLOR:
I think it stores the keys on the file system. So you can mount your host file system
ALLEN_WYMA:
Okay.
RICHARD_TAYLOR:
to the Docker container. So as long as you're writing it to a specific path, I think it would store it and then reuse it.
ALLEN_WYMA:
So could you actually tell MRSK to upload files and then mount those?
RICHARD_TAYLOR:
You can tell it to mount folders within the container just like you would
ALLEN_WYMA:
OK,
RICHARD_TAYLOR:
with
ALLEN_WYMA:
perfect
RICHARD_TAYLOR:
volumes on Docker containers.
ALLEN_WYMA:
then.
RICHARD_TAYLOR:
So
ALLEN_WYMA:
Then
RICHARD_TAYLOR:
in
ALLEN_WYMA:
that could work.
RICHARD_TAYLOR:
the Postgres example there, it basically uses the host
ALLEN_WYMA:
Oh yeah.
RICHARD_TAYLOR:
file system to store the Postgres data, so you can reboot the server and your data will be present.
ALLEN_WYMA:
Oh, duh. Yeah, obviously you need that. I almost forgot about that.
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
Ah.
RICHARD_TAYLOR:
I think when I first said it I forgot about that too and I was like, oh wait, okay.
ALLEN_WYMA:
Makes sense. OK. Yeah, over here. Yeah, now I see the health check path. So yours was on the root path. Makes sense. Yeah, I mean, it seems pretty straightforward. I mean, did you have some specific issues trying to get this going? Because obviously, there are some things that are not so clear at the beginning. Like you said, oh, I didn't mount the path for the Postgres, right? I mean, were there other things that kind of caught you, where you're like, oh, this wasn't clear? Or, you know, how did you get through it?
RICHARD_TAYLOR:
Yeah, I think for the initial setup on the single host, it was pretty straightforward. I don't think I had too many things. I think it was trying to make sure that the Postgres database had the right username and password, the ones that Phoenix was expecting it to have, and that the data directory was mapped correctly. Most of the hard work and the complicated stuff was around the clustering side of it. That was definitely... it took quite a lot of time to unpick all of that and get to the bottom of it. As far as just deploying to a single host, MRSK kind of just worked out of the box, really.
ALLEN_WYMA:
Yeah, and with the amount of time that Rails has been out, those guys have been solving these issues for years, and that's the one thing I really love. Some of that culture of solving issues rubbed off from the Ruby community into Elixir, and then people either use that directly or look at it and copy it, or, what's the word for that? Not emulating, but... it's like where you look at it and then you do a similar thing that maybe works better for your community. You know what I mean?
RICHARD_TAYLOR:
Yeah. Yeah.
ALLEN_WYMA:
That's something I love about Ruby. And then, like I said, Elixir is doing similar things.
RICHARD_TAYLOR:
Yeah, it's so nice. And, you know, as soon as I saw that 37signals, or DHH specifically, was working on something, I thought it must be interesting to look at, because he doesn't put projects out for no reason. If he's created something new, there's usually a good reason for it. And the fact that they've battle-tested it live, I think they've nearly migrated all of their services over to bare metal now, using MRSK to deploy it. So you know that it's probably pretty reliable as well.
ALLEN_WYMA:
So Jeff Bezos must be pretty upset about this. He's missing
RICHARD_TAYLOR:
I'm sorry.
ALLEN_WYMA:
out. I think they said they pay like a million bucks a month or something ridiculous. It was crazy. The
RICHARD_TAYLOR:
Yeah, yeah.
ALLEN_WYMA:
article is, I like his article because it's kind of fact-based, a number-based kind of thing: this is what we're paying, and you figure out that it doesn't make sense. And it's like, we have these servers, and even though they're old servers, they still work great, and we know our traffic is steadily growing at a certain pace, and it doesn't make sense for us to pay this bill because we already have everything that we need. And I like the fact that he kind of called BS on a lot of these things. Like, oh, move to the cloud, you're going to save money. OK, no, because here are the numbers. OK, move to the cloud and you can save money by not hiring more DevOps people. No. I think, did they say they hired more? They never actually lost anybody in terms of scaling the team down. I don't think they've ever scaled it down. They
RICHARD_TAYLOR:
No.
ALLEN_WYMA:
may have actually had to add some people. I'm not too sure. So it's actually made things more complicated, if not the same, right?
RICHARD_TAYLOR:
Yeah, exactly. I've seen a lot of companies though, that you get into the cloud when you're starting out and it's really cheap, right?
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
And it's definitely attractive. But if you hit scale and you start growing, then basically your costs spiral so quickly, and most companies just can't be bothered with the cost of moving off. And you're kind of tied in forever. You've got to work at a company that really understands that, where your team can handle it, and then if you wanted to migrate to another cloud, you can do it. But yeah, I think it's very rare that that happens.
ALLEN_WYMA:
Well, the other thing I think is pretty interesting is that each cloud is going to be different in pricing too. So it's not like, okay, just choose a cloud and you're good to go. And you'd think that AWS would be cheaper, at least that's what's in most people's minds. But I've had AWS contact me a couple of times, and they're like, what are you paying for your hosting? I'm sure we can save you money if you switch over. And I sent them my stats, like how much RAM, et cetera, this is what I'm using, this is my price. And they replied back, okay, well, this is comparable, and this is what you would pay. And my hosting is like 40 or 50 US dollars a month; my clients have their own hosting, and I just host small stuff on my side, so 40, 50 is like nothing, right? And on the comparable stuff, they couldn't even match it. They were like another 20, 30 dollars higher a month. And I even replied back and said, did you just say that you're actually more than what I'm paying right now? And then they didn't reply back. They were a little bit embarrassed.
RICHARD_TAYLOR:
Yeah, that's been my experience as well, actually, whenever there have been services that pop up that say, Hey, we'll manage your AWS infrastructure for you. And we'll, we'll
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
make sure it's cheap, and things like that. And then under the hood, you see that they're using Kubernetes and they're spinning up like three-server clusters and things like that. And that's just the bare minimum that you have to pay before you start adding on
ALLEN_WYMA:
here.
RICHARD_TAYLOR:
all of your apps and things, and it's always just ridiculously more than just pushing it to Heroku or Fly, or even your own VPS, basically.
ALLEN_WYMA:
The crazy part is the calculator. It's like ridiculous. It's like I'm calculating coordinates to send a missile somewhere.
RICHARD_TAYLOR:
I'm sorry.
ALLEN_WYMA:
It's like nuts, all these different variables you've got to punch in. That's
RICHARD_TAYLOR:
Yeah,
ALLEN_WYMA:
f-
RICHARD_TAYLOR:
it's impossible, isn't it? Like you literally just have to boot it and see how much they charge you. That's the only way of actually
ALLEN_WYMA:
Yeah,
RICHARD_TAYLOR:
figuring it out.
ALLEN_WYMA:
we started to turn off servers for one of my clients, they're closing down, and they're like, okay, can you turn off everything? And then you have to wait a couple of days, or at least a day, and then you'll see what your projected month is and what next month's going to be based on current usage, right? Because you really can't tell, you have to look at the billing. The fact that they have a whole billing section in terms of cost analysis, it's insane, like,
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
that they've built so much around just billing itself, in terms of forecasting the amount you're going to spend.
RICHARD_TAYLOR:
I think until recently, that billing dashboard has got better, but it was at the point where you couldn't really figure out how much you'd spent either. Even if you went into the billing dashboard, it was all segregated by region and things like that. And you couldn't
ALLEN_WYMA:
Oh,
RICHARD_TAYLOR:
like
ALLEN_WYMA:
yeah.
RICHARD_TAYLOR:
see it all in one place. Yeah,
ALLEN_WYMA:
Now, yeah, you had to switch regions to see what you're doing, right?
RICHARD_TAYLOR:
exactly.
ALLEN_WYMA:
And then actually, I remember this now, because I remember you have to check every region and see what the billing is. Because you may forget that somehow, somewhere, you spun up a service by accident or whatever. And then you're getting a bill, and you're like, what the heck is this? I don't even have this thing running. And you have to check all the environments, or rather all the regions. It's nuts.
RICHARD_TAYLOR:
I think it's got a little bit better in recent years.
ALLEN_WYMA:
Yeah, it's better, but still.
RICHARD_TAYLOR:
Yeah, it was a bit of a nightmare originally.
ALLEN_WYMA:
Yeah, sorry, but going back to your article: it's a really great article. You hit a lot of points. Obviously it's not going to include everything, but it covers all the major parts. And I like the fact that you include Tailscale, right? Is that how you say it?
RICHARD_TAYLOR:
Thanks.
ALLEN_WYMA:
Yeah, that's nice. That was always my question: how can I connect these things? Because I know it's been done. I mean, it's done all the time, right? This multi-cloud setup and how the pieces connect. And I'm happy that you have something here, because I was never actually sure how the heck to do it. It seems pretty straightforward with Tailscale.
RICHARD_TAYLOR:
Cool, thanks. Yeah, when it got posted on Hacker News, there were a few other alternatives posted in the comments, like some open source alternatives to
ALLEN_WYMA:
Mm-hmm.
RICHARD_TAYLOR:
Tailscale, which is quite nice. So it'd be an interesting project to have a look at some of those as well and see if you can set up your own Tailscale-like cluster.
ALLEN_WYMA:
Do you know the names of those in case people are interested in some of the more open source ones?
RICHARD_TAYLOR:
I can't remember them off the top of my head, but I can find the links and I can send them to you afterwards.
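For anyone curious what the multi-cloud clustering piece looks like in practice, here is a rough sketch of wiring Elixir nodes together once they can all reach each other over a Tailscale network. It assumes the libcluster library with its plain EPMD strategy; the node names and 100.x Tailscale IPs are placeholders, and the article's actual setup may differ.

```elixir
# config/runtime.exs (sketch; assumes {:libcluster, "~> 3.3"} is a dependency)
import Config

config :libcluster,
  topologies: [
    tailnet: [
      # Static EPMD strategy: a fixed list of node names that are reachable
      # over the private Tailscale network, regardless of which cloud each
      # VM actually lives in.
      strategy: Cluster.Strategy.Epmd,
      config: [
        hosts: [
          :"myapp@100.64.0.1", # e.g. a VM on one cloud (placeholder IP)
          :"myapp@100.64.0.2", # e.g. a VM on another cloud (placeholder IP)
          :"myapp@100.64.0.3"  # e.g. a VM somewhere else (placeholder IP)
        ]
      ]
    ]
  ]
```

Each release would then be started with a matching node name, something like RELEASE_DISTRIBUTION=name and RELEASE_NODE=myapp@100.64.0.1, so the BEAM nodes can find each other over the tailnet without anything being exposed on the public internet.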
ALLEN_WYMA:
OK. Yeah, OK. I mean, other than that, I think the article itself is clear. I think we went pretty in depth. I don't really have any more questions other than just checking it out and trying it out myself. Is there anything that we missed that we kind of need to go over?
RICHARD_TAYLOR:
No, no, I think that was it.
ALLEN_WYMA:
I think
RICHARD_TAYLOR:
Yeah.
ALLEN_WYMA:
we talked, like, ten times longer than the actual article, because there's a lot in here in terms of what's going on behind it, right? The story of why it's happening, the background of it all, and how the cloud is slowly eating us all monetarily.
RICHARD_TAYLOR:
Yeah, it took a long time to put together, actually. And
ALLEN_WYMA:
Yeah.
RICHARD_TAYLOR:
I had to keep cutting stuff out because it was just getting too long. One whole section of it I ended up publishing as a separate blog post, because I thought if I tried to do it all in one, it would just be too much to take in. So yeah, thank you.
ALLEN_WYMA:
OK, yeah. I mean, with that, let's transition over to picks.
RICHARD_TAYLOR:
Cool.
ALLEN_WYMA:
For me, actually, I have what I call a non-pick. I've been picking games recently, because I've been trying to relax at night and games do it for me. There's a game I played for a short while, and then I shut it off and uninstalled it immediately, because I hated it. So it's kind of a non-pick, basically. It's Aliens: Colonial Marines. I don't know if you've ever heard of that game or not. It's basically known to be a disaster, and I thought it wouldn't be as bad as it was, but it was pretty bad. I'm not usually one to just turn off a game and uninstall it immediately, but that was the first time.
RICHARD_TAYLOR:
Cool, I'll stay clear of that.
ALLEN_WYMA:
Yeah, stay clear of that one, please.
RICHARD_TAYLOR:
Cool. Yeah, I had a game as well, actually, that I haven't played yet, but it's been on my list of games that I really want to play. It's called Lunark. L-U-N-A-R-K. It's basically a 2D platformer inspired by two of my favourite games from the 90s, one called Flashback and another called Another World. So yeah, it just looks amazing. I think it was a Kickstarter project, so it's an indie game dev, and it just looks really good. So yeah, looking forward to playing that.
ALLEN_WYMA:
Cool. That's the only one. I just want to make sure. I thought you said you had a couple.
RICHARD_TAYLOR:
Oh, yeah, sorry, that's the game. Yeah,
ALLEN_WYMA:
OK.
RICHARD_TAYLOR:
so an app that I just started using this week called Mimestream, which is new. It's basically a macOS email client, but specifically for Gmail. It uses the Gmail API, so you get quite a nice integration rather than just going over IMAP. And then another one: this week I was setting up a laptop and I discovered Homebrew autoupdate. I'm probably late to the party on that one, but it basically does what it says. It updates your Homebrew dependencies automatically on a schedule, every 24 hours, so it saves me having to run the commands. So I definitely recommend that.
ALLEN_WYMA:
I kind of enjoy running brew update and brew outdated and all that stuff. It's kind of become a habit now.
RICHARD_TAYLOR:
Yeah, I mean, I do it as a habit as well, but having it there, it basically does it for you and then sends you a notification if it's updated anything, so it's quite nice. You still get an idea of what it's done, but you don't have to do it yourself.
ALLEN_WYMA:
I mean, you have to be careful too, because sometimes you update and it actually breaks stuff. Something that happened to me recently: I have basically one database server on DigitalOcean that runs all the databases for my apps. I don't have that many things running on there. And I updated it to the most recent version of Postgres. Did you know what Postgres did recently in terms of
RICHARD_TAYLOR:
No.
ALLEN_WYMA:
this? So you know how you always have the public schema, right? Now, by default, the public schema is actually not writable for regular users, or whatever you want to call it. You can't actually use it out of the box.
RICHARD_TAYLOR:
Oh well.
ALLEN_WYMA:
Yeah, so I created a brand new database and I wanted to go deploy an app, and I was like, what the heck? I added SSL, so what am I missing? And I couldn't figure it out. I spent an hour or so and then I just sent an email over, and they're like, oh, did you try this? And I was like, I tried all that. And then, oh, here's a link, it looks like you're using the latest version of Postgres and this is a new change. So it was quite a shocker. I was like, why the heck did they do that? They have a reason why they did it. I won't go through it here because I've kind of forgotten it, but from what I remember, it's because things you put into the public schema end up accessible to everything by default, and they're trying to stop that.
RICHARD_TAYLOR:
Oh yeah, I haven't heard of that. Is this in Postgres 15 or something then?
ALLEN_WYMA:
Yeah, the latest one, I think it's 15. So it's a good thing to know.
RICHARD_TAYLOR:
Yeah, yeah, definitely. I'll try and remember that.
ALLEN_WYMA:
And so I'm also wondering: do we now need to handle this when creating a new Phoenix app? Should it just automatically create a new schema that matches your database name, or something like that?
RICHARD_TAYLOR:
Oh, well, yeah, interesting.
ALLEN_WYMA:
They haven't talked about it, so I'm kind of curious.
RICHARD_TAYLOR:
Cool, I haven't heard anybody hitting that yet, but it's a good one to know about for when I do.
ALLEN_WYMA:
Yeah, that's why I got shocked by this. What the heck is that? Okay.
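For anyone who hits the same thing, here is a minimal sketch of one way to deal with it from an Elixir app, assuming Postgres 15 and placeholder names ("myapp" for the schema, "myapp_user" for the database role); it is not something Phoenix generates for you.

```elixir
# priv/repo/migrations/20230101000000_create_app_schema.exs (hypothetical file)
defmodule MyApp.Repo.Migrations.CreateAppSchema do
  use Ecto.Migration

  def up do
    # Option 1: create a dedicated schema instead of relying on "public",
    # since Postgres 15 no longer lets non-owner roles create objects
    # there by default.
    execute "CREATE SCHEMA IF NOT EXISTS myapp"

    # Option 2 (alternative, often run directly as the database owner):
    # grant CREATE on "public" back to the app's role.
    # execute "GRANT CREATE ON SCHEMA public TO myapp_user"
  end

  def down do
    execute "DROP SCHEMA IF EXISTS myapp"
  end
end
```

If you go the dedicated-schema route, you would also point Ecto at it, for example with migration_default_prefix: "myapp" in the repo config and @schema_prefix "myapp" on the schemas that should live there.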
RICHARD_TAYLOR:
So yeah, that's it for my picks.
ALLEN_WYMA:
Cool. Awesome. Yeah, and with that, thanks for coming on. And I'm happy that you corrected me that you were not on the podcast before, and that you actually remembered exactly which episode it was where we talked to Anna about your article. So I guess that was a highlight in your life. But yeah, again, thanks for the article, and thanks for doing all the legwork for the rest of us, so now I can just reap the rewards.
RICHARD_TAYLOR:
No worries, thank you for having me.
ALLEN_WYMA:
Great.