CI/CD Pipelines - ELIXIR 192

The Elixir Mix panel discusses how they run their CI/CD pipelines, how they set them up, and what they do to make them a valuable part of the development process. They also discuss caching, how deep it needs to go, and how they approach getting the best and most information out of the systems they're running.

Show Notes


Links:


Picks:

Sascha
Allen

Transcript


Sascha_Wolf:
Hey everybody and welcome to another episode of Elixir Mix. This week on the panel we have Allen Wyma.
 
Allen_Wyma:
Hello?
 
Sascha_Wolf:
And me, Sascha Wolf. That's it. We don't have any guests today. We have no Adi. Adi is busy with work. Bad Adi. Bad Adi. If you listen to this, Adi, I'm very disappointed. And we are going to talk about CI/CD pipelines, because we had an episode on this a few weeks back and I wasn't there. And I have lots of things to say about CI/CD pipelines. So I just grabbed Allen and I was like, Allen, I want to talk about this. And you are not allowed to say no. So
 
Allen_Wyma:
Nein! Cannot say
 
Sascha_Wolf:
nein,
 
Allen_Wyma:
nein.
 
Sascha_Wolf:
nein. Yes, exactly. We don't want to rehash what we talked about in the previous episode, but we want to talk more about, OK, what are some best practices in general, and how do they apply to Elixir? So with that said, Allen, what does your usual CI/CD pipeline look like? Do you have some templates you always use? What does CI/CD constitute for you personally?
 
Allen_Wyma:
Yeah, I mean, mine is pretty straightforward. So usually I have like a staging branch, which is what I call my pre-production branch, or whatever everyone calls that one. And I also have a master branch, or usually main if you guys are doing it that way, for production. And then every other branch, yeah, we also run CI on, but of course it's different, right? So I'm using GitLab CI. I'm not sure if you've played around with that one before.
 
Sascha_Wolf:
Mm-hmm.
 
Allen_Wyma:
So that was pretty straightforward. I mean, basically you can copy and paste everything. But yeah, so for every branch, you always run all the tests. Now I've changed, I didn't have this one before, but because of Adi, I kind of decided to change it now. So now every time I run mix test, I run mix test --cover, and try to get my coverage up high. And for master and staging, what I usually do is build the container, run the tests, and then deploy. Which I think I'm going to change to first test, then build the container, then deploy, because sometimes you build a container and then you test it and it's broken. So that's a problem. Otherwise I think, for most of my cases, it works just fine, right? When we had the episode, I think the guests gave a lot of really, really cool stuff that I'm thinking of adding in, but I'm not too sure about all of them. I think I gave you the list, the link, right? Did you take a look at those?
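For readers who want to follow along, a minimal sketch of the kind of GitLab CI setup Allen describes might look like this. The stage layout, image tag, and deploy script are illustrative assumptions, not his actual configuration:

```yaml
# .gitlab-ci.yml (sketch): tests on every branch, build + deploy only for master/staging
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: elixir:1.14
  script:
    - mix local.hex --force && mix local.rebar --force
    - mix deps.get
    - mix test --cover
  cache:
    key:
      files:
        - mix.lock
    paths:
      - deps/
      - _build/

build:
  stage: build
  only: [master, staging]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  only: [master, staging]
  script:
    - ./scripts/deploy.sh "$CI_COMMIT_SHORT_SHA"  # hypothetical deploy script
```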
 
Sascha_Wolf:
I briefly, I mean, like I said, I wasn't there and also didn't dig too deeply into the episode. And I also haven't listened to it. So you cannot educate me. But I briefly
 
Allen_Wyma:
Bad
 
Sascha_Wolf:
went
 
Allen_Wyma:
Sascha,
 
Sascha_Wolf:
over the list.
 
Allen_Wyma:
Bad Sascha.
 
Sascha_Wolf:
Yes, I actually occasionally do listen to episodes of ours if I want to refresh myself on something we talked about. So that happens. It's my own little note box sometimes to remind myself of things. But yeah, in the episode with the guest, he actually has a quite extensive list of checks. I'm not sure if it's worth going into those too much, because I mean, we did already have that episode talking about it. But it's definitely more than I usually do. And I already am somebody who likes to cross his t's and dot his i's, so to speak. For example, I mean, I publish a number of open source libraries. There's another one in the pipeline, in the making. And actually I wanted the episode to be about that one, but then Adi didn't show up, and I wanted to talk about it with you, Allen, and Adi. But there tends to be a template I'm using there, which I wrote myself, because depending on what kind of software product you're writing, it's sometimes very, very useful to also have multiple different versions running the tests, right? In this case, for my libraries, I run the tests on a bunch of Elixir version and OTP combinations, just to make sure that it's compatible with older Elixir versions and newer OTP versions. And on there, I then always, always, always run the tests, but I also have one target which is just doing coverage. I usually use Coveralls for that, because I like the integration with GitHub there, and also how it posts comments on pull requests. And I also tend to do Dialyzer for my personal projects, for my libraries. I don't tend to do Dialyzer anymore for, yeah, private projects, products, software development kind of thing, because I feel it's not worth the hassle. But for libraries, where people actually might consume those libraries and they also might run Dialyzer on their projects, I don't want to have them deal with broken type specs, which is why I use Dialyzer for these libraries, but not for products. Yeah, sue me.
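A sketch of what such a version matrix can look like in GitHub Actions. The concrete Elixir/OTP pairs are placeholders; a real library would list whatever combinations it claims to support:

```yaml
# .github/workflows/ci.yml (sketch): run the test suite across Elixir/OTP combinations
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - elixir: "1.12"
            otp: "24"
          - elixir: "1.13"
            otp: "24"
          - elixir: "1.14"
            otp: "25"
    steps:
      - uses: actions/checkout@v3
      - uses: erlef/setup-beam@v1
        with:
          elixir-version: ${{ matrix.elixir }}
          otp-version: ${{ matrix.otp }}
      - run: mix deps.get
      - run: mix test
```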
 
Allen_Wyma:
I'm surprised you actually run Dialyzer because that is a pretty expensive process, right? And also you need to hold on to those PLT files until the next
 
Sascha_Wolf:
Yes.
 
Allen_Wyma:
time you run it.
 
Sascha_Wolf:
Yes. Which is also why I don't deal with that complexity anymore for private code, let's say, code which isn't consumed by somebody else. Because, I mean, and this is where we can kind of go into the best practice part of things, right? At the end of the day, to really give you value, a CI/CD pipeline needs to run as fast as possible, right? In my experience, the magic number is kind of a minute. So if your CI pipeline tends to finish in under one minute, then people are even willing to say, okay, I'm going to push something, and I'm going to sit here, lean back, drink a sip of coffee, and wait until this thing finishes, right? If you go significantly over that, then people start to do other things in between, which doesn't invalidate the CI process, but it removes a whole lot of value from it. Because having this super fast feedback loop is just basic. It's where you get the most value out of a pipeline. Maybe you have some complex test setup, I don't know, with Postgres running, and yeah, you can run it locally, you should be able to run it locally; in the best case, it runs locally as well as in the CI pipeline. But maybe you also have some more elaborate acceptance tests. And if those also run super quickly, then you can really have this fast feedback loop of, okay, I'm writing code, I'm pushing, I'm seeing what the CI says, then picking it up, so on and so forth. And Dialyzer.
 
Allen_Wyma:
No, sorry, I wanted to ask you a question about the one minute part, right? Because that's really short. Because every time I run my CI, I always do everything basically, what I would call in quotes, "from scratch", right? Grabbing the dependencies, compiling the whole project. I mean, let alone, compiling the project by itself for production will take a while, especially if you're building a container. That's definitely over a minute all by itself, right? So are you caching like everything
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
all
 
Sascha_Wolf:
yeah,
 
Allen_Wyma:
the time?
 
Sascha_Wolf:
yeah.
 
Allen_Wyma:
Okay.
 
Sascha_Wolf:
I'm caching all the time. And I also deliberately exclude container building from that. So, I mean, earlier you said you're considering moving container building to after testing. From my point of view, I would do both in parallel and only push the container if the tests were successful. But container building, I mean, image building or container building, image building is something which is slow by nature, because you do everything from scratch. I mean, Docker does have some nice caching capabilities, but again, those are non-trivial to set up. So I would only really reach for those if you actually have, I don't know, image build times of 15 minutes or longer. Then I might go for those. But for the sweet spot, the really CI part of CI/CD, where you want to run your tests, where you want to run some checks, where you want to run some things, there I'm always aiming for this one-minute mark, because then I can just iterate quickly. I can push a commit and then I can say, ah, now it works, now it doesn't, or, oh, this will decrease code coverage, so I'm just gonna write another commit. I can still run tests locally, but I don't need to run the whole shebang of everything. For example, the formatter or Credo, those all usually run inside of CI/CD, and I tend to rely on those because they run so quickly. But that only works by using caching. And, like I said, to get into the whole best practice part, that is where I feel a lot of room, a lot of growth potential is there in general among developers, because some people tend to really dig into CI/CD topics, and I'm one of those. I just find it interesting. I don't know, I get an insane amount of satisfaction when I actually have a CI/CD pipeline and it just works. I mean, to give you one example, which is arguably over-engineered, but I enjoy the heck out of it: for all of my projects on GitHub, I use GitHub Actions, and I fetch the version number, also for the Elixir, like mix, kind of version, from the GitHub release. So basically, when I draft a new release inside of GitHub, I trigger a release pipeline, which takes the version number I put in the release and the tag. It puts it in a file which is just called VERSION and writes that to disk, and then it publishes that to Hex, and the version file is published alongside everything else also to Hex. It's like an additional file. And I load that file from disk to fetch the version number of the library. So I only really ever put the version number inside of this one release and nowhere else. It's a single source of truth. And I don't know, it took me probably two days to figure out how exactly I could make it work, but oh my god, wasn't I satisfied when it finally worked. So I'm that kind of guy. I wouldn't expect everybody else to do it that way, but I really think there is a lot of value to be gained from having a smooth and stable and fast CI/CD pipeline. We can go into more of all of the sciencey things there later in the episode, because there's actually... organizations benefit a lot more from investing into CI/CD than they might initially think. But for now, maybe let's focus on the best practices. And one of the best practices there is caching. Caching is just the lowest hanging fruit you can use to get your CI/CD pipeline as fast as possible. And in that case, for example, for feature branches of a pull request, I think it's acceptable that the very first build is somewhat slower.
Like maybe not 10 minutes slower, but if it takes, I don't know, five minutes, let's say that. But then, through smart use of caching, every subsequent build after that should go super fast. For example, for Knigge, the library I wrote, if all the caches are hit, the CI pipeline is like 30 seconds and everything's done.
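As a rough illustration of the caching Sascha relies on here, in GitHub Actions terms (the paths and key layout are assumptions): keying the cache on mix.lock means the first build after a dependency change is slow, and every build after that restores deps and _build in seconds.

```yaml
# Sketch: restore deps/_build keyed on the lockfile; falls back to the newest partial match
- uses: actions/cache@v3
  with:
    path: |
      deps
      _build
    key: ${{ runner.os }}-mix-${{ hashFiles('mix.lock') }}
    restore-keys: |
      ${{ runner.os }}-mix-
```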
 
Allen_Wyma:
Now I'm kind of curious too, like, yeah, you're talking about your versioning, right? Something I was investigating for a project was: how can we easily deploy and mark a version as X, you know, like 1.2.3 or whatever, right? Because for me, what I would like to do is not only, of course, update the file or whatever you want to do for the version of your Elixir application, but also to tag it too, because it'd be good if you can just go to that tag and do a git checkout to get that version. Because if you want to track down a bug, you want that specific version, right? How do you handle something like this in CI/CD? Because I've seen some people where, when they push a tag, the CI then does this stuff, right? Or I've also seen people saying, okay, if I bump this version, or if I merge this in there, I want the CI to actually make a commit for me and actually tag it for me.
 
Sascha_Wolf:
No, so what exactly is your question right now? I'm not sure I'm following on the question.
 
Allen_Wyma:
I'm curious, like, you know, how would you handle like versioning your system within CI CD?
 
Sascha_Wolf:
Okay.
 
Allen_Wyma:
Yeah, and like, do you have the system tag it? Or do you run it, like... because I'm sure you must do git tagging for versions
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
right when you release them. So
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
are you tagging it yourself? Are you having the CI do that?
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
Or,
 
Sascha_Wolf:
okay,
 
Allen_Wyma:
you
 
Sascha_Wolf:
I
 
Allen_Wyma:
know,
 
Sascha_Wolf:
get it now. This is just details on how exactly I do it for these projects. I mean, your mileage may vary, right? Let's say I've seen systems where the CI picks up when you push a tag and then does things; so somebody is tagging manually, basically, and then pushing that. What I've now been doing, and, to be honest, enjoy more, is I draft the release in GitHub. So in GitHub, create new release, then you can either select a tag or create a new one. I always create a new one. I give it a version number, and that then is used as the version number. So in this particular example, the pipeline gets triggered by a published release; it's a thing you can do in GitHub. And then this pipeline fetches from the GitHub release the tag version, basically the tag name, and that's just an environment variable in this case. And it writes it into a file on disk. And then I use that file, publish it also to Hex, blah, blah, blah, as the source of truth. So I do it manually in the GitHub interface, basically, which for me works in this case, because I also always want to massage the release notes a little bit, right? Like, yeah, I copy-paste links to pull requests which have been merged, so on and so forth. Basically, I'm using the changelog file format for that. But I also might want to include a somewhat human-readable message, which is like, hey, this release does X, Y and Z, right? Maybe take a look at that documentation over there, so on and so forth. And in that case I can also decide: is this a semantic minor version, is it a semantic major version, is it a semantic patch version, right? I can make the decision while I'm writing the release notes. And that is, again, where my whole desire was born to have the source of truth, because before that, I had to remember to tag the version on GitHub, but also to change the version in mix, and then to push that, and I regularly forgot one of the two. So most of the time I tagged the version, but forgot to update the version in the mix file, and then I was like, oh, okay, actually I now need to update the tag again, because I also need to change the version in the mix file, and now I can create a release. I just found it super annoying, which is why at some point I came up with this whole workflow of saying, hey, I put it in a version file, and then I actually fill this version file in my continuous deployment step from the tag version. Yeah, but this is me specifically. And I think one thing you can learn from that, regardless of how you do it: if you push a tag, if you do it from your local machine, if you do it through GitHub releases, if you do it through, I don't know, increasing a version number in your software project, regardless of where exactly that version lives, you benefit a whole lot if you have only one source of truth. So whether you have your CI/CD system pick up a version from a Git tag, or pick it up from a configuration file, or from, I don't know, somewhere else, you benefit a lot when that version is the only version which is true, and then wherever else it needs to go, it basically gets fetched from there and put there. So that is, I think, the learning you can take away, and it also goes into this whole notion of making CI/CD as smooth and frictionless as possible. Does that answer your question?
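A sketch of the mechanism Sascha describes, assuming a workflow triggered by a published GitHub release whose tag name carries the version. The file name, job layout, and package name are illustrative:

```yaml
# .github/workflows/release.yml (sketch)
name: Release
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: erlef/setup-beam@v1
        with:
          elixir-version: "1.14"
          otp-version: "25"
      # the tag name (e.g. v1.2.3) is the single source of truth; strip the "v" and persist it
      - run: echo "${GITHUB_REF_NAME#v}" > VERSION
      - run: mix deps.get
      - run: mix hex.publish --yes
        env:
          HEX_API_KEY: ${{ secrets.HEX_API_KEY }}
```

On the mix side, reading the version back from that file could look like this (a committed placeholder VERSION file is assumed so local builds still work):

```elixir
# mix.exs (sketch): the VERSION file is the only place the version number lives
defmodule MyLib.MixProject do
  use Mix.Project

  @version "VERSION" |> File.read!() |> String.trim()

  def project do
    [
      app: :my_lib,
      version: @version,
      elixir: "~> 1.12",
      # ship the VERSION file with the package so it still compiles from Hex
      package: [files: ~w(lib mix.exs VERSION)]
    ]
  end
end
```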
 
Allen_Wyma:
Yeah. Okay. Yeah. I just wanted to hear your method, see how you do it, and whether you had any previous experience working with the other different ways and how they worked for you.
 
Sascha_Wolf:
Yeah. And I mean, at my previous company, at some point things actually came together kind of nicely. We were tagging versions on the main branch; at that point it was the master branch, but we renamed it to main at some point. And those were then the versions which got deployed to production. So anything which got merged to main got deployed on staging, and anything which then got tagged on main got deployed to production. So that's how the system used to work there. The thing about versioning in general is: if you don't have a project which needs to be consumed by somebody else, I don't see a whole host of reasons to use semantic versioning. And I think that company actually used a date stamp or something. I don't remember the details, but we did not use semantic versioning. We did use semantic versioning in the beginning, and at some point we were like, there's not much value to be gained actually, because, I mean, the whole system needs to work together as a whole, and only we are working with it. Nobody else is consuming it. So why not use something which is more easily generated by a machine? But again, whatever you decide to use as a version, have one source of truth. That is definitely, I think, a big learning for me personally, at least.
 
Allen_Wyma:
Yeah, what I did for that project is: I capture the date in UTC, obviously, the version number from the mix file, and the git commit hash, the short one. And I try to show all three of those on the UI, just so I have an idea about kind of where we are.
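A hypothetical Elixir sketch of collecting that build info at compile time; the module and app names are made up, and it assumes the .git directory is present when CI compiles the release:

```elixir
# Sketch: capture build date and short git hash at compile time for display in the UI
defmodule MyApp.BuildInfo do
  @build_date DateTime.utc_now() |> DateTime.to_date() |> Date.to_iso8601()
  @git_sha (case System.cmd("git", ["rev-parse", "--short", "HEAD"]) do
              {sha, 0} -> String.trim(sha)
              _ -> "unknown"
            end)

  # e.g. "1.4.2 (a1b2c3d, built 2022-09-01)"
  def describe do
    vsn = :my_app |> Application.spec(:vsn) |> to_string()
    "#{vsn} (#{@git_sha}, built #{@build_date})"
  end
end
```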
 
Sascha_Wolf:
Yeah. Yeah. Yeah, that makes sense to me. Um, at the previous company we also had, at some point, some CLI tooling we built ourselves in Ruby, where you could say, okay, please create a new version for this, whatever microservice we were working on there. So, please create a new version for that microservice, and that CLI tooling then actually went ahead, fetched all the pull requests which were merged until then, created a GitHub release with a new tag, and put the description into the GitHub release, like, okay, those are the pull requests which have been merged. You could also give it a human-readable description string, and that also came together nicely. So I'm just saying this to showcase that there are multiple different ways to go about this, and there's no one true way, but it's super useful to have this one source of truth and this one interface you're using, right? For Knigge, it's the GitHub releases interface for me. At that place, we had this command line tool, which was the interface, but you didn't have to go to multiple places and fiddle with multiple things to do a release.
 
Allen_Wyma:
Okay. Yeah, we went off on a, on a side tangent, but you were in the middle of saying something at that time.
 
Sascha_Wolf:
No, I think it makes sense; I wouldn't say it's a side tangent. I mean, it goes into the whole CI/CD pipeline deal. And yeah, to come back to what we talked about earlier, the whole, okay, how can you make it fast and reliable? To revisit that, like I said, fast, like one minute, for me is a good rule of thumb. Another thing is reliability, right? Because, I'm not sure if, Allen, you ever had that experience, I do, where you have a CI pipeline which fails occasionally because of random reasons, right? And that is a pain in the butt, because especially when you have a CI system which is slow and fails occasionally, at that point the CI system loses all its value. Because if you have a pipeline which is like 20 minutes long, and sometimes it fails and sometimes it's successful, nobody's gonna rely on the CI pipeline. Then it's just gonna become a chore. Then you need to get it to pass. And then people do things like comment out tests, blah, blah, blah. And the one major culprit in my experience for systems which sometimes fail is actually flaky tests, right? Where you have tests which sometimes succeed and sometimes fail. But it can also be, I don't know, maybe in the setup of a CI/CD pipeline, where you need to spin up a Postgres container and you need to make sure that the Postgres container becomes ready before you start up your application, so it can connect to it. That can also be a source of friction sometimes, but in most cases it's flaky tests. And yeah, why am I saying all of this? It just highlights how these axes interact. For example, speed also matters here: if you have a CI system which sometimes fails but runs in 40 seconds, then yeah, it's annoying, but it can be dealt with; of course, you should still get rid of the flakiness. If it's the other way around, if you have a slow CI system but it's always successful, at least you can rely on that, right? But if you have a slow CI system which sometimes fails, that's the worst of both worlds. So speed and reliability are kind of the two axes, let's say, which constitute how valuable a CI system is. And then maybe the third would be transparency. So if your CI system fails, it should be easy to understand why, right? Only then is it really giving you value. And to come back to all of this: how does this relate to Elixir? For example, for speed, caching is the obvious one. And there are some gotchas in Elixir when you do caching. There are two things you might want to cache: there's the deps folder and there's the _build folder. The fun thing is, if you have only Elixir dependencies, it works like that. You can have a setup where you say, okay, please cache my deps folder, now do mix deps.get, blah, blah, blah. And now please cache my _build folder, now compile, right? That is what usually works. Except some Erlang dependencies actually put things inside of the deps folder. So if you compile an Erlang dependency, it might be that there are some artifacts of that compilation not in _build, but in deps; then you also need to cache those. Depending on your CI/CD system, that's not much of an issue.
For example, GitHub, when you say, hey, I want to cache this, what they do automatically is register a hook at the end of a CI run where they then do the caching. You don't need to do the caching manually; you only need to add it there once, and at that point it restores the cache, but it basically automatically puts things into the cache at the end. So this is not such a big deal with GitHub, but for other CI systems, for CircleCI for example, you might need to jump through more hoops if you want proper caching. That was a fun little journey to figure out, like, wait, I'm caching my build here, why is it recompiling those things? So yeah, fun little
 
Allen_Wyma:
Right.
 
Sascha_Wolf:
learning
 
Allen_Wyma:
But if
 
Sascha_Wolf:
there.
 
Allen_Wyma:
you, you already said cache deps and cache the build folder, but you're saying there's a problem
 
Sascha_Wolf:
Yep.
 
Allen_Wyma:
with the build folder or with
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
the
 
Sascha_Wolf:
the,
 
Allen_Wyma:
depths.
 
Sascha_Wolf:
but depending on how your caching works, after compilation you might also need to cache your deps folder again, because
 
Allen_Wyma:
Oh,
 
Sascha_Wolf:
some
 
Allen_Wyma:
I see.
 
Sascha_Wolf:
compilation artifacts might end up in there. If you have a CI system which automatically does the caching at the end of a run, then it's not that big of a deal, because then it's just going to get captured there. But if you have a CI system where you need to explicitly say, hey, cache now, then yeah, you might have to jump through some more hoops.
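For a system with explicit cache steps, CircleCI-style, the gotcha Sascha describes boils down to saving the cache only after compilation, so any artifacts Erlang deps wrote into deps/ are captured too. A rough sketch with made-up cache keys:

```yaml
# Sketch of a CircleCI job fragment with explicit save/restore
steps:
  - checkout
  - restore_cache:
      keys:
        - v1-mix-{{ checksum "mix.lock" }}
        - v1-mix-
  - run: mix deps.get
  - run: mix compile
  # save AFTER compile: some Erlang deps leave build artifacts in deps/, not _build/
  - save_cache:
      key: v1-mix-{{ checksum "mix.lock" }}
      paths:
        - deps
        - _build
```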
 
Allen_Wyma:
Got it. What about, what about cache busting? I mean, do you ever need to do something for that?
 
Sascha_Wolf:
Yeah, but only ever for Dialyzer. So maybe to come back to earlier, right? Because Dialyzer is the only thing I feel it's impossible to get fast. I explicitly always put Dialyzer inside of a separate CI runner. So I have tests in one runner and I have Dialyzer in another runner. Dialyzer is always the thing which is super slow; for Knigge, it's like, I think, eight minutes if you run it from scratch, and there's not even much type spec in there. But yeah, then you need to look into, okay, how do you cache the PLTs. By default, the PLTs, I think, are added to the build folder. You might actually want to put them in a different folder, because they don't change that often. So you actually benefit from putting them in a different folder and having more generous cache keys. But then, and I never could figure out why exactly, to be honest, Dialyzer might say, hey, the PLTs, I don't like those anymore. And the only solution for me at that point was to say, okay, I'm gonna have to bust this cache. And for most CI/CD systems, in my experience, that just tends to mean changing the cache key. So what I've ended up doing for all my CI/CD systems: I always have a cache key prefix, so it's like version one, right? V1. I tend to put those in an environment variable and just add those to the cache key. And then, when I need to, I increment them. That's how I do it. That's how you usually, in my experience, bust caches in CI/CD systems. CircleCI, for example, definitely doesn't have a feature to say, I want to explicitly bust this one cache. And I don't think GitHub does that.
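Putting both ideas together, a sketch of moving the PLTs into their own folder via dialyxir's options and busting the cache with an incrementable prefix. The folder, key names, and v1 value are assumptions:

```elixir
# mix.exs (sketch): keep PLTs in a dedicated, cache-friendly folder
def project do
  [
    # ...
    dialyzer: [
      plt_core_path: "priv/plts",
      plt_file: {:no_warn, "priv/plts/dialyzer.plt"}
    ]
  ]
end
```

```yaml
# Sketch: separate Dialyzer runner with a manually bumpable cache prefix
env:
  CACHE_VERSION: v1 # increment this when Dialyzer stops accepting the cached PLTs
steps:
  - uses: actions/cache@v3
    with:
      path: priv/plts
      key: ${{ env.CACHE_VERSION }}-plt-${{ runner.os }}-${{ hashFiles('mix.lock') }}
  - run: mix dialyzer
```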
 
Allen_Wyma:
I guess it depends on where you're storing your cache, right? So for GitLab, I know initially I was storing the cache on S3. And so what I would do is just go into the S3 bucket and delete
 
Sascha_Wolf:
Fair
 
Allen_Wyma:
whatever
 
Sascha_Wolf:
enough.
 
Allen_Wyma:
I wanted to get rid of.
 
Sascha_Wolf:
Yeah, I mean, I always use the built-in cache functionality from, for example, GitHub, and I don't even know what they're doing with it. It's a black box. But there's also no angle there to say, I want to delete this cache. But yeah, I think this is another best practice: having explicit cache key prefixes you can just increment is a surefire way to bust your caches. It's very simple. And that is, I guess, another best practice you can use, especially with Dialyzer, because at some point Dialyzer is going to tell you, hey, I don't like this PLT over there anymore. I never was able to figure out why. Maybe a listener knows and can reach out to me on Twitter and explain it to me, because I would actually be interested to understand this, but I never dug into it.
 
Allen_Wyma:
It's weird that, like, you cannot just... It'd be nice if you could just somehow include a flag that says: if you can't use these PLTs, just destroy them and recreate them.
 
Sascha_Wolf:
Yeah, it's
 
Allen_Wyma:
That would
 
Sascha_Wolf:
a...
 
Allen_Wyma:
be nice.
 
Sascha_Wolf:
I guess that makes a lot of sense. I've never thought about it, but yeah. Yeah, I mean, maybe for everybody who's not familiar with what those PLTs are, because we have talked about them for the last five minutes, but I'm not sure we defined them. I also don't
 
Allen_Wyma:
I don't
 
Sascha_Wolf:
know
 
Allen_Wyma:
know
 
Sascha_Wolf:
what
 
Allen_Wyma:
exactly
 
Sascha_Wolf:
the...
 
Allen_Wyma:
what they do other than they're like a cache of tracing
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
of something.
 
Sascha_Wolf:
I also don't know precisely. PLT stands for Persistent Lookup Table. And what it exactly does: it's basically the cached version of the inferred types from the standard library. So basically, what Dialyzer needs to do to type-check your project is get all the type definitions for all the functions you might potentially use, right? Which includes the standard library. And not everything in the standard library has type specs, because what Dialyzer does, if there are type specs, is just use the type specs. And maybe, maybe not, I'm not sure if that's also true for the standard library, but at least for your user-end code it definitely does both: it looks at type specs, but it also infers the types from the code, to tell you if type specs are wrong. Otherwise, it couldn't tell you that. But yeah, this whole process of inferring the types from the standard library takes a very long time. I think if you run it raw on a MacBook, even an M1 MacBook, it can easily take like five minutes. It really takes a while. And if you had to do that every single time you ran Dialyzer, you would go crazy. Or, to quote the README, you would stab yourself in the eye with a fork. This is literally what's written in the README of Dialyxir. So that's what the PLTs are. They're this cached version of the inferred types. So you really want to cache those if you have Dialyzer inside of a CI/CD system and you don't want it to run super slow. But yeah, it feels brittle. That's my experience with Dialyzer there. Which is also why I don't deal with Dialyzer anymore in non-library scenarios, where nobody else needs to consume the code. So, Allen, do you have some secret sauce for how you do CI/CD systems? I mean, we talked about caching now, we talked about some pitfalls. Are there any learnings you had when setting up CI/CD pipelines for Elixir projects? Or rather, how do you usually set them up? I mean, we talked about the recording with the guest a while ago, right, but are there some important milestones you always aim for?
 
Allen_Wyma:
Yeah, I mean, I try to aim for a high test coverage, maybe not 100%, but definitely as high as possible. So I think running the cover definitely helps. Yeah, building a container and kicking it out. I mean, I think those are the big ones. There's no, you know... when I start a project, I just go to another project and copy-paste the YAML file and change a
 
Sascha_Wolf:
Yep,
 
Allen_Wyma:
couple of things
 
Sascha_Wolf:
yep
 
Allen_Wyma:
here and there. That's about it. I think everybody does something like that. So,
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
cause you run about
 
Sascha_Wolf:
yeah.
 
Allen_Wyma:
the same stuff. I mean, the only thing is, I do have his list written down and I do want to bring in most of it. The one thing I still can't bring myself to do that he does is running up and down for his migration files, which I think is really excessive. Um, but I understand it. If you ever actually run rollbacks on your production database, it makes sense. But me, I've never run one. How about you? Have you ever run a rollback before?
 
Sascha_Wolf:
I can't remember that I've ever done that. No. I've
 
Allen_Wyma:
But then again, he...
 
Sascha_Wolf:
religiously written rollback code, I mean I use it locally. That I've done, but I don't think I've ever run it on a production system.
 
Allen_Wyma:
I mean, for me, yeah, locally, I've done that. But some things you just cannot roll back, right?
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
But yeah, I mean, I think it's good to do. But sometimes, if you're really crazy about it, you may go nuts, because it can get very pedantic when you do it.
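For reference, the check they are describing is straightforward to wire up with standard Ecto tasks; the job wiring and the assumption of a Postgres service in CI are illustrative:

```yaml
# Sketch: verify every migration can roll both ways
- run: mix ecto.create
- run: mix ecto.migrate        # all the way up
- run: mix ecto.rollback --all # all the way back down
- run: mix ecto.migrate        # and up again, catching irreversible migrations early
```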
 
Sascha_Wolf:
Yeah, agreed, agreed. And one recent learning for me was, and I've not yet incorporated it anywhere: usually when you write code, you tend to focus on the happy path. You, of course, still consider the edge cases and what might go wrong, but in general, you want to make the happy path work. And that is the path you want to optimize. In CI/CD pipelines, I've come to the realization that you want to optimize for the, what is it, unhappy path? I don't know, the sad path. And the happy path should just be a side product of that. And what do I mean by that? What I mean is: when the CI/CD system runs through and goes green, great, right? The thing is deployed, great. But what is way more interesting is when the thing does not go green, right? And that is what I mentioned earlier when I said the third dimension is transparency. And that is the thing you want to optimize for. So if your CI/CD pipeline actually fails, you want to make it as easy as possible to understand why. Why did it fail? Was it a Credo check that failed? Was it a test that failed? If a test failed, which test failed? Why did it fail? All of those are things you want to make as easily accessible as possible. Modern CI/CD systems do a whole lot of lifting for that. Like GitHub, for example: if a thing fails, it makes it very easy to access the error part of the logs, right? You don't have to sift through all the logs to get to that part. But in general, that is what you want to optimize for. And that then goes beyond the immediate CI needs; it goes to the CD needs. Because, fun little goose chase, when was it, sometime last week: we got an alert from Datadog which said, hey, you have elevated restart rates in your cluster. And we were like, wait, what? Okay, well, what's happening? We don't see any user-facing issues. So let's figure it out; maybe it's gonna calm down on its own. But it didn't. So we went on a little goose chase there. And at some point it turned out: we have this feature branch deployment for our web frontend. So there, every feature branch gets its own deployment, and you can test it, and it gets a little subdomain so you can access it. Not yet there on every project, but for the web frontend that already works. And it turns out there was one feature branch deployment which was just broken, because they had one component, it's a Vue project, one component which did not render. And the health check, yeah, the health check of the project was just accessing the root of the webpage. And that particular component was part of the root, so the root page didn't render, returned a non-200 response. So Kubernetes was like, wow, this thing is not healthy, let me restart it. Which obviously didn't fix the issue. So that then went into a restart back-off loop. That went on overnight, because the commit was done late evening and everybody went home. So overnight the thing restarted hundreds of times. And then at some point our monitoring picked it up, like, hey, you have elevated restart rates. And that is where I took this learning away: we actually want to make this obvious. It should be obvious when something like this happens. It should not have to involve me going: okay, I have this alert. Okay, what is restarting? Okay, let's look it up.
Okay, this service over there is restarting. Why is it restarting? Oh, okay, there's this one pull request, and, oh, okay, that's the CI/CD system. That kind of failed, but it still went into deployment because the feature branch, blah, blah, blah, right? Instead, it should have been surfaced in the CI/CD pipeline, maybe with a smoke test or whatever, where you just push it, it kicks off the CI/CD system, and then this failure bubbles up immediately and becomes visible, maybe through a GitHub comment or whatever. I don't really care what the exact mechanic is, but it should be obvious and easy to see that, hey, this thing is broken because of that reason. And that means you need to optimize for the sad path, which is weird, because it's so counter to how we usually do software development, at least in my experience. Yeah, sorry, I went on a bit of a rant and a ramble there, but do you have any thoughts on that, Allen?
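One way to avoid the exact failure mode from that story is a health check that does not depend on a page rendering. A minimal sketch of a dedicated endpoint as a plug; the path and module names are made up:

```elixir
# Sketch: a tiny health endpoint that succeeds even if the root page is broken
defmodule MyAppWeb.HealthPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(%Plug.Conn{request_path: "/healthz"} = conn, _opts) do
    conn
    |> send_resp(200, "ok")
    |> halt()
  end

  def call(conn, _opts), do: conn
end
```

Plugged early in the endpoint, before the router, this gives Kubernetes a probe target that reflects "the app booted" rather than "the homepage renders", which would have kept that one broken component from triggering a restart loop.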
 
Allen_Wyma:
No, that makes sense. Yeah, if you can somehow kick out the bad stuff early, right? I mean, what I've done before is run checks and stuff, but that's something I'm also thinking about too, because with GitLab they always say that you should build, then test, then deploy. That would make sense if you're actually testing the image, right? Which is something that maybe you guys could have used, right? For your thing.
 
Sascha_Wolf:
Yeah, yeah, yeah,
 
Allen_Wyma:
So
 
Sascha_Wolf:
like some kind of smoke testing or something that could have been very
 
Allen_Wyma:
yeah.
 
Sascha_Wolf:
useful.
 
Allen_Wyma:
So that's something that I'm interested to take a look at, but, you know, that would take some time. Like, I don't know about you, but most of my projects have a lot of configuration with environment variables sometimes, so that would take some time to set up.
 
Sascha_Wolf:
Yeah, it's definitely non-trivial to make that work. I've also never done it, but I definitely have seen the need for some kind of smoke testing setup. And for everybody who's not familiar with the term: smoke testing is basically, I'm not sure where the term exactly comes from, but I think it's basically you turn it on and you see if it starts to smoke. So if it starts at all and doesn't just blow up immediately. And in this particular case, for example, you could make the argument that anything we deploy to production, or staging in this case, could benefit from one step in the CI/CD pipeline where you take the finished image, you spin up a container with all its dependencies, and then you hit the health and the ready endpoints, right? And just say, okay, they should become green at some point. For ready, I think you can make the argument that it's not that much value, but it's the same thing again: if you have a pod deployed inside of your cluster, and you have a ready endpoint, what the ready endpoint does is check if the pod or the container is ready to receive traffic, right? And then you can have a pod in there which is healthy, but never ready, because something is not working as it should. Again, that should be obvious. You don't want to have to dig inside of a container and go: wait, this thing is not ready, why is this thing not ready? And I've been there. I've deployed things, and the CI pipeline was green, it went through, it deployed on staging, but the change was not there. And I was like, huh? What's happening? Why is the change not there? Why is the page still looking the same? And then I look into the cluster and realize, oh, the pod is not ready because of reasons, but those reasons, again, were not obvious. So yeah, if you have the luxury of maybe a greenfield project, or maybe the luxury of a container which easily runs in a CI/CD setting, then I would definitely consider having some kind of smoke tests for the deployment process, where you say, okay, this thing should become green at some point. I'm not sure how easily done it is, though, because I've never done it. And I mean, depending on the system, it might take a good 10, 20, 30 seconds until it really becomes ready to receive traffic. And doing that inside of a CI/CD pipeline, yeah, sounds like work. Let's say that.
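A rough sketch of such a smoke-test step: start the freshly built image, then poll a health endpoint until it goes green or a timeout hits. The image variable, port, and endpoint path are assumptions:

```yaml
# Sketch: smoke test the built image before promoting it
- run: docker run -d --name smoke -p 4000:4000 "$IMAGE_TAG"
- run: |
    # poll for up to 60 seconds before declaring the image broken
    for i in $(seq 1 30); do
      curl -fsS http://localhost:4000/healthz && exit 0
      sleep 2
    done
    docker logs smoke # surface the failure reason right in the CI output
    exit 1
```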
 
Allen_Wyma:
Yeah, I don't know. Like, do you have any opinion on, when you run your quote-unquote smoke tests, actually hitting third-party services for some things?
 
Sascha_Wolf:
No. No, no, not that I can think of, no.
 
Allen_Wyma:
No, I haven't. But I had a manager that was doing it before in a previous company. And I thought that was super weird. Like, every day he'd run his tests and they would create a project in our internal GitHub and then close it. And I'm like, well, that seems weird. And this is gonna, you know, make that database ID keep going up every single day. And I just felt that's not something you should be doing. I mean, it's kind of like ExVCR, right? You run it once, and then you kind of record it and then just keep going with it, right? That's the way I kind of see that you can do it.
 
Sascha_Wolf:
Yeah, I think inside of his CI/CD project pipeline, maybe not, but I think there is value to be gained from testing some things for real, like from production. I listened to a talk a while ago where they made the argument that for your business-critical path, the one which literally makes you money, right, you might want to test that for real. And if that means you get a package delivered to your fucking office every day, then so be it. Because that is the thing which, if you have an e-commerce business, that is the thing which makes you money. So better make sure that that thing works, right? Because if, for example, you say, I'm using a test credit card or anything which doesn't really do the purchase, then there's still a part of the path which is not tested. You can make
 
Allen_Wyma:
I
 
Sascha_Wolf:
of
 
Allen_Wyma:
guess
 
Sascha_Wolf:
that as
 
Allen_Wyma:
that's,
 
Sascha_Wolf:
you will, but...
 
Allen_Wyma:
I guess that's a, that's a good point, right? Like if you really do order something from Amazon every day and Amazon doesn't actually run through that path. I mean, the amount of money they would lose in just like a half hour
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
of it not working is way worse than, what, buying like a dollar pencil or something.
 
Sascha_Wolf:
Yeah, but I would not necessarily expect that to happen on a CI/CD path. That would be a thing outside of that. Like, maybe, I don't know, some kind of active monitoring which regularly does that every day. Because also, the feedback loops get very big there, right? Depending on what exactly your core business is, if it really is e-commerce, the feedback loop can easily be a few days, because you go for the thing, you order it, and then you expect the package to be shipped. And if you need it to perform, then maybe order, I don't know, something funny, like a six-pack of beer. I don't care. But yeah, I would see it outside of the CI/CD pipeline, but I mean, I'm happy to be challenged on that. I don't have a strong... Okay,
 
Allen_Wyma:
Mm-hmm.
 
Sascha_Wolf:
then maybe let's come back to something I said earlier, right? Like, the sciencey things. I don't have any studies to point to right here, but I do know that this is something the whole DevOps movement has been very big on. So a lot of the things I just laid out have been influenced by things I learned from the DevOps movement. And I don't mean DevOps engineers and that kind of thing, but really the core idea of having dev and operations move closer together. And there are case studies which showcase, maybe kind of unintuitively... let me phrase it differently: there are case studies which give the indication that it's a good idea to throw your best engineers not into core product work, but into dev tooling, and especially CI/CD pipelines. Which is kind of weird, because CI/CD is pretty much nobody's core business, unless you're CircleCI, right? But it's the tool which software developers interact with on a daily, and potentially hourly, and potentially minutely basis. So if that thing runs smooth and reliable and stable and gives you all the information you need, it gets out of your way, right? It's good tools. It's basically the difference between a carpenter buying the cheap tools from, I don't know, the dollar store, and a carpenter buying the good tools from whoever makes good carpentry tools. And it's the same here. So you might think: oh my god, our teams have a velocity which is too low, in our opinion; maybe let's do less of this CI/CD work then, let's do more product work. But that actually does quite the opposite. I mean, it might work for a short while, but then at some point software rot sets in, the CI/CD pipelines break because of outdated versions, blah, blah, blah, and then your velocity is actually going to drop. That is the thing which has been confirmed again and again and again, which is something the whole DevOps movement, like I said, is very big on. You really want to focus on your tools and make your tools as smooth and reliable as possible. Make the feedback loops as small as possible, which circles back to the whole thing about why you want your CI pipeline to run fast. And what did I want to say? I had one destination I wanted to go to... no, I forgot it. But yeah, the whole DevOps movement is very big on those kinds of areas. And they have some sciencey case studies and studies which show that, yeah, there's truth to it. Organizations which have a high velocity also tend to invest some serious manpower into their CI/CD pipelines, and into platforms to run software on, and so on and so forth. There is a clear correlation between those. I mean, correlation doesn't mean causation, but still.
 
Allen_Wyma:
Yeah, I have a lot to say about this one. When I come to these kinds of companies, I just kind of talk to the engineers who are already there and ask them: how come we don't do automated testing, CI/CD, etc.? Mostly what I hear back is: oh, management doesn't want us to do that; they want us to focus on features only. So we just kind
 
Sascha_Wolf:
Exactly.
 
Allen_Wyma:
of keep doing that. And then when you have a talk with management, you know, sometimes you have this thing where people say, no, no, no, I didn't say that. But then you kind of know that they did say it, because of the way they reply, and also other people kind of confirm the story. You find out after a while who's telling what. What I usually find out is that management has no idea about this stuff, because they're not the experts; they leave it to us, you know, geeks, to figure out, and they do their own thing. And they may have mistaken some words, like, you know, "we've got to quickly get this thing done" means we shouldn't be doing best practices, which is interesting.
 
Sascha_Wolf:
Yeah, I think part of where this whole confusion stems from is: if you, for example, say, hey, we should focus on features and not do all of these things, right, that doesn't mean that these things don't happen. Because, I mean, what is CI/CD? CI/CD is the automation of steps which, at least in big parts, need to happen anyway. Especially CD, continuous deployment, right? Deployment needs to happen anyway. At some point you need to take the software you wrote and deploy it somewhere to make it accessible to the world. So if you don't invest anything into that, that doesn't mean deployment vanishes; it just means deployment becomes manual and brittle. That's the thing which happens. So if you look at it from that perspective, you could also rephrase it and say: okay, you don't want to invest into things which are not your core business, but you want to enable the teams to focus on your core business by making everything else which needs to happen anyway as smooth as possible, right? Like, testing and all those other things are things we do because we want short feedback loops, right? I don't write a test because I think testing in and of itself serves some purpose. Testing in and of itself doesn't serve any purpose. I could also test by writing things, putting them in production, and seeing if it breaks. That's also a way to check if my code does the thing I think it does. But the feedback loop is a lot bigger, and also it impacts real customers, right? So that's why we do testing. All of those things are just tools we are using to, at some point, actually deliver value for the core business. And if you look at it from that angle, not investing into that and not making it as smooth as possible becomes, yeah, just dumb, to be honest. It's just dumb.
 
Allen_Wyma:
Yeah. And I think it also takes management a while to learn. There's basically two ways to learn, right? One is that, you know, your mom tells you, don't do that, and you just don't do it, and you kind of know not to do it. And the other one is they tell you, don't do it, and then you do it, and then you find out why you shouldn't do it. So it's kind of like positive and negative ways to learn, right? And I think mostly, once you run into it the negative way, then it's like, okay, let's try the other way and see if this works, right? So add in a CI server to make sure that
 
Sascha_Wolf:
Yep.
 
Allen_Wyma:
Once your deployments become automated and, like you said, less brittle, once that happens, it's like, wow, I don't know what life was like before. And then you start kind of adding in stuff and saying, okay, let's try this, let's try that.
 
Sascha_Wolf:
Yeah, at that point, it's the usual story of automation, right? I mean, automation often starts with having a manual runbook, where you say: you need to do that, you need to do that, you need to do that. Then at some point the runbook is so complete that you can actually take it and script it. And I would bet money that if you somehow could wave a magic wand and get rid of CI/CD as a whole concept, right, and everybody forgets about it, I would bet money that the practice, in the way we do it, would emerge again over time, because it's just the natural thing of automating things. And I mean, automation, at the end of the day, is what software is all about. So writing software to make software writing easier seems like a no-brainer. And that is kind of what CI/CD is about, right? Writing software to make the software development cycle smoother and automated. We are not yet there to automate code writing itself, but, I mean, at the end of the day, high-level languages are nowhere near what machine code looks like. So there's also some kind of automation in there.
 
Allen_Wyma:
Now
 
Sascha_Wolf:
It's turtles all the way down.
 
Allen_Wyma:
I have a question for you. Now, if you don't continuously deploy to production, does it really, can you really say that you're actually doing CD necessarily?
 
Sascha_Wolf:
What would you do then, if you don't continuously deploy to production? Because, I mean, at some point you need to deploy to production. So what is the alternative?
 
Allen_Wyma:
What I'm saying is like this, right? So for a lot of projects, we kind of say, okay, for this week we're going to do these features, right? And we do the features, we continuously deploy the features to a staging environment, and then, once we clear it up with business, when we should release this new version, we just merge whatever's on staging onto, you know, production. And, I mean, I wouldn't really consider that necessarily CD, because you're not continuously delivering features to production. Because sometimes you have to line these things up, right? You can't just update, like, APIs for a mobile app; you have to sync these kinds of things up. That's one thing. And sometimes, if you're going to release a feature for, let's say, an e-commerce site, and it has to come out on Black Friday for whatever reason, right, then you can't really deploy stuff so early necessarily, unless you
 
Sascha_Wolf:
No.
 
Allen_Wyma:
add in like protections and stuff. I mean, you could
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
do it, but there's, there could be constraints where you cannot necessarily deploy certain things. all the time.
 
Sascha_Wolf:
I think I have to answer this question with a classic "it depends", so to speak, because continuous delivery in and of itself is a tool. It's a tool to make feedback loops shorter. And at the end of the day, everything is about feedback loops. Even businesses operate on feedback loops, right? You do a change in a business, you want to see what kind of effect that has, and whether or not you want to continue with it, right? Fail fast is a principle not only from software engineering, but also from business development. I don't know, I'm not a business person, but I know that much, right? And continuous delivery in and of itself is a tool to make feedback loops shorter. So yeah, if you don't continuously deliver changes to production, then your feedback loop is gonna be bigger, because feedback from real customers will only come in at the point where you deliver to production. That might be problematic, it might not be problematic; that really depends on your concrete use case. So I would not dogmatically say you're not doing continuous delivery because you don't deliver every change to production immediately. But depending on your core business, that could have negative effects. Or not. I don't know, I don't own your business, you know? But, for example, at the end of the day, you want to do continuous delivery to get feedback as early as possible. And that feedback might be in the form of errors, right? Or that feedback might be in the form of customers having a higher conversion rate, I don't know, that kind of thing, right? And then you end up potentially investing less effort into paths which are not as successful. That might be for software, where you say: okay, this thing doesn't work out, it doesn't scale, or it errors out in ways which are unexpected, so this path is unsuccessful, and we try out something else. Or it might be in a bigger scope, where: okay, this new feature doesn't change user behavior in the way we expected it to, blah, blah, blah, right? But at the end of the day, it always is: I have this thing I want to achieve, and I do this change. Does this change work? And if you can get the answer to that in a short amount of time, you're always better off. I'm not sure
 
Allen_Wyma:
Okay.
 
Sascha_Wolf:
if that answered your question, because I can't...
 
Allen_Wyma:
No, no, no, it does. Right. And yeah, I guess that's just kind of how you want to play it. I mean, because if you do look at a lot of pieces of software, they're versioned, right? You don't just
 
Sascha_Wolf:
Hmm?
 
Allen_Wyma:
like wake up someday and then you have one new small feature necessarily. Some software is like that. A lot of the time it's like, you know, let's take iOS, right? You've got version 16, that one's coming around the corner. You don't just wake up and, you know, the patch release has got some new small thing added to it. You get a bunch of changes all at one time.
 
Sascha_Wolf:
Hmm.
 
Allen_Wyma:
So.
 
Sascha_Wolf:
But I think, I mean, I would have to look up what the release cadence is for Apple, but in general, I feel there is a tendency to have smaller releases more often.
 
Allen_Wyma:
Yeah, I feel that's what they're doing recently, since maybe 15 or 14. They're releasing like one to two times a month, or even every couple of months. Like, as
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
time goes on. I think initially it's like, we have one to two times a month, and then after some time it's like, what, maybe one every couple of months or something, depending on which vulnerability gets found in iOS this week. That's what was going on for quite some time.
 
Sascha_Wolf:
Yeah. Yeah. And I think that's a trend you can observe throughout the industry, right? I mean, Java is a good example, to be honest. I'm not a big Java fan, but everybody knows that. But in general, with the JVM, they used to have these big, big Java versions every few years, which changed a whole slew of things. And now they have, I'm not sure exactly how often, but multiple ones per year, I think, right? We had Java, what? I don't know, 18, something like that. And...
 
Allen_Wyma:
Something crazy like that, yeah.
 
Sascha_Wolf:
Yeah, probably higher. But then again, it's the same idea: release more often with less effort, and check if things work out as you expected them to. It's a trend which is observable throughout the industry. And I mean, at the end of the day, it makes sense. If you are a business, or if you have a software project, and you release once per year, right? And then maybe your release turns out to be shit. Like, it doesn't do what you expected it to. It happens. And if you have another business or another project which releases twice per year, and they have a release and the release turns out to be shit, they can already take corrective action before the other business has even released. They can potentially push out another version which already corrects the previous fuckup, and they're better off for that. They can react faster. There's even a fun thing: it's the same idea from the military, they have this OODA loop it's called, where you basically observe, orient, decide, and act. It boils down to: figure out what kind of situation you're in, deduce information from that, make a decision, and act on it. And if that loop is faster with your troops than with the enemy, you're better off, because you can react faster to a change in the combat situation. And at that point you get an advantage, because you can act faster. You can do things more quickly than your enemies, so you can outmaneuver them. Right? So the whole
 
Allen_Wyma:
Yeah.
 
Sascha_Wolf:
idea of having a feedback loop that is as short as possible, it's everywhere.
 
Allen_Wyma:
That's actually
 
Sascha_Wolf:
And.
 
Allen_Wyma:
what I hear is happening in Ukraine: the Ukrainian soldiers are following very much a Western style, where they have these mid-range commanders who are really close to the battle. But Russia has a very old style where it's totally top-down, like several layers
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
up.
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
And they're trying to make the feedback loop faster by putting them more into the combat zone, which is getting them in trouble, getting them killed. So yeah, I guess it's kind of a more modern-day military example: you have the Ukrainian troops with a Western style, very fast-acting people on the ground making on-the-spot choices, versus people who are getting late messages, not knowing what's actually going on, just kind of making choices, and how that actually works out.
 
Sascha_Wolf:
Yeah. But I think it highlights, again, this idea about having short feedback loops: it's everywhere. And if you have a shorter feedback loop,
 
Allen_Wyma:
Yeah.
 
Sascha_Wolf:
it's always better than having a longer one. Always. There's no reason not to have a shorter feedback loop if you can. You can make the argument about whether it's worth it, right? Whether the value you gain from shrinking your feedback loop is worth the investment you need to make; that's a discussion you can be having. But if I have two otherwise equal choices, and one has a shorter feedback loop, I'm always going to choose the shorter feedback loop.
 
Allen_Wyma:
Makes sense.
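
To put the shorter-feedback-loop idea into concrete pipeline terms, here is a minimal sketch of a "test every branch, deploy only the default branch" setup in GitLab CI. The Elixir image tag, the cache paths, and the deploy.sh script are illustrative assumptions, not something specified in the episode:

```yaml
# .gitlab-ci.yml — minimal sketch; image tag and deploy script are placeholders
stages:
  - test
  - deploy

test:
  stage: test
  image: elixir:1.14          # assumed image tag
  cache:
    key: $CI_COMMIT_REF_SLUG  # per-branch cache of deps and build artifacts
    paths:
      - deps/
      - _build/
  script:
    - mix local.hex --force
    - mix local.rebar --force
    - mix deps.get
    - mix test --cover        # runs on every push, with coverage

deploy:
  stage: deploy
  image: elixir:1.14
  script:
    - ./deploy.sh             # hypothetical deploy script
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # deploy only from the default branch
```

The point is only the shape: the test job gives every push a fast red/green answer, and the deploy job keeps the road to production short.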
 
Sascha_Wolf:
It's also the same with TDD; this is where TDD shines: you write a test, it's red, you write some code, it's green. That's like the shortest possible feedback loop. It's
 
Allen_Wyma:
Yeah.
 
Sascha_Wolf:
like... 20 seconds. One minute, I don't know. It depends on exactly how big your tests are, but... again, shorter is better.
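
For anyone who wants that loop in code, here is a minimal ExUnit sketch, runnable as a single script. The Price module and its discount function are made up purely for illustration:

```elixir
# red_green.exs — run with `elixir red_green.exs`; names are hypothetical
ExUnit.start()

# Step 2 (green): the smallest implementation that makes the test pass.
# Comment this module out and run the script to see step 1 (red) first.
defmodule Price do
  def discount(amount, rate), do: amount - amount * rate
end

# Step 1 (red): this test is written first and fails until Price.discount exists.
defmodule PriceTest do
  use ExUnit.Case, async: true

  test "applies a 10% discount" do
    assert Price.discount(100, 0.10) == 90
  end
end
```

Each pass through red and green is one `elixir red_green.exs` (or `mix test`) run, which is the seconds-long feedback loop being described.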
 
Allen_Wyma:
Yeah, that's the crazy part. Like, I see some of these guys; I've tried to introduce them to testing. They're aware of the benefits of it, but they're still stuck in the old days, where they write some new code, they manually set up the case, it doesn't work, then they have to figure out why it's not working. They write more code, and they manually set up the case again. I'm like, dude, you could just set up the case in a test. Maybe it takes you, I don't know, let's give you the benefit of the doubt, an extra one to three minutes of writing up the code, et cetera, right? But running that test is probably a thousand times faster than setting the damn thing up manually. Every single time. Like, I find it crazy: when I just write the test first, 90% of the time I finish everything, it's ready to go, it works out of the box. Sometimes it doesn't work, where there's an extra case somewhere, or I missed a field, something
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
strange out of the corner, right? I mean, your tests are only as good as the way you write them. But yeah, I mean, still, they could have done it a lot faster if they'd just written the test to begin with. Because what they're going to do, actually, once they get it working manually, is write the test anyway. I'm just telling you, it's, what is it called, delaying the inevitable, basically.
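
That manual setup can usually be written down once as an ExUnit setup callback, so every run rebuilds the case automatically. Another small sketch with invented names; in a real app the fixture would more likely insert rows through Ecto:

```elixir
# order_test.exs — run with `elixir order_test.exs`; names are invented
ExUnit.start()

defmodule Order do
  # Minimal implementation for the sketch: sum quantity * price per line item.
  def total(%{items: items}) do
    Enum.reduce(items, 0, fn %{qty: q, price: p}, acc -> acc + q * p end)
  end
end

defmodule OrderTest do
  use ExUnit.Case, async: true

  # The setup that used to be done by hand, encoded once and re-run per test.
  setup do
    order = %{items: [%{qty: 2, price: 3}, %{qty: 1, price: 4}]}
    %{order: order}
  end

  test "totals the line items", %{order: order} do
    assert Order.total(order) == 10
  end
end
```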
 
Sascha_Wolf:
Yeah, and that sounds like an objectively longer feedback loop than if you'd written the test earlier. Yeah. Okay, I feel like we can end it here, because we started with CI/CD, then we ended up in big-picture military engagements, and now we're back to tests. So we kind of did the whole loop. The podcast is a loop in and of itself. Isn't it beautiful?
 
Allen_Wyma:
Yeah, it's a good day so far.
 
Sascha_Wolf:
Okay, anything, any last words you would like to add, Alan, before we go to pics?
 
Allen_Wyma:
No, I think, like you said before: check out that episode if you're looking into your CI/CD. You don't have to do everything from there, but I think that one's a really exhaustive list, which I'm considering putting in at some point, whenever I have a moment. And I think today was really more theoretical, where that one was more practically based. So I think today's episode was all good by itself.
 
Sascha_Wolf:
Nice. Okay. Yeah. Then let us transition to picks. I'm just going to start off for a change. I have two picks; one is a bit controversial. I think I already picked this one, but it's a book called Effective DevOps. It's a really thick book. I also never finished it; I tend to do that with books. I start to read them, and at some point, when I feel like I got everything I need out of them, I drop them again. This was one of those. So I'm not sure how good the whole book is, but the parts I did read, I really enjoyed. And they go into what I just laid out: the ideas behind DevOps, the principles, why it's important, what kind of organizational challenges you might encounter, those kinds of things. Because I feel the whole term DevOps has become very muddy, in that people talk about Kubernetes as the DevOps tool, whatever, right? And about DevOps engineers. And there's a great quote I saw in a talk a while ago, which was: Kubernetes is to DevOps what Jira is to Agile. And depending on the person, you might think, yeah, okay, if I do Kubernetes, I'm doing DevOps, the same way that if we're doing Jira, we're Agile, right? And it's not like that. It might just be a tool you employ to get better at doing DevOps, but DevOps in and of itself is first and foremost a bunch of principles. And the book is pretty good at laying that out. And the other pick, again, is controversial: I'm going to pick a podcast which is not a Top End Devs podcast. The podcast is called The Idealcast, with Gene Kim. He's the author of The Phoenix Project, which is a book that's relatively popular in the DevOps space, from what I've seen. And I just enjoy the podcast very much, because he tends to focus on DevOps topics, but, same as Effective DevOps, more on the principles behind them. And he has guests there, interviews. And what he does in his episodes, which I find really great, is when they mention a concept, he interjects. He has little things cut in where he explains what that concept is, why it's important, where it comes from. So he regularly gives context to little things inside of a discussion by basically editing in little snippets. And I find it just very useful; I learned a lot while listening to that thing. So yeah, if I'm not here next week anymore, then you know that Chuck got to me and beat me because I recommended a different podcast on the topic of DevOps. But I think it's worth it for this one. And I'm just joking, by the way; Chuck is a cool guy.
 
Allen_Wyma:
So what was the name of the podcast again? I was trying to look it up.
 
Sascha_Wolf:
The Idealcast.
 
Allen_Wyma:
Okay. Yeah, because I think I read that book. It was called, what again? The Phoenix Project. I think
 
Sascha_Wolf:
The
 
Allen_Wyma:
I read
 
Sascha_Wolf:
Phoenix
 
Allen_Wyma:
that book
 
Sascha_Wolf:
Project,
 
Allen_Wyma:
too.
 
Sascha_Wolf:
yeah.
 
Allen_Wyma:
Isn't it like where they... it's a fictional story, right? Or is
 
Sascha_Wolf:
It's
 
Allen_Wyma:
it a real
 
Sascha_Wolf:
a fictional
 
Allen_Wyma:
story?
 
Sascha_Wolf:
story about a real project basically. Like
 
Allen_Wyma:
Yeah.
 
Sascha_Wolf:
basically, it's writing a fictional scenario about a software project going wrong, and then what they did to fix it again. It's kind of a phoenix, right, like rising from the ashes. So, Alan, what are your picks?
 
Allen_Wyma:
I just have one pick for today. It's the Ember Mug. I think you're also a pretty big coffee drinker, right? And so, like, I'm always so busy at work. I pour my coffee. I walk away. I come back to drink it. It's lukewarm. And that's just
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
not
 
Sascha_Wolf:
I get
 
Allen_Wyma:
cool,
 
Sascha_Wolf:
it.
 
Allen_Wyma:
right? Yeah.
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
So
 
Sascha_Wolf:
yeah, yeah.
 
Allen_Wyma:
you've been there before, right? So I get my exercise by walking over to the microwave, sticking it in there, pulling it out. Too hot to drink. You
 
Sascha_Wolf:
Yeah,
 
Allen_Wyma:
put it
 
Sascha_Wolf:
I
 
Allen_Wyma:
down
 
Sascha_Wolf:
know exactly.
 
Allen_Wyma:
and then you're in the feedback loop, right, which is not good for coffee. And so I got the Ember mug, which I think is perfect for anybody who's sitting at their desk for a while. I also got the Ember travel mug, but that one, I think, is not really, really good; it's okay. Like, it makes sense. It lasts for like three hours or whatever. If you're sitting at your desk for a long time, I think you guys should check out the Ember mug. Works great. The only thing you have to know is that it's not like a coffee warmer. Do you have one of those or something? Or no? I feel like a
 
Sascha_Wolf:
No.
 
Allen_Wyma:
lot of Germans own this. No,
 
Sascha_Wolf:
No.
 
Allen_Wyma:
I've talked to quite a few German people, and they all seem to like it and know about it. So the idea is that you pour in a hotter beverage than what you really want it to be, and it's going to let it drop down to the temperature that you like and keep it at that temperature.
 
Sascha_Wolf:
Interesting, okay.
 
Allen_Wyma:
Yeah, that's the idea. So if you try to use it as something where it's like you have cold water and try to heat it up to what you want, you're just going to kill the battery and it's gonna take a long time.
 
Sascha_Wolf:
Mm-hmm.
 
Allen_Wyma:
So that's just the wrong way to use it. Anyways, it's perfect for me. So that my decimal time drinking coffee, the the charger is basically like a coaster. So just plop your plop your cup on there, you don't need to move it. And it can keep your cup hot all day, like, as long as you want for forever. Because it's just sitting there charging
 
Sascha_Wolf:
That's
 
Allen_Wyma:
up.
 
Sascha_Wolf:
cool. Yeah, that sounds useful. I'm definitely going to check that
 
Allen_Wyma:
Yeah,
 
Sascha_Wolf:
out.
 
Allen_Wyma:
see, I told you, all the Germans I've talked to either love the idea or already have it, right? So, it's a little bit expensive. If you get the 12- or 14-ounce one, I think it's like 150 bucks US.
 
Sascha_Wolf:
Yooo, it's also 150 euros, good gosh.
 
Allen_Wyma:
Yeah, yeah, yeah. It's not cheap. They also have other ones that are similar, like other companies and stuff. But if you really like coffee, just think about, like you said, investment in tools, right?
 
Sascha_Wolf:
Hmm.
 
Allen_Wyma:
If you buy the best tool, you know, and you're a professional coffee drinker,
 
Sascha_Wolf:
Fascinating.
 
Allen_Wyma:
maybe it makes sense. I did see one professional coffee drinker on YouTube who really raved about it, because, you know, you can get it down to the exact degree that you want, right?
 
Sascha_Wolf:
Mm, mm, yeah, fair enough. I can see the appeal.
 
Allen_Wyma:
Exactly. Yeah. So I recommend it. I got both the travel mug and the coffee mug. I think the coffee mug is great for most people. The travel mug is kind of unique, because you only get three hours of battery, right? If you're going to be traveling, you're going to be doing more than three hours, so it just doesn't make sense. Anyways, yep. So that's my pick. Super happy with it. So I would love to let people know about this cup.
 
Sascha_Wolf:
Nice. I'm definitely going to check it out. Maybe I'm going to wish for it for Christmas. Seems like a perfect Christmas gift to be honest.
 
Allen_Wyma:
That's all
 
Sascha_Wolf:
Okay.
 
Allen_Wyma:
you're going to
 
Sascha_Wolf:
Yeah.
 
Allen_Wyma:
get.
 
Sascha_Wolf:
Okay. Then thanks for listening, peeps. I just want to add one little thing: if you want to chat about any of these CI/CD, DevOps, feedback-loopy kinds of topics, it's, as you might have heard, something I find very interesting. So feel free to reach out to me on Twitter; Wolf4Earth is my handle there. I'm happy to have a chat about that. Seriously, it's the thing I could talk about for hours. So with that being said, thank you for tuning in, and come back when we have another episode of Elixir Mix. Bye bye.