Optimizing the Elixir CI Pipeline with Szymon Soppa - EMx 188

Elixir Mix

Join our weekly discussion of the popular functional language built on top of the Erlang virtual machine. Each week, we discuss the world of Elixir, Phoenix, OTP, and the BEAM.

Optimizing the Elixir CI Pipeline with Szymon Soppa - EMx 188

Published: Sep 14, 2022
Duration: 48 minutes

Show Notes

With day-to-day development, it is vital to ensure our workflows are optimized and that developer time is used efficiently. Today on the show, Szymon Soppa shares what we should add to our Elixir CI pipelines so that this optimization and developer efficiency are maximized for production.

In this episode…


  • Continuous integration (CI) and automation
  • Customizing the formatter
  • Configurations in the formatter
  • Functionalities within a library
  • Other tools 
  • Steps for implementation
  • Communicating with your team on CI processes

Sponsors


Links


Picks

Transcript


Adi_Iyengar:
Hey everyone and welcome to another episode of Elixir Mix. Today on our panel we have Allen Wyma.
 
Allen_Wyma:
Hello.
 
Adi_Iyengar:
And myself, Adi Iyengar. We don't have Sasha today, so I'll be the one hosting. So hopefully, it's not too bad. But yeah, we also have a special guest, Szymon Soppa. Did I pronounce that right?
 
Szymon_Soppa:
Yeah, you did it perfectly. Yeah.
 
Adi_Iyengar:
Awesome, awesome. Yeah, we've had Szymon on the podcast before. I think it was close to a year ago. And today he's here to talk about CI pipelines in Elixir. But yeah, before we do that, Szymon, do you want to give a quick introduction?
 
Szymon_Soppa:
All right, so I'm the owner and CEO of a company called Curiosum. We basically started this company with the goal of providing Elixir development services, and it's the only language we use on the backend side. Before that, I had some history with Ruby, but then switched to Elixir completely; we don't do anything in Ruby anymore. On the frontend side, we mostly do React and React Native. However, we are not stopping there. If there's something interesting in Vue, for example, we also consider that, or Phoenix LiveView, of course. And we are located in Poznań, Poland.
 
Adi_Iyengar:
Awesome. Awesome. As I mentioned, you're here to talk about the Elixir CI pipeline you set up. There's also a blog post, right? And yeah, I'm very curious about what you did differently than most places, and what the inspiration was to write this blog post and put extra thought into setting up your CI.
 
Szymon_Soppa:
OK, so the thing that I saw repeatedly over the years is developers arguing about basic things in code review, stuff that you wouldn't really want to spend time on. And I think that every project needs some kind of consistency, and developers should focus on things that really matter instead of small things that don't. I believe that the more tools you have to automate the checks that you would normally do in code review, the better for the whole development process. And because of that, I'm a fan of big Elixir CI, maybe not even Elixir specifically, just big CI pipelines. And by big, I mean a lot of steps that you can automate to check stuff that takes time and should not take time. For instance, in Elixir, we have the formatter. Years ago in Ruby, I would see two devs arguing in code review about how you should indent some function clause or whatever. That is not something you should spend time on. And the more you have in the CI pipeline, the better for the whole development of the project. And that's basically why I thought about creating the blog post. I thought it was a good idea to write it down, to write down as many steps as I think can be put inside of it. I also believe that some developers might not be interested in some particular steps. Some of these steps are DB related, for instance, so if you're just creating a library in Elixir, you might not really need them. But what I have in mind here is a typical Phoenix project that you would develop. Basically, this blog post is a container for all of my ideas for Elixir CI pipelines. I also wanted to hear from the community what they think about it, and maybe they have some additional ideas.
 
Adi_Iyengar:
That's awesome. That's awesome. Yeah, totally agreed with having a standard for the code so people don't waste time talking about things that have already been discussed. I'm very curious about that. How do you customize your .formatter.exs file? Is it generally what mix generates? Or did you guys discuss it? Because I know, for example, Phoenix ignores the migrations in the formatter config, which I'm not a fan of, so we've explicitly removed that from the ignored files. So I'm curious if you guys have any standards, or do you just follow what Phoenix generates?
 
Szymon_Soppa:
I mostly follow what's being generated in this case. However, we have a couple of teams, and if a given team would like to customize it a little differently, then I'm open to it. After all, I want to have a configuration file that's been discussed by the team. But I mostly follow what's generated. The goal for me here is just to have a standard that everyone follows and that the tool enforces, so that we don't have to discuss it.
 
Adi_Iyengar:
Awesome. Very cool.
 
Allen_Wyma:
That's kind of one of the points of the formatter, right? That it's not supposed to be so configurable. I was a little bit shocked when I started seeing some configuration in the formatter, right? Because at the very beginning they were like, no, no, this is it; if you don't like it, don't use it. That was the initial talk, right? And then all of a sudden they're like, well, maybe there are a couple of things we can change. I don't remember what you can even change, because the only thing I actually ever do is pull in formatter exceptions from other projects, right? Like Absinthe and Phoenix, I think they have a separate extension of the formatter. And of course there's the HEEx formatter, which is quite useful. What else can you actually do with the formatter?
 
Szymon_Soppa:
Yeah, you can, for instance, say that the formatter should respect the formatter configuration of a given library, so that you would not reformat the files that are Ecto related, because Ecto has its own formatter configuration. Yeah, that's one of the things you can do in there. I assume you can configure some indentation or something like this, but I'm not really sure.
 
Adi_Iyengar:
I think it's parentheses, that's what the Phoenix migrations formatter config removes, right? Not adding parentheses around the fields. You can also do, I think, sigils and chars, right? Some weird stuff. Yeah, I think these all came from big libraries, like Absinthe and Phoenix. It was a request from those libraries because it fit their DSLs. But for the most part, Allen, you're right. It's not very configurable.
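[Editor's note: for reference, this is roughly what a freshly generated Phoenix project's .formatter.exs looks like; details vary by version. The import_deps line is the library-provided exception mechanism being discussed, and subdirectories is how the migrations keep their own formatter config.]

```elixir
# .formatter.exs, approximately as generated by a recent Phoenix project.
[
  import_deps: [:ecto, :ecto_sql, :phoenix],
  subdirectories: ["priv/*/migrations"],
  plugins: [Phoenix.LiveView.HTMLFormatter],
  inputs: ["*.{heex,ex,exs}", "{config,lib,test}/**/*.{heex,ex,exs}", "priv/*/seeds.exs"]
]
```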
 
Allen_Wyma:
Yeah, I did complain to them once. I was like, why is it that you guys have to change my numbers with the thousands separators? I really complained about that. I thought that was so annoying. I never actually used that. I understand why it's useful, but to me it was just more annoying, because I was looking for an ID, so I would copy the ID and paste it to search for it in the code, but because the formatter had put underscores at the thousands places throughout all the numbers I have hard-coded in my app, I couldn't find it. It drove me nuts.

Adi_Iyengar:
Hmm.
 
Adi_Iyengar:
My question is why do you have IDs hardcoded in your app, but that's...
 
Allen_Wyma:
Yeah, yeah, this is exactly what they came back to me. Why do you have that? I'm like, listen, man, leave me alone. Because these are like IDs that are static, right?
 
Adi_Iyengar:
Right, right.

Allen_Wyma:
And I understand they should be strings and whatever else, but yeah, good point on your side. But then again, I was working with a client at the time who was using Microsoft enterprise stuff, so there was a lot of goofiness. Like every time you want to edit the database... so anyway, my database was not Postgres, it was SQL Server, of course.
 
Adi_Iyengar:
Mmm.
 
Allen_Wyma:
And whenever you want to make an edit to the database, because it's enterprise, I have to run it through what they call stored procedures, procedure calls.
 
Szymon_Soppa:
Okay.
 
Adi_Iyengar:
That's so weird.
 
Allen_Wyma:
Yeah, so it was definitely painful. And how do you automatically test that? It's very difficult, and I don't have access to the migration files for this one either. So it's a lot of crazy stuff.
 
Adi_Iyengar:
Wow, I don't envy you. Yeah, anyway, yeah, the formatter. So it sounds like you use, Szymon, the general kind of configuration that the formatter comes with. What other tools do you use? I briefly skimmed the blog post. It looks like you use Credo and a few other tools. Would you like to talk about that?
 
Szymon_Soppa:
OK, so maybe let's go through a couple of steps that I've included in there, and we can discuss them. Basically, the first one is a no-brainer: you have to fetch the deps if you have a mix project. So that's the first step; you're going to need them in the later stages. And the second step would be to run mix hex.audit. Basically, this task will scan the packages you have in your project and list the ones that are marked as retired. But to mark a package as retired, the maintainer has to do it, and my guess is that not a lot of maintainers keep that in mind. But I still feel like this is something we can add to the pipeline to check these things. And then you can use the task mix deps.unlock with the --check-unused flag to check for dependencies that are no longer used in your project. This is something you can do to remove the unused dependencies, basically.
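[Editor's note: a minimal sketch of how these first steps could be grouped behind a single mix alias that CI calls; the alias name ci.deps is made up.]

```elixir
# In mix.exs: one alias covering the dependency-related steps described above.
defp aliases do
  [
    "ci.deps": [
      "deps.get",                    # step 1: fetch dependencies
      "hex.audit",                   # step 2: fail if any dependency is marked retired
      "deps.unlock --check-unused"   # step 3: fail if mix.lock holds entries nothing declares
    ]
  ]
end
```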
 
Adi_Iyengar:
Yeah.
 
Szymon_Soppa:
The next thing is something you would know from npm, which is npm audit, which scans the packages in a project for known vulnerabilities. And there is something in the Elixir world that does that as well, and it's called mix_audit. However, the problem is that it depends on the goodwill of the open source community, who have to list all of the vulnerabilities they come across. And I could see that there are only a few of them listed inside of this lib, so I wouldn't say this step is something that you have to include in your project, because, for instance, if you use GitHub as your Git hosting, you can use Dependabot, which now has support for Elixir and Erlang. And I would say that this is probably a better choice for this kind of scan.
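[Editor's note: a sketch of pulling in the mix_audit package mentioned here, so CI can run `mix deps.audit` against the community advisory database; the version constraint is indicative only.]

```elixir
# In mix.exs: mix_audit is a dev/test-only tool dependency.
defp deps do
  [
    {:mix_audit, "~> 2.1", only: [:dev, :test], runtime: false}
  ]
end
```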
 
Adi_Iyengar:
Do you think it makes sense to use mix hex.audit as well? You know, you mentioned it depends on the community's goodwill, but the more people use it, the more likely it is that people will report things. I forget which one it was in Ruby that did that, but there was a similar one in Ruby. Initially it wasn't very good, but it evolved into something that every good engineering team had in their CI. What are your thoughts on that?
 
Szymon_Soppa:
So your question is whether it's still a good idea to use it, right?
 
Adi_Iyengar:
Yeah, totally.
 
Szymon_Soppa:
OK, so I think it is still worth it. It's an additional step that takes just a few seconds, a few moments, to check. And at the same time, if there is, after all, someone who marks a lib as retired, then I will at least know it. Yeah, basically that's my answer to it. As I said, I'm a fan of big pipelines, and if there is a slight chance of knowing before deploying that something can break or something is not right, then I would like to know it.
 
Adi_Iyengar:
I have a quick question for you. So you would probably add mix_audit, and I have not used this yet, but you'd probably add that in the dev and test environments, or maybe just the dev environment, right? Because you're only using it for a mix task, would mix deps.unlock --check-unused catch that? Because it's still not being used in the code, right? I'm curious because you use both.

Szymon_Soppa:
Um, yeah.
 
Szymon_Soppa:
Yeah, so the question is where we are going to catch the libs that are used inside of production, but inside of CI?
 
Adi_Iyengar:
Yeah, yeah.
 
Szymon_Soppa:
Yeah, so unless you have some specific libs for production and you're using the mix envs for that, then yeah, you might have a problem catching those. However, I'm not 100% sure in this case, because the question here is how it's actually checked, whether it's checking for all of the envs or simply for the env that you're currently using inside of this shell.
 
Adi_Iyengar:
Right.
 
Szymon_Soppa:
Not 100% sure here.
 
Adi_Iyengar:
Makes sense. I was very curious because, again, I haven't used either of those in the CI. I've never used hex.audit, and I've not used check-unused with a dependency that only provides a mix task. But anyway, yeah, that's very cool. I'm so sorry, I'll let you continue with your pipeline.
 
Allen_Wyma:
Now, I just want to follow up on this one, though.

Szymon_Soppa:
No problem.

Allen_Wyma:
So the unused check is looking for dependencies that are, obviously, sorry, how do you say that? Would you call that first level, like if you actually required them? I forget the word, there's transitive and there's explicit or something like that for dependencies. So if you require a package A, which requires package B, and you check for unused, B will still show as being used by another package. The unused check is supposed to be for when you no longer declare package A in your mix file, but because of the way mix works, it doesn't actually remove it from the lock file.

Adi_Iyengar:
Right.

Allen_Wyma:
So it would say, oh, it's not in the mix file and not a dependency of something else, so therefore it's actually unused. That's how the thing would work. So it would not be caught. Yeah.

Adi_Iyengar:
I got it.
 
Adi_Iyengar:
Got it. If you add something, it's checking the diff between your mix.exs and mix.lock, not that anything in mix.exs is not used.

Allen_Wyma:
Correct.

Adi_Iyengar:
Got it, got it, very cool. So it's not the required one, it's the transitive one that you're talking about, like a secondary dependency. That's very useful.
 
Szymon_Soppa:
All right, so the next step, the fifth one, would be to run mix format. However, you want to include the --check-formatted flag here to get a proper exit status from the check. If your code is not formatted, you're going to get a failing status, which will fail the whole pipeline. Yeah, I would say nowadays I see no reason not to include it in a CI pipeline. I think it should be there. And basically, I assume that the whole community already knows what mix format is, so this is just an easy step here.
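[Editor's note: the same alias pattern works for the formatting step; --check-formatted makes mix format exit non-zero instead of rewriting files, which is what fails the pipeline. The alias name is made up.]

```elixir
# In mix.exs: the formatting check as CI would run it.
defp aliases do
  [
    "ci.format": ["format --check-formatted"]
  ]
end
```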
 
Adi_Iyengar:
Make sense?
 
Szymon_Soppa:
The next step would be compiling the code. But what I do here is use an additional flag, --warnings-as-errors, which basically won't compile the project if you have a warning in it. You can also set an option inside the mix.exs file to include this flag for the whole project, so whenever you call mix compile, it's going to add this flag automatically. But you might not want to do that, and there is a good reason: during development it can be annoying, because you just want to test something out and you remove some part of the definition of a function. If you remove some part of the definition of a function, then some variables may not be used anymore, which causes a warning, and the project will not compile. Because of that, you might not want to include it for the whole project. So this is a thing you can do: you can just put it into the CI. But after all, it's, yeah.
 
Adi_Iyengar:
Yeah, I was just saying, I think it's actually really cool, checking for warnings and treating warnings as errors. I think one problem with that is that environments do matter, right? If you check it with MIX_ENV=test you might get some warnings, and if you check it with MIX_ENV=prod you might get different warnings. So that's the case for maybe not adding it to your mix.exs file, where mix compile always does it. Because say you're running it in dev locally, right? There are no errors. You run the tests, there are no errors. But when you deploy, because you added it to your mix.exs, the compilation might turn warnings into errors and your production deploy might break because of a warning, right? Because again, the environments matter here. Yeah, I don't know, Szymon, how do you deal with it? Do you run it in three environments? I've seen places that do that: mix compile with warnings-as-errors in prod, dev, and test. I've seen people do that, but it might have diminishing returns at that point. I'm curious how you deal with that.
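[Editor's note: a minimal sketch of the mix.exs option Szymon describes, gated on an environment variable so warnings only become hard errors in CI and local iteration stays forgiving, one way to sidestep the per-environment concern raised here. The app name and the CI variable are assumptions.]

```elixir
# In mix.exs: warnings_as_errors only when the CI environment variable is set.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    elixir: "~> 1.14",
    elixirc_options: [warnings_as_errors: System.get_env("CI") == "true"],
    deps: []
  ]
end
```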
 
Szymon_Soppa:
Do you mean people using it for all of these environments or putting into CI a compilation that would use this flag?
 
Adi_Iyengar:
In CI.
 
Szymon_Soppa:
OK.
 
Adi_Iyengar:
In CI.
 
Szymon_Soppa:
No, I would normally just run it for one env, not for all of them, to be honest. So no, I haven't used it for those kinds of scenarios.
 
Adi_Iyengar:
Make sense.
 
Szymon_Soppa:
Okay, so this is the compilation step. And yeah, I just want to emphasize that sometimes a warning is actually a pretty important thing that can break your app, so you don't want to treat it like it doesn't exist. That's basically why you would probably want to include this flag with mix compile. And the next step is, as I said, one of those steps that are actually DB related. So treat this step as something that you should use in a project where you actually use a DB. This is a step that checks the ability of the DB to fully roll back and migrate, or rather migrate and roll back. Because sometimes we write migrations in a way that they can migrate, but they cannot roll back. And this is actually a huge problem, because if for some reason you'd like to roll it back on production because it broke something, then what do you do? You have to fix the migration itself, because you cannot roll it back. And sometimes you get a couple of migrations like this, and it's really annoying if you work within a team and you cannot perform such a basic operation as a rollback. So this step is basically to ensure that your DB is able to roll back the migrations you add to the repo.

Adi_Iyengar:
I think that's a great one. I have never tried this on a CI. I am definitely going to add this, because you bring up a great point. You want to make sure your schema migrations are rollbackable. So yeah, this is a great one.
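[Editor's note: a sketch of the migrate-then-rollback check as a CI alias. `mix ecto.rollback --all` reverts every migration, so it only passes when each migration's down/change step is actually reversible; the alias name is made up.]

```elixir
# In mix.exs: verify that all migrations apply cleanly and can be undone.
defp aliases do
  [
    "ci.migrations": [
      "ecto.create --quiet",
      "ecto.migrate",
      "ecto.rollback --all"
    ]
  ]
end
```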
 
Szymon_Soppa:
Yeah, that's pretty, you know.

Allen_Wyma:
Have you ever actually rolled back a migration, though? I've never rolled back a migration. Just kind of curious.
 
Adi_Iyengar:
Last week.

Allen_Wyma:
Really?
 
Adi_Iyengar:
Yeah, I mean, it depends, you know. Sometimes, if you have a lot of data in your database already, and you work in a fast-paced, quick, iterative environment where one stream of work can overlap with another stream of work, the likelihood that one migration might affect something and break something in production, or production data, is pretty high, or a lot higher than you would think, right? So it just happened to us last week, and luckily our migration was rollbackable, but a CI check to make sure that's the case would be really nice.
 
Szymon_Soppa:
I had this case a couple of times.

Allen_Wyma:
Yeah, okay. Because I talked to somebody else. Oh, sorry, I talked to, I think it was Ben Wilson. I don't know why we talked about this, I just remember this conversation. I think he told me that at that time, and this is many years ago, he had never rolled back a migration. They usually just write another migration and roll forward with the next one. Now, that's the opposite of what you just did. I don't know, I highly doubt that; maybe it was like an additional fix or something. But that's what he said. I specifically remember this conversation with him, but I can't remember exactly what was said. I just remember he said he had never rolled back a migration before, which, I mean, I've never actually rolled back a migration either, I think. Yeah, I don't think I've ever done one before.

Szymon_Soppa:
Even in dev?

Allen_Wyma:
Well, dev, yeah, but to be honest, I could just ecto.reset, right? If I really can't roll back, because some migrations you really cannot roll back, right? There are some that you cannot. But mostly you can. What is it that you cannot roll back? Like if you're removing data, if you remove a database column, you can't really recreate that necessarily. There could be a time where you're actually removing a database column for whatever reason. Let's say that you, I don't know, hashed a password wrong or something. I don't know, just whatever, right? I mean, I can't come up with a really good reason why you cannot roll back something, but there probably could be something where, if you removed a column, you can't necessarily recreate it, depending on what that data is. But this is probably a conversation for, should we be doing, what is it, CQRS, that kind of thing.

Szymon_Soppa:
Yeah.
 
Adi_Iyengar:
I mean, there is a way you can add or remove a column that is rollbackable. Yeah, not the data, right? You're right about that, you'll lose the data, but yeah. Yeah, that makes sense. I mean, in that case, the need for it to be rollbackable is probably a lot less, I would think. The instances where I needed migrations to be able to roll back were when I added or updated a column. Generally it's like updating string to text, that's a problem I've had a couple of times, or changing from text to JSON for the interpretation of columns. That was the error we had last time, so that's where we needed it to be rolled back. But I think you do make a good point that maybe rollbacks in general can be rethought, and all schema changes should go as part of a new migration. But then you have to create a new release for a patch.
 
Szymon_Soppa:
I think I have a good case where it would be a good scenario to do a rollback. It's like, for some reason, you change the name of a column inside of a table. And this change should also be applied in all of the places where you use this new name of the column. Let's say this is a pretty critical column for the whole system, and the developer forgot to apply this new name in a couple of places, and this causes a huge problem in production. You would like to just quickly come back to what you had before, so you just rename the column back to what it was, and in the meantime, you fix the problems. So yeah, this is something that came to my mind that can be a problem.
 
Adi_Iyengar:
Yeah, I think that's a great example. Another one would be, I think a lot of people say maybe adding tests would fix this, but if you add another layer of dependency where your columns are being accessed through an API, like if you use Hasura or something that converts your database to GraphQL, then changing one column might change a few associations and a column on Hasura, and if that wasn't done correctly, it might break a bunch of your APIs and stuff. And that's hard to test as well. So yeah, that's another use case. But you're right, changing a field name is generally a good one, yeah.
 
Szymon_Soppa:
And it's pretty easy to do the whole operation for this check, because there is a flag in ecto.rollback called --all that will just roll back all of the migrations back to the beginning. So I would say I would probably never use it on a daily basis in development, but for the purpose of this check, there is such a flag. Yeah, as I'm saying, this only applies to a project where you actually have a DB. And finally, the next step would be to run mix credo. And this is actually a place where I would say most developers like to have a custom configuration, because the team would like to have the aliases in alphabetical order for some reason, or any other stuff. And it's actually up to you which code checks you enable or disable. However, this is a huge thing, I would say, that really has a huge implication on what's in the code base. Yeah. And the next step would be something that actually does security-focused analysis. And I just want to say that it doesn't mean your code will be 100% secure if you enable this, but it does a couple of checks that may help you in the project: things such as SQL injection, cross-site scripting, insecure configuration, et cetera. You can see a list of the things it checks on GitHub, for instance. And I also think this is something nice to include, because why not have something that takes care of your security? And then Dialyzer, the next step. I know that some people do not put specs in the project. Yeah, but I'm not one of them. I do use specs, and actually Dialyzer, even without specs, will also print out some type mismatches, but the specs help you go even further with Dialyzer. And, yep.
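[Editor's note: a sketch of the analysis tools from this part of the pipeline as dev/test-only dependencies: Credo for consistency checks, Sobelow for the security-focused scan, Dialyxir for running Dialyzer. Versions are indicative; in CI you would then run `mix credo --strict`, `mix sobelow` (with its exit-status option enabled so findings fail the build), and `mix dialyzer`.]

```elixir
# In mix.exs: static-analysis tooling kept out of the production release.
defp deps do
  [
    {:credo, "~> 1.7", only: [:dev, :test], runtime: false},
    {:sobelow, "~> 0.13", only: [:dev, :test], runtime: false},
    {:dialyxir, "~> 1.4", only: [:dev, :test], runtime: false}
  ]
end
```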
 
Adi_Iyengar:
I have a question about Dialyzer for you. So in our hex packages, we have a custom test package and a custom events package, and we do use specs. But in an actual Phoenix project, I just haven't had success using Dialyzer in a continued manner. The effort to maintain that and the number of errors that come up, in my eyes, generally don't justify the reward. And I'm curious what your experience has been: do you use it for all your actual Phoenix projects as well, or just dependencies?
 
Szymon_Soppa:
I'm using it for all of the projects we run at Curiosum. So your question basically is whether we use it in all of the projects. But... okay, so can you rephrase it one more time?
 
Adi_Iyengar:
Yeah, I have had a lot of errors with Dialyzer, right? A lot of unpredictable errors, and honestly not caused by errors or bugs in our type specs, right? And it came to a place where the amount of time we were spending maintaining Dialyzer was, I don't know, some unreasonable six, seven hours a week on average over the course of a couple of months. At that point, I decided it wasn't worth maintaining Dialyzer anymore, or having it as part of the CI. I know you guys have been doing Elixir for at least a year, if not longer. So I'm curious what your experience has been with Dialyzer. Have you had these kinds of intermittently failing, weird errors, which are a lot of times hard to debug, pop up? And if you do, how do you deal with that?
 
Szymon_Soppa:
OK, so you're definitely right about weird errors, because there are a lot of errors that aren't really descriptive enough to figure out what Dialyzer would like to be fixed, something that doesn't really give a clue about what happened. And you have to dig into the function, or even other functions that are being called inside of it, to realize what could actually be happening behind this error. So yeah, those kinds of errors happen. I don't spend hours on writing the specs or figuring out what Dialyzer would like me to fix. I would say that at this point it's more like less than an hour a month to apply some changes if something breaks. However, it's true that at the beginning, or for the first couple of months when you use it, it sometimes happens that you have to spend a couple of hours to fix Dialyzer. And yeah, it kind of sounds like an effort not worth the reward, as you said. Maybe with time you start to realize what a given error from Dialyzer might mean and how you can fix it. However, I wouldn't say Dialyzer is the best tool in the world. It's not, it's just a help in the project. And it did help me a couple of times to eliminate a problem before I deployed it into production. So it did help, and I can imagine that with a growing project it can help even more. But yes, sometimes it feels like it's too much pain to go through it, hard to justify the hours spent on it, because sometimes the errors are not very descriptive and you try to write the specs, and at the same time you feel like you could have spent less time debugging the actual error if you had pushed it into production. I think it's worth it once the project is big enough and there are a lot of functions that may fail when it comes to types. Then you can see the reward. That's my opinion.
 
Adi_Iyengar:
Allen, do you have a similar experience as Szymon with Dialyzer, or do you even use it in your projects?
 
Allen_Wyma:
I tried to use it before, it took forever, and then I just never got back to it. I do use type specs, mostly just for documentation. But yeah, I'm debating adding it in, but I feel like there's such a ramp-up time just to run the damn thing, right? Is it really worth it? I don't know. Maybe.

Adi_Iyengar:
Yeah.
 
Szymon_Soppa:
But it's mostly the first time you run it; after that it's much faster. And that's why you want to cache some of these things in CI. One of them is the PLT file generated by Dialyzer. So yeah, it's faster after the first run, but the first one is painful, sometimes tens of minutes. All right, the next step is somewhat correlated with Dialyzer, because we use specs there too. And as I said, Allen, it's also worth adding the specs for the sake of documentation, and actually the next step is about that. There is a library called Doctor in Elixir to maintain a proper coverage for the documentation of modules and functions, as well as the specs. It's my personal view, but I mostly prefer to write a spec and name the function in a proper way, as well as the arguments, rather than writing big documentation for the function and what it does. I think that's very useful for libraries, but I feel like it's a waste of time to document every function within the code just for the sake of having documentation. But that's my point of view. And Doctor allows you to configure what's been discussed within your team. So this step helps you keep track of the coverage of your specs and documentation. You can set, for example, that you'd like to have, I don't know, 60% coverage of specs within the project, or at least 50% coverage for the documentation, and then it helps you maintain those numbers. If you would like to have a specific coverage number within your project, agreed with the team, you can include it with a configuration file so that you have this check enabled in the CI as well. That's basically what Doctor does.
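[Editor's note: a sketch of the kind of .doctor.exs file Doctor reads, with thresholds mirroring the 50%/60% example above. The field names are from memory and the library can generate a starter config for you, so treat this as illustrative only.]

```elixir
# .doctor.exs: minimum documentation/spec coverage agreed with the team.
%Doctor.Config{
  ignore_modules: [],
  ignore_paths: [],
  min_overall_doc_coverage: 50,
  min_overall_spec_coverage: 60,
  moduledoc_required: true,
  raise: false,
  umbrella: false
}
```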
 
Adi_Iyengar:
Have you guys used Inch?
 
Szymon_Soppa:
No.
 
Adi_Iyengar:
Inch is, I think, not just for Elixir, I think it had a Ruby one too, but it basically grades your documentation. I think Doctor might be a little better for Elixir, maybe, because it interprets type specs and stuff as well, but I know Inch is more of an industry standard, at least in the Ruby and Elixir communities. And I think it just grades your overall documentation A, B, C, or D. I'm not sure how configurable it is.
 
Szymon_Soppa:
That's why I actually love to talk about CI because I always learn new stuff.
 
Adi_Iyengar:
Nice.
 
Szymon_Soppa:
I'll check this out.
 
Adi_Iyengar:
Yeah. Cool, sorry, go ahead. Let's, yeah, continue.
 
Szymon_Soppa:
All right, so the last step, I think it's the last one. Yeah, the last step is to finally run the tests. I don't think I have much to say here, apart from the fact that you can, for instance, use tools like ExCoveralls to set the test coverage you would like to have for the project, so that, I don't know, you can keep track of 80% coverage for your project. You can do that with ExCoveralls, for instance. But apart from that, it's just, yeah, the step everyone should do: running the tests, writing the tests, and that's it.
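[Editor's note: a sketch of wiring ExCoveralls in as the coverage tool so CI can run `mix coveralls`; a minimum coverage threshold (the 80% mentioned here, or the 100%-with-explicit-skips approach discussed next) can then be set in its coveralls.json file. App name and versions are illustrative.]

```elixir
# In mix.exs: use ExCoveralls for test coverage reporting in CI.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    elixir: "~> 1.14",
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: [coveralls: :test, "coveralls.html": :test],
    deps: [{:excoveralls, "~> 0.18", only: :test}]
  ]
end
```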
 
Allen_Wyma:
Yeah, definitely the coverage one is really a good idea. I think the default is 90%, and I think that could already be higher. I know Adi's like, no, you've got to go for 100, right? You're the one who says you have to have 100.

Adi_Iyengar:
Yep.
 
Adi_Iyengar:
Well, again, just to talk about that philosophy of 100: I prefer 100% code coverage and explicitly ignoring files that you don't want to test, right? Because if you put 90% as the coverage, a new PR where you've tested only nine lines out of ten will still be accepted. But all your new PRs should be 100% tested unless you explicitly add something to the coveralls ignore list. So the 100% comes from that philosophy: 100% of the code that you want to be tested should be tested, right?
 
Allen_Wyma:
Uh, do you test your, you know, those default files that come in automatically? I'm guessing you probably ignore those, right?
 
Adi_Iyengar:
No, we ignore those. Yeah. I do have an endpoint test and a router test that we add to our Phoenix apps. It actually does matter how the endpoint functions with a specific web server; we use that for load testing. So I mean, again, you can find reasons to test different parts of the code, but because it's part of our template repositories, every Phoenix app we create from those templates gets the tests carried over. So it's zero effort to maintain.
 
Szymon_Soppa:
All right. And I would say the bonus thing in this post was about local CI. And this is pretty easy stuff, because, well, I guess most of us use Git for code versioning, and there is something called Git hooks. Basically, this is a script that's run before, for instance, a commit or a push, and it can block that push or commit: if it fails, then the commit will not be performed. So this is a nice thing to do if you would like to ensure that you don't push code that will fail anyway on the remote CI. You can either just edit a pre-commit file in the hooks folder, or you can add a library from the Elixir world called, just a moment, it's called git_hooks, which will inject the commands into pre-commit, for instance. You specify in there the commands that should be run before the commit. The configuration is done in a config file in the Mix project, and the hook file itself is injected during the compilation process, mix compile, so that every dev will have it. Of course, it's possible to skip it. You can always, for instance, use git commit with the --no-verify flag, and then it will just skip the hooks. Or you can simply remove the pre-commit file that's generated during the compilation process. But that's not the goal here, I suppose. So I would say put only operations here that do not take ages to perform, stuff like checking the formatting, or something else that is not that long to run. Because if you include all of the steps and the project is big enough, then it can sometimes take, I don't know, even five minutes to do the commit.

Adi_Iyengar:
Does that make sense?

Szymon_Soppa:
And I know that this is a topic that not every dev agrees on. So just do it if it feels like you'd like to have it in your project and your team agrees on it as well.
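[Editor's note: a sketch of configuring the git_hooks package described above; the shape follows the library's documented configuration as remembered, so double-check against its README. Only fast checks go in pre_commit; heavier ones can go in pre_push.]

```elixir
# In config/dev.exs: which mix commands each Git hook should run.
import Config

config :git_hooks,
  auto_install: true,
  verbose: true,
  hooks: [
    pre_commit: [
      tasks: [
        {:cmd, "mix format --check-formatted"}
      ]
    ],
    pre_push: [
      tasks: [
        {:cmd, "mix credo --strict"},
        {:cmd, "mix test"}
      ]
    ]
  ]
```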
 
Allen_Wyma:
So we don't want to run Dialyzer on a pre-commit hook, right?

Adi_Iyengar:
Hehehehe

Szymon_Soppa:
Yeah, you probably won't run it within your team.

Adi_Iyengar:
I think another thing about the whole Dialyzer stuff, what it adds, the types, and types are obviously very useful, type checks and stuff. We are actually starting to think that if there's a part of the project that is complex enough that types would make a huge difference, maybe use Gleam, and in fact we're using it for one of our upcoming projects in production. Or another language, really, but Gleam is easy because you can just compile it and call it within a Phoenix project. So yeah, it maybe solves that Dialyzer problem within a small context or a domain. So maybe that's an approach: if Dialyzer is too much of a pain and your project is already big enough, and you're wondering where to start adding Dialyzer, maybe start by creating a hex package with Dialyzer, a small package which would take less effort to maintain, or just use Gleam for that small part.
 
Szymon_Soppa:
Yeah, after all, I would say that when it comes to creating these CI pipelines, just talk it through with your team, because I know that some of the steps might not be very welcomed by the other members of the team. Yeah, and you just want good decisions to be made at an early stage of the project. It's very hard to, for instance, add Credo to a four-year-old project where developers didn't really take good care of it. We've seen that at Curiosum, and it sometimes takes weeks to fix all of the problems.
 
Adi_Iyengar:
Yeah, wow.

Szymon_Soppa:
So yeah, at the early stage of the project, just talk this through and decide what should be included in it.
 
Adi_Iyengar:
Do you guys have any periodic checks that you run? Because all of these seem like checks that would be set up on a push, right? If you push a commit or merge to main. But have you dabbled with other triggers? One of the reasons to do periodic checks is, if you're using ExVCR for an API call, to make sure every night the cassettes are updated and that your tests still pass with those updated cassettes. Or, for a dependency that you're using internally, to make sure it's always at the latest version. Stuff like that, have you dabbled with that?
 
Szymon_Soppa:
No, we don't. However, it does seem like a good idea. So maybe do you do that in your project?
 
Adi_Iyengar:
Yeah, we use internal dependencies very heavily. We have an events library which is like the core of our event-driven architecture. So in the mix.exs, for example, we have events at greater than or equal to 0.0.0. And in that CI task, we just run mix deps.update events and check if there's any git diff in mix.lock. If there is a git diff, it breaks; that means you're not at the latest version. And if there isn't, it just runs. And we check that every night to make sure, not only on triggers but also every night, that all the apps are at the correct version.
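[Editor's note: a sketch of the nightly "are we on the latest internal dependency?" check described here; `events` is their internal package, and `git diff --exit-code` returns non-zero when mix.lock changed, which fails the alias. All names are illustrative.]

```elixir
# In mix.exs: an alias a nightly CI job can call.
defp aliases do
  [
    "ci.deps_fresh": [
      "deps.update events",
      "cmd git diff --exit-code mix.lock"
    ]
  ]
end
```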
 
Szymon_Soppa:
So I think I have the next step in the CI process, thanks to you. It really sounds like a good idea, so I think I'm going to add it. Yeah.

Adi_Iyengar:
Nice.
 
Allen_Wyma:
So here's something that you may consider adding to your stuff. I think, I forgot what the name is, I think it's called Is It Down or something? I can get the name. Have you heard about this one? It automatically just pings your site every five minutes or something like that. Do you know what I'm talking about?
 
Adi_Iyengar:
Right, like a health check?
 
Allen_Wyma:
Yeah, hold on. Let me get the name, because it's a really good deal. Yeah, hold on a second. Here it is: Uptime Robot. So with this one, you get 50 free health checks. Yeah, they're health checks, right? And you'd think it's not super important or whatever, but one thing I ran into, very interestingly, is that I usually deploy on Kubernetes and we use cert-manager, which pulls the latest certificate. Well, I got a message from one of my clients and he was like, hey, nobody can use the app right now. And it's all API based, so it's a mobile app and we have GraphQL endpoints, and it wasn't working. And I thought to myself, okay, what's going on here? And then we just ran the app locally to see what was going on. What actually happened is that the certificate failed to renew using cert-manager. So the app was running, but in the infrastructure it wasn't actually working. And Uptime Robot will actually check your cert and let you know when it's, like, seven days from needing to renew, et cetera. So yeah, that's something I think is good to have. It's not really Elixir related, nothing to do with Elixir itself, but I think it's a good thing to have overall, because I've never had that happen before, where a certificate failed to renew.
 
Adi_Iyengar:
I feel like that's more of an SRE thing than CI. I consider CI more towards development, I think, you know.

Allen_Wyma:
I'm sorry, I'm not CI only. Okay, sorry. You said periodic jobs, I was thinking about that rather than a CI job.

Adi_Iyengar:
Right, right, right. I guess a periodic thing that's relevant to development. But I mean, you bring up a good point, like uptime monitors. I know a lot of Elixir teams use AppSignal, right? It works really well. AppSignal also has very configurable uptime monitors. And yeah, I think it totally makes sense. Every Phoenix app that we launch, we add an /api/status endpoint to it. And there's a deep check and a shallow one: the shallow one just checks a call can be made, the deep one checks the database connections are alive and stuff. Having those endpoints in all your Phoenix apps kind of does make sense too. And that API check, the shallow one, will make sure the certificates are up to date and the app is accessible on the web as well. But yeah, I mean, this SRE stuff, with Kubernetes and all the cloud stuff, the line between dev and ops is already very blurry, so you can very well make that part of your CI.
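[Editor's note: a hypothetical sketch of the shallow/deep status endpoints described here. Module, route, and repo names are made up; the deep check simply verifies that the database connection answers a trivial query.]

```elixir
# A status controller in a hypothetical Phoenix app.
defmodule MyAppWeb.StatusController do
  use MyAppWeb, :controller

  # GET /api/status -> shallow: the app is reachable and serving requests
  def shallow(conn, _params), do: json(conn, %{status: "ok"})

  # GET /api/status/deep -> deep: the database connection is alive too
  def deep(conn, _params) do
    case Ecto.Adapters.SQL.query(MyApp.Repo, "SELECT 1", []) do
      {:ok, _result} ->
        json(conn, %{status: "ok", db: "ok"})

      {:error, _reason} ->
        conn
        |> put_status(:service_unavailable)
        |> json(%{status: "degraded", db: "down"})
    end
  end
end
```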
 
Allen_Wyma:
Yeah, I've actually never thought about monitoring database connections, but apparently they can go down at any moment, right?
 
Adi_Iyengar:
Oh yeah, again, depending on your scale, it happens very often with the events part of our systems, because for everything, every event, every update, every access to the site, we generate events. And it's very important for us to monitor. Every minute, we make a call to our event system to make sure it's up. It also took us a lot of effort to get it to zero downtime, which wasn't, you know, something Phoenix out of the box was set up for.
 
Allen_Wyma:
Now, do you run, what is it that Netflix does? They have like a self-killing thing.

Adi_Iyengar:
Chaos Monkey, Chaos Monkey.

Allen_Wyma:
Yeah, I was gonna say Chaos Monkey, but I wasn't sure of that. Do you?

Adi_Iyengar:
We have one.
 
Adi_Iyengar:
We have a similar one. We use stream_data to generate the bad data, and the whole chaos setup is within a stream_data property test to make sure the results are correct. We run it as a test as well. It's not nearly as sophisticated as Netflix, but what I like about it is that if there's something complex, like an NP-hard problem, instead of humans you're letting the machine find errors for you that you can then address to solve the edge case. Because if there's an error, it does print: here is the date of birth of the patient, this is the appointment and time, and here's the dental office it was working on, all the stuff relevant to our domain. And we can literally copy and paste that into a factory call, set up the data, and replicate it locally very easily. So we've caught, I want to say, like 16 bugs in two weeks in our new scheduling portal that we launched. It's an NP-hard problem, so it's huge if you're solving an NP-hard problem. Types would have helped for sure, but if you don't have types, property-based testing with chaos engineering is very useful. But I guess we're beyond CI now, I think.
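[Editor's note: a hypothetical, much-simplified sketch of the property-based testing style described here, using the stream_data package: generate lots of random scheduling input and assert one invariant, so any failing input gets printed (and shrunk) automatically. MyApp.Scheduling.assign/1 and every domain detail are made up.]

```elixir
# Property test for a hypothetical scheduling module.
defmodule MyApp.SchedulingPropertyTest do
  use ExUnit.Case, async: true
  use ExUnitProperties

  property "assigned appointments never overlap" do
    check all starts <- list_of(integer(0..1_380), min_length: 1, max_length: 50),
              duration <- integer(15..120) do
      requests = Enum.map(starts, &%{start: &1, finish: &1 + duration})

      # Hypothetical function under test: picks a non-overlapping subset.
      assigned = MyApp.Scheduling.assign(requests)

      overlaps =
        assigned
        |> Enum.sort_by(& &1.start)
        |> Enum.chunk_every(2, 1, :discard)
        |> Enum.filter(fn [a, b] -> b.start < a.finish end)

      assert overlaps == []
    end
  end
end
```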
 
Allen_Wyma:
We're in a...

Adi_Iyengar:
Szymon, have you guys played around with this?
 
Szymon_Soppa:
I'm sorry once again, can you repeat it?
 
Adi_Iyengar:
I was curious if you guys have played around with any kind of chaos or even like property based testing.
 
Szymon_Soppa:
Yeah, I played a little bit, but it's not a part of every project we run. Yeah. But definitely something to consider.
 
Adi_Iyengar:
Yeah, this is really cool. I definitely learned a lot from this. I did not think about the rollback stuff for sure, and we also don't use the deps audit. So I am very excited to go back and update our CI to add these things. So thanks for the knowledge here, Szymon.
 
Szymon_Soppa:
Yeah, thank you, too. As I said, it's always a nice thing for me to talk about it, because I don't want to spend too much time on code reviews. Of course, you can't automate everything, but still, automate as much as you can, and I'm for it.
 
Adi_Iyengar:
And do you have any other closing questions or thoughts?
 
Allen_Wyma:
No, I literally took this link and I sent it to somebody, one of my people. And I was like, we should implement at least some of this stuff, I think really most of it. The database thing, as you know, I'm a little bit not too sure about, but everything else I think is pretty good. Especially, I mean, Sobelow is good. I had no idea about the other stuff. And then also Inch is interesting too, so that's something I may consider also, because I'm trying to push everybody to do docs, because they're very useful.
 
Adi_Iyengar:
That's great. Awesome. Well, I guess this is it for our podcast. We do do picks, Szymon. I'm not sure if you remember from your last time. So I'll let you, you know, have some time to think about your picks. But Allen, do you want to go first?
 
Allen_Wyma:
Yeah, so I just started reading this book because I'm working on, I had the staging environment set up and I need to get a demo and a production environment set up on AWS. And I've been trying to get more and more into Terraform. So I just started going through this book called Terraform in Action from Manning, and it's been super helpful. I've just been reading from the beginning, and I had gotten through Terraform by myself before, and just reading this book, I think I'm on the second chapter, it's not a lot, but it already explained quite a bit more to me than when I was trying to crash course through it all. So I think it's a pretty good book. It's a little bit old, but to be honest, it's still about the same; I don't see much difference so far. So that's my pick for this week.
 
Adi_Iyengar:
Awesome. Awesome. I have a couple of job-related picks. This time it's candidates. I have two really good candidates who are still looking. Both are late entry-level, early-mid Elixir engineers. Both are amazing. Their names are John Hitz and Neil Techni. I'll put their contact information in the show notes too, but if you guys need more information on them, reach out to me. I've been mentoring them for some time. They also join a weekly mentoring group that I'm a part of with Bruce Tate; he's the one who leads that. So they've been really investing time into learning Elixir, very motivated people. So yeah, if you need two awesome, motivated, entry-level Elixir engineers, these would be great candidates. That's it for my picks today. I don't have any video games or anything else today, for a change. So, Szymon, do you have any picks?
 
Szymon_Soppa:
I have an invite. We have meetups every month, Elixir meetups. And because of COVID, we decided to do them online; maybe someday we'll do a hybrid. But we meet every month, and you can sign up for that on our page, curiosum.com. At the bottom, in the footer, you can see the next Elixir meetup; maybe you can also link it in the podcast episode notes. And as I said, it's happening every month. You can join as a speaker or as a listener, but there are around 50, 60, one time it was 70, people joining, and there's a lot of knowledge in there. So if you'd like to share it or learn, then feel free to join.
 
Adi_Iyengar:
Yeah, I have seen you guys advertise it on LinkedIn, and I've been wanting to attend one for a while; the time just hasn't worked out for me. But some of them sounded really interesting. I know you had one about, I think, career development, if I'm not mistaken. There was one about becoming senior, going from junior to senior, something like that. I can't remember exactly, but I know some of the topics are very interesting. And we're connected on LinkedIn, so I get those notifications every now and then. So yeah, I can recommend that as well, for others to join. Those all sound like very interesting topics.
 
Szymon_Soppa:
Yeah, we try to cover different topics as well, so that it's not only for seniors, but also juniors, et cetera. So yeah, you can expect different types of topics in there.
 
Adi_Iyengar:
Very cool. Awesome. Well, I think this is it for this week, folks. We will see you next week. Until then, have a good week doing Elixir, or whatever you do. Bye.
 
Szymon_Soppa:
Bye bye, see you guys.