Sascha Wolf:
Hey everyone, welcome to another episode of Elixir Mix. This week on the panel we have Allen Wyma
Allen:
Hello?
Sascha Wolf:
and Adi Iyengar.
Adi:
Hello, that was an interesting... Hello, Allen.
Allen:
Yeah, I think I'm going through puberty again. Sorry.
Sascha Wolf:
Well, it's starting great already, isn't it? I'm Sascha Wolf, and we have a special guest this week, as so often. This week, it is Kevin Matthew. Kevin, why don't you tell everybody why we love you, why you're here, and what the topic of today's podcast is going to be.
Kevin:
Thank you for having me, everyone. And yeah, as Sascha already mentioned, I'm Kevin Matthew. I work as a backend developer for a company that creates, basically, loyalty programs for brands, and we do all of it on blockchain. So yeah, I do a bit of Web3 work. I don't necessarily write the smart contracts, I just write the programs that interact with them, then a lot of back-end work, as I already mentioned, and some other integrations as well. Apart from my day job, my hobby is also programming, and that's what I used to do most of in college. So just code, code, code the entire day and work on several different projects. Apart from that I'm also quite into sports. I'm a cyclist and I go running as well. Just today I did a 5k because I had a very strong coffee and had to burn off all the excess energy. My friends found it quite weird when I told them about it. Right, and I'm also a musician. I sing a little, I play the keyboard, I play the guitar as well. So for today's discussion over here I will be talking mainly about the blog post that I wrote about ExUnit. Obviously it's nothing very complex, it's just some minor details that most people don't really know about, because I was in that situation myself and I used to have really funny workarounds which wasted a lot of my time. And then we will probably segue into general software engineering and how things really work, all of that. So yeah, that's it from me for now.
Sascha Wolf:
Hey, there you are, folks. I'm actually curious, Kevin, how did you end up writing a blog post for Elixir School? Did they like get in touch? Right? Like what, what is the story there? How did you end up writing this blog post?
Kevin:
Uh, so it's actually nothing very fancy. So yeah, the thing is, this February, this year, I was just very interested in getting into open source development. I haven't done a lot of it, actually, and just wanted to give it a shot. And I was happy that I could do it in Elixir because, not just to please the people that are on this panel, but I love this language quite a lot, and it has given me the job that I have now, and I really love my job as well. So it was very emotional for me when I started with open source development. Coming to Elixir School, how I got to it: I wanted to find a way to give back to that resource from where I learned and a lot of other people also learned from. And the best way for me to do that, at that point, was to write a blog post. I can't write a lesson because... anything you can think of is covered in it. There's nothing new
Sascha Wolf:
Mm-hmm.
Kevin:
or nothing more to write. So a blog post: whatever you found interesting, you just write up. And then I scrounged around a little: there's nothing about tests. And the way to contribute to Elixir School blogs is that you just go to their GitHub repo, just fork it, write a blog post, and then open a PR, that's it. And then somebody will merge it and it's live. That's all there is to it.
Sascha Wolf:
Nice. I actually didn't know that you could just do it through GitHub. I mean, I probably could have just read it on their website, right? But it is news to me. So, for whoever hears this and is like, wait, I can do that? You know,
Kevin:
Yeah.
Sascha Wolf:
Elixir School folks, if you're surprised by the amount of blog posts coming in, we're at fault. I'm sorry.
Kevin:
Yeah, and actually, again, I kind of felt that most people might not know about this, because blog posts over there are made quite infrequently. I think the last one that I saw was in 2021 or 2020 probably,
Sascha Wolf:
Okay
Kevin:
yeah, and mine is the latest one in 2023.
Sascha Wolf:
Okay.
Kevin:
So probably not a lot of people know about it.
Sascha Wolf:
Yeah, I think that's potentially also a good pick for later, right?
Kevin:
Yeah, so two new things we learned today.
Sascha Wolf:
Yeah, indeed. So, I mean, like you said, you scrounged around the blog already a little bit about, okay, what kind of topics are already covered, not a lot about testing. I think in your blog post, you also go into a specific thing on how to run only specific tests. How was this useful for you? And was this something you already knew before? Was this something you kind of discovered by accident while writing the blog post, or was this also the same thing where you solved a specific problem at the job? Like, what's the story there?
Kevin:
Okay, so the story over here is, it all started with my job. I graduated just a little over two years ago and this is my first job. And the requirement for it was Elixir, I had to learn it. And the thing over there was that my tech lead is very particular about writing tests. Like, if you write a function, you write a test. If you don't write a test, don't bother using it.
Sascha Wolf:
I know somebody else, which is also very particular about writing tests. I forgot his name, I don't know. Adi, can you refresh my memory?
Kevin:
Yeah. So he's very particular about that. And I was quite happy that there's someone like that over there, because although I did projects while I was in college, while I was an undergrad, I just made MVPs. I didn't know anything about software engineering practices. I never properly wrote a test in my life. And this was the first time I was introduced to tests. Since I'm new, I'm also introduced to a huge code base, and that has its own tests for controllers, for context functions, for DB models, everything. And if I want to run a very small test on a very small feature that I worked on, the first option is I just run mix test and all the tests will run. The way we have it is we have umbrella apps, so seven test suites for seven different apps altogether, which is quite inefficient. The other thing I could do is mix test and the path to the test file. So yeah, that just runs the test cases in those files. But again, what if that file has 400 tests? I can't just run 400 tests every time. Then the next idea was, you know what I'll do, I'll just comment out all the tests that I don't need. So I comment out all the tests that I don't need, and then obviously you run into errors that the do block is not closed, there's an extra end keyword. So I often used to run into such situations. But then I came across this co-worker of mine who, just by mistake, wrote this @tag with a key-value pair and pushed it in his PR, and I was just wondering what it is. So I asked him about it, and he said, yeah, well, you can just tag tests like this and then run that very specific test. You don't need to comment out anything. You don't need to put in the whole file path. My mind was instantly blown. You just write one statement and then ExUnit knows what to do, right? And the beauty of that was that you can actually group them together. So if you want to run two different tests, you don't need to run two commands. You can just put the same tag on them and those two tests are run. That's it. So it was very impressive for me, and the reason I didn't know about it is that while learning Elixir or ExUnit or anything, I never came across it, nor did a lot of people. I went through a lot of learning resources, YouTube videos, but yeah, nobody really spoke about it. And Elixir School also didn't have a blog post focusing specifically on tests. So I thought, yeah, why not just write about it, let a lot more people know about this, and save them the time that I wasted. So that is the story.
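For readers following along, a minimal sketch of the tagging workflow Kevin describes, with a made-up module for illustration:

```elixir
defmodule MyApp.LoyaltyTest do
  use ExUnit.Case, async: true

  # Tag the tests you are currently working on.
  @tag :wip
  test "awards points for a purchase" do
    assert MyApp.Loyalty.points_for(100) == 10
  end

  # The same tag can group tests, even across files, so one command runs them all.
  @tag :wip
  test "awards no points for a refund" do
    assert MyApp.Loyalty.points_for(-50) == 0
  end

  test "an unrelated test that we don't need to run right now" do
    assert 1 + 1 == 2
  end
end
```

Running `mix test --only wip` then executes just the tagged tests, and `mix test --exclude wip` does the opposite.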
Sascha Wolf:
Yeah, that makes a lot of sense to me, but I'm curious, Adi, is @tag something you also tend to employ in your testing practice? Because honestly, you're the testing guru of the podcast, not gonna lie.
Adi:
Well, I use tags for a lot of things. I think grouping tests is something I use sometimes. But I do it for making my setup blocks better, like setting what they call a context,
Sascha Wolf:
Mm-hmm.
Adi:
right? I use tags for those quite a bit. Yeah, I think running tests, I very rarely run tests by typing a command. Generally, in my environment, I have a script set up that dispatches the tests that I want to run, or a watcher that keeps rerunning based on what files I save. It keeps running tests,
Sascha Wolf:
Yeah, yeah.
Adi:
right? Because I do TDD, I have tests written already. So generally, that is my approach. But I mean, it's very interesting to hear Kevin say that, yeah, a lot of resources, books, and blog posts, and maybe YouTube videos don't cover a lot of these things. But I think that's where Elixir is a little different from other languages: the best resource is Hexdocs, right? And if you go through the mix test docs, I've not checked its Hexdocs in a while, but I'm willing to bet anything that this will be there, you know? I think that's also something we as an Elixir community should promote more: hey, you know, if you code 100 hours, maybe spend a couple of hours reading Hexdocs and stuff like that. I know one thing I read still very recently, I want to say last year, is that you could run two different line numbers by just adding another colon. So mix
Sascha Wolf:
Mm-hmm,
Adi:
test,
Sascha Wolf:
yeah, you can do
Adi:
test
Sascha Wolf:
that.
Adi:
file name, colon 10, and then colon 20. It runs those two.
Kevin:
Whoa.
Adi:
I did not know that until I read the Hexdocs last year or something. I think it was a 1.6 or something feature. But yeah, it's stuff like that. That's why it's good to keep checking docs. And that's a great thing with Elixir, that docs are a huge part of the ecosystem. And that's something José has kept since day one of Elixir. So yeah.
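For reference, the syntax Adi is describing looks like `mix test test/my_app/loyalty_test.exs:10:27` (the path here is just an example): it runs only the tests defined at lines 10 and 27 of that file.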
Kevin:
Yeah, absolutely. So yeah, Hexdocs is a very great resource that way. After learning about @tag, that's when I went back to it. How did I even miss this? Yeah. So it was quite
Sascha Wolf:
Yeah.
Kevin:
interesting that way.
Sascha Wolf:
I think, I mean, you're pointing something interesting out there in that, um, I feel, especially when you've been in the industry for a while and not necessarily that much involved in Elixir yet, that you kind of expect the default documentation to suck. So
Kevin:
Oh no.
Sascha Wolf:
I could imagine a fair share of people out there maybe already being somewhat familiar with Elixir, but never really having dug into Hexdocs that much, because, well, it's the official documentation, what are you supposed to expect, right? But just to plus-one that, honestly, the documentation mindset of the Elixir community at large is amazing, top notch. So if you haven't, do it.
Adi:
Yeah, I think one more thing to also add to this is, I think not just Elixir, I think in the last, I want to say, 10 years or so, there has been, I don't know, kind of a targeted way of putting pressure on each programming language community to focus on documentation. Look at all the new languages
Sascha Wolf:
Hmm.
Adi:
coming out, Rust, Elm, oh my God, the documentation was ridiculously good, right?
Sascha Wolf:
Mm-hmm.
Adi:
Python has increased it so much, they've created new ways of doing documentation. Even Haskell has good documentation.
Sascha Wolf:
I don't believe you.
Kevin:
people.
Adi:
But seriously, I think Elixir is part of that new generation of languages that is embracing the fact that not having documentation is a type of tech debt that everyone
Sascha Wolf:
Yeah, agreed.
Adi:
incurs. So yeah. Not just Elixir; when learning any of these new modern languages, check out the documentation. Rust has amazing documentation, for example.
Sascha Wolf:
Yeah, I think you're spot on there. I think that makes a lot of sense. I also think it's interesting what you said earlier, Kevin, about using the tags in tests to run only specific subsets of tests, because my workflow is super different from yours. And that is fine. We both work. For a while, I was a very avid Spacemacs user, so Emacs with a specific pre-configuration around the space bar and kind of using the Vim key bindings. As far as I know, the project has kind of ground to a halt and also, I mean, it's Emacs, it's Elisp, and updating dependencies really becomes a pain after a while. So after using it for, I think it was two years, I migrated to Visual Studio Code just because it was more convenient. But there is one thing I really, really, really miss about Spacemacs, and that is that you had super convenient ways to switch between the implementation file and the test file. There were just shortcuts for that. And it just worked. And you could then, in the test file, say, hey, this test under my cursor, just run this. Then it would do that
Kevin:
Well,
Sascha Wolf:
automatically, in a little buffer on the side. And if you went back to the implementation thing, you could then use a shortcut to say, hey, the last set of tests I ran, run those again, please. And that was such a nice workflow, not gonna lie. I never was able to quite reproduce it in Visual Studio Code. Probably you can, but, well, there it worked out of the box, you know? So like,
Kevin:
Right,
Sascha Wolf:
yeah.
Kevin:
yeah.
Sascha Wolf:
And that is also a super, perfectly fine way of doing things, right? My test case files tend to be relatively small. So, I mean, you said earlier, 400 test cases in a file, and I was like, oh boy, I would break that up, right? But I mean, sometimes you work on legacy projects where these things happened over time, and there's really nobody to blame, and blaming doesn't really make it better, right? So at that point in time, having something like tags to reach for, just to be able to say, hey, I want to run a specific subset of tests: yeah, it's good that we have all these tools.
Kevin:
Absolutely. And I like how the emphasis is on the docs, especially in the Elixir community, because I do this quite often: I go into Hexdocs just to see how a certain function is implemented. For example, we have native functions within Elixir, say Enum.map.
Sascha Wolf:
Hmm.
Kevin:
And I want to know how the map function is implemented. I just need to go to Hexdocs, find the section where map is explained, and there's a little icon over there with the code brackets, and you just click it and it takes you to the GitHub repo. It's fantastic. And just imagine doing this in any other language. I come mainly from C, C++, JavaScript, Python; there's nothing of that sort. And the thing about the other languages is that everything is so customizable that every person, every programmer has their own way of doing things, especially when it comes to JavaScript. So even if you wanted to write a test, you go online, you try to learn it, and you're just learning one person's interpretation of things. That is what I don't like, instead of having something standardized. Imagine you're working at one company or on one project, and you move on to something else: they use a completely different way of writing things, structuring things, testing things, all of that. So in Elixir, that is what really helped me, especially with the open source development. I didn't waste time figuring out how it is structured or why it's structured this way. I just knew that this folder is going to be there, and in this place, this test will be there. I know it for sure.
Sascha Wolf:
Yeah. Yeah. I think there's a little bit of a tendency for some engineers in the Elixir space to put test files next to the actual code file, right? Like not having a separate test folder, but saying, okay, I actually put my test files directly into the same folder as the code that gets compiled. I've seen people do that. And I think there's an argument to be made for it. Um, basically you don't have to... Adi goes like, what?
Adi:
I've never seen that in Elixir, I've seen people
Sascha Wolf:
I've
Adi:
do
Sascha Wolf:
seen
Adi:
that in
Sascha Wolf:
it.
Adi:
Rust and other languages,
Sascha Wolf:
Yeah.
Adi:
I've never seen that in Elixir.
Sascha Wolf:
I've seen it, especially from people who have a Rust background. And honestly, I can see the value of it, because, I mean, why are we putting things in the test folder? Give me a good reason for it, right?
Adi:
Dependencies, not needing some, I don't know, I mean, yeah, different dependencies, right?
Sascha Wolf:
But the compiler doesn't care about the .exs files, which are the tests, right? It just ignores them. But yeah.
Adi:
Oh, you mean not in the same file.
Sascha Wolf:
Not in the same file,
Adi:
Same, same
Sascha Wolf:
having a folder.
Adi:
parent.
Sascha Wolf:
Yeah. So you might have your lib/whatever/controller.ex, and then you have a controller_test.exs directly next to it. And I think there's an argument to be made for having a structure like that. But that's beside the point. What I wanted to get to is: except for some of these little opinionated differences, when you come to and approach an Elixir project, things are where you expect them to be.
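For anyone curious what that co-located layout takes, a rough sketch (the project name is hypothetical); mix test only picks up .exs files matching the test pattern, and .exs files aren't compiled into the app, so pointing the test paths at lib/ is enough:

```elixir
# mix.exs
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      elixir: "~> 1.14",
      # Default is ["test"]; adding "lib" lets *_test.exs files live
      # right next to the modules they cover.
      test_paths: ["test", "lib"],
      test_pattern: "*_test.exs",
      deps: []
    ]
  end
end

# Resulting layout:
#   lib/my_app_web/controllers/page_controller.ex
#   lib/my_app_web/controllers/page_controller_test.exs
```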
Kevin:
Thank you.
Sascha Wolf:
There are exceptions to that rule, but in general, you kind of know where to look. I think that's also a big strength in general of how OTP helps us structure applications, right? Like, if you really want to understand what this thing even does, well, a good starting point is the application, right? What does this thing start? Which other dependencies get started? And then you can kind of work your way down from there. So having that structure, and also having this opinionation in the default tools, helps a lot in not having to relearn the specific flavor of the language for every single project you come across. In general, from my experience, and I've seen a fair bunch of Elixir projects at this point, there are rarely any really weird surprises. Sometimes people go overboard with macros and then you're like, what the heck is happening here? But apart from that, things are where you expect them. At least that has been my experience so far. I'm actually curious to ask this of Allen, because Allen, you're also doing a lot more freelance-y work, so I could imagine you're working on existing code bases. Has this been your experience so far as well?
Allen:
You mean the test files next to those? No,
Sascha Wolf:
I mean,
Allen:
never.
Sascha Wolf:
things being where you expect them to be.
Allen:
Actually, yes, I would say so, to a certain extent. I mean, I kind of know where to start, because you always start from the application.ex file. And then from there, you can kind of figure out which things are running. And then if it's a Phoenix one, then you know which routes to take a look at. So for the most part, things kind of make sense. But there are some times where you see something and go: why is that? What is that? And I think the most difficult part of any application is really the database structure. Because sometimes people have an idea and it doesn't really make sense, I think, when you think about it. Or they just kind of, I don't know... When you're in consulting, you tend to be quick. Like, let me just do this thing and let me just kind of hack it in there, because the clients yell at you or they don't want to pay enough for you to do a really solid solution. So you get this code that's not so nice. So, for instance, I had this recent thing where one of my guys, he's usually in Taiwan, I'm based out of Hong Kong, we have about the same time zone, but he's in Brazil with his family, visiting some family over there. And the issue we have is, he did this data structure and it didn't really make sense to us. So we had to wait till night or early in the morning to get ahold of him. And we even asked him, like, you know, what does this stuff mean, because we think this is what it means. He was like, actually, I forgot. So even for this one, we don't have the documentation saying, okay, this is the structure, when these fields are filled in, this is what it means, and when these ones are filled in, this is what it means. So maybe I'm kind of long-winded about this whole story. But in the end, the hardest parts I find are just the logic and the data structure, really the hardest parts to figure out. But finding where everything is, I think, is pretty straightforward, especially if you follow the naming structure, where it's the top level, so you call it the app, dot, folder name, dot, name of the file. Right? And then
Sascha Wolf:
And
Allen:
the
Sascha Wolf:
honestly,
Allen:
hard,
Sascha Wolf:
I-
Allen:
sorry, one more hard thing I also find is: how do I know whether I should make a new context or not? That's also another difficult part I find.
Sascha Wolf:
But I think that's fair. If the complexity of grokking a code base boils down to, well, what does the logic of that particular code base do, what is the logic here, what do things do, what does it do in general, and if that is the thing you need to grok, then in general you can say that's a good thing, because you don't have to first understand what the structure of the code base is. Maybe I can draw a parallel to something that's not necessarily specific to the DDD community, domain-driven design, but is often used by them: this comparison between essential complexity and what's usually called accidental complexity. I'm not sure if you've heard that before, but essential complexity is really just that: it's the complexity of the product, of the code base at hand. What really are the business rules, for example, right? The thing it really does, where it delivers value. And accidental complexity, well, accidentally kind of comes on top. There's always a certain level of accidental complexity. I mean, you need to deploy the thing; if it goes, for example, to a Kubernetes cluster, right, it needs to be packaged in a Docker container somehow and shipped as a running pod into a cluster, and that is a level of complexity that is not essential to the product, to the problem at hand, but it's still there. But accidental complexity can also come from a whole other slew of sources. For example, you look at the code base and you're like, I am not even sure where to start, right? That in itself is also accidental complexity. So yeah, I'm not sure I would 100% agree that the hard thing to work with is the database structure, because that sounds dangerously like database-driven development to me. But in general, I get what you're saying.
Adi:
And people do build their applications that way, though.
Sascha Wolf:
Yeah,
Adi:
And
Sascha Wolf:
they do.
Adi:
whether or not they should, right? And that's what Alan's point is. And I think that, I mean, in a production environment, once the database is there, a structure is there, it's very hard to change it. Because you
Sascha Wolf:
Yep
Adi:
already
Sascha Wolf:
yep yep.
Adi:
have real data to play with. Code you can still change and, whatever, build boundaries. But from a data perspective, it's very hard to change, and how a lot of people code is very data-driven: they think in terms of those entities. And that influences their entire code base for good. So I've experienced that too. Luckily, none of the places where I've experienced it have been crazy. But I could see a company building their application driven by how the database looks, not changing it for a few years, and it going totally, totally wrong. I can totally see that happening.
Kevin:
Oh wow, that is... I don't understand why anyone would do that, but yeah.
Sascha Wolf:
It usually just happens because nobody feels like they have ownership, or has time to do it properly, when you always have pressure on your neck. Well, yeah, then you add another column to that one database table that already has 100 columns. But
Kevin:
Ha ha
Sascha Wolf:
what is one more column going to hurt? Right. And I've
Adi:
Yep.
Sascha Wolf:
been in a place, one time, where we had a database with one table that had, I'm not even joking, 200 columns. And
Adi:
Yep.
Sascha Wolf:
Oh, I mean at that point migrations and changes become very hard to do.
Adi:
And that's a problem with calling it an MVC framework, especially the model getting tied to the data. You are assuming that your entire request-response cycle is tightly coupled with the model, which for a lot of people is the database. And I think that's why, when you think of a solution, you think in terms of columns, and you're like, oh, I need this flag throughout this little pipeline, I need to put that in the database. You don't. Just because you're playing with some data structure doesn't mean it needs to be in there. You don't create a new column unless the relationships dictate creating a new column, right? But it's quite interesting. And I think this kind of problem happens, I want to say, in the software development lifecycle, if you draw a bell curve or whatever, after the first quadrant, right, after the growing pains. This kind of problem happens only after that. And a lot of people using Elixir
Sascha Wolf:
Hmm
Adi:
right now are still new, in the MVP phase. I think they're yet to hit that. A lot of companies, at least, that I talked to. But yeah, it could totally be a real problem. 200 columns is crazy. I mean, yeah.
Sascha Wolf:
Yeah, it was a very big, very old application. I mean, that has been nearly 10 years ago, and they started that project, I'm not lying, on Java 1.0. So that's how old it was. But yeah, I'm actually... I wonder if there's a connection between where we started from, with tests, right, and this complexity growing over time. Usually when I see complex projects like that, those are often also projects that have very much a lack of tests. And I wonder if there is a connection there: if you actually take testing more seriously and have a higher level of unit and integration and potentially end-to-end tests, whether that level of complexity can even... I mean, it can still happen, but maybe there's already a tendency there to make things easier, to keep them simpler, because, well, then it's easier to test. I'm not sure.
Adi:
I think there is something there. Yeah.
Kevin:
But I'm not sure to what extent it will be. If you're building something really complex, then you would need to have tests. You don't want someone new to work on it and... have you seen the episode of Silicon Valley where they hire this guy to work on their cloud systems? And he ends up screwing up everything. The guy is called The Carver. So they hire a freelancer like this to work on their cloud systems. So anyway, yeah. To know that whatever mistakes this person made have been reverted and the system works properly as it did before, they had to just run the tests. So in the same way, when I was new and I was working on something, I wanted to know that I didn't mess up anything else. It could be possible that a context function is changed somewhere and there's no test for it, because we thought, hey, this is going to be simple. But something did change, in fact, and some other controller somewhere else has been affected because of it. We wouldn't know that the change has happened unless somebody finds out that there's a problem. So in order to avoid such things, when you're building something really big, something useful, something that is meaningful, it really makes sense to have all the tests in place that you need.
Adi:
Yeah, I totally agree with that. I think what Sascha was getting at was: say you're testing something, right? You work on a feature and you're testing something and you're like, how the heck am I even going to test it? That's probably also a symptom of, maybe, the implementation of your solution being way too complex. Is that what you were getting at, Sascha? That's how I interpreted it.
Sascha Wolf:
Yeah, basically. It's still very vague in my head, but when I think about some of these... I mean, one of these projects had the 200 columns; we also had some Java classes that grew over those years. I came in much later. There were some classes that were like 40,000 lines, right? Those things existed. And at that point, if you don't already have tests, and I'm not sure how the testing situation was at the beginning, I think it was a lot better later on, like, how the frick do you test something like that? Right? So
Kevin:
Yeah,
Sascha Wolf:
I'm wondering
Kevin:
then that's it.
Sascha Wolf:
if you have a healthy test practice, right, like you have a healthy test setup where you regularly write tests, maybe even TDD, whether there is already in itself kind of a counterweight towards this level of complexity, towards having these super big files of code, large database tables, lots of duplication potentially. Because to keep it easy to test, to keep this practice up, you kind of have to break it down further. You know what I'm getting at? Every time I had to work with legacy code with a certain level of complexity, and I mean, this is an extreme example I just mentioned, but still, I would presume all of us had at some point to touch legacy code that was untested, I always, always, always start with writing tests. And then I try to break it down. Right? So yeah, I don't know, it's tests as a safety net. That is something I realized very painfully throughout my career: it's really not only about writing the test in the moment, right now, to know that the thing I'm building right now works as I expect it to. That is a very nice bonus point. But honestly, the biggest value of tests is that safety net for doing changes later on, for example refactoring. There's a reason why TDD is the cycle of red, green, refactor, and the refactoring step is non-optional. So I'm very much a big TDD proponent. I really try to do a red test, and then I do the simplest possible implementation that could satisfy this red test. And that kind of takes this mindset of: if I was super lazy, what is the laziest way to solve this? Right? Often that means returning hard-coded stuff. And then you basically have to trick yourself into, okay, how do I need to write the tests now to actually force myself to write the real deal, so I can no longer trick the tests, so to speak? You can make a little game out of that. And then every time after that you refactor, refactor, refactor, and you end up with code that is well tested, that does the thing you want it to do, and you have a very high level of confidence that all cases are covered, because you did this little game with yourself of: how can I trick the tests into not really doing the real deal? And honestly, it can be fun. But if you have this super big chunk of complex legacy code that has a lot of internal state potentially, right, a lot of branches, then, well, it's hard to write tests for that. And it's hard to see all the potential states this thing can be in. So if I were under time pressure and somebody said, hey, we have to finish this feature, like, tomorrow, and if you don't, we are in big trouble, then yeah, I would just add another if-else thing in there and call it a day, right? Because I wouldn't trust myself to understand this in the short timeframe and then to change and simplify it without having a safety net such as tests.
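To make that red-green-refactor "laziest implementation first" game concrete, a small sketch with a made-up module:

```elixir
# Step 1 (red): write the test first.
defmodule MyApp.DiscountTest do
  use ExUnit.Case, async: true

  test "orders above 100 get a 10% discount" do
    assert MyApp.Discount.for_total(150) == 15.0
  end

  # Step 3: this second test is what forces you past the lazy hard-coded answer.
  test "orders at or below 100 get no discount" do
    assert MyApp.Discount.for_total(80) == 0.0
  end
end

# Step 2 (green, the laziest version): `def for_total(_total), do: 15.0`
# Step 4 (green again, then refactor): the second test forces the real logic.
defmodule MyApp.Discount do
  def for_total(total) when total > 100, do: total * 0.1
  def for_total(_total), do: 0.0
end
```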
Kevin:
Yeah, absolutely. So that's what I was getting at mainly, about the safety net being there, just so that you know that you haven't messed up anywhere. And
Adi:
Yep.
Kevin:
then especially for open source projects, right? Say you forked and cloned the Phoenix project, you wrote something, and you have a typo somewhere, and then without the tests you don't know where, or whether anything changed. Yeah, so it's quite a good safety net that way, and basically a way to understand: have I done the right thing? Is my code good enough? And all of those things.
Sascha Wolf:
You want to say something, Adi?
Adi:
I was saying, I think that's why I think, again, my favorite thing to say, the 100% code coverage is also
Sascha Wolf:
Hello?
Adi:
so important. I mean, I've actually been trying to go a step further, with 100% real coverage, in my recent projects. And it makes a huge difference. Again, if you do it early on in a project, I don't think it's that much work. But maybe I'm just used to it; that's why I don't think it's a lot of work. As Kevin and Sascha pointed out, it gives you a safety net. It really becomes so much more useful when the project becomes so big that you cannot comprehend its entire domain in one sitting, like the combination of everything that can go wrong. It's going to get that big at some point if you're working for a moderately successful company, or building a moderately successful product. So that's where the 100% code coverage stuff really gets awesome. And yeah, I'm not going to miss out on an opportunity to plug 100% code coverage.
Sascha Wolf:
You kind of made me come around on this. The first time you proposed it, I was like, I'm not sure, but I think there's an argument to be made. I've been incorporating it in my own work and also in my own
Adi:
That's awesome.
Sascha Wolf:
discussions with colleagues. Honestly, because, I mean, usually when you talk about this, people say, yeah, we do 80% or 70%. And then I'm like, okay, and what about the other 20, 30%? They're just, I don't know, they're just the I-don't-care percentage. What is
Adi:
Exactly.
Sascha Wolf:
that, right?
Adi:
Here's also what I say. Say something is generated. You don't want to test it. Explicitly ignore it. Still
Sascha Wolf:
Yeah, exactly.
Adi:
keep it at 100%. Because if you put it at 80%, and you test only 8 out of 10 new lines, your CI will still pass. You want to test 10 out of 10 new lines. You need to explicitly ignore what you're not testing and have 100% code coverage on everything else. So anyway. I've said it so many times, I think the listeners might be bored of it.
Sascha Wolf:
Yeah, but how do you ignore it? Do you do it in the code, like where the actual code is?
Adi:
Yeah, so there's ignoring an entire file in the coveralls config, for example, but I rarely do that. But yeah, put it in the code. Yes, magic comments don't look good, but in my opinion, it's better than ignoring a whole file, or implicitly ignoring something.
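For context, this is roughly what the explicit-ignore approach looks like with ExCoveralls (the option names are from its docs, but double-check them against the version you use):

```elixir
# coveralls.json in the project root: skip generated files entirely and
# enforce the 100% gate on everything else.
#
#   {
#     "skip_files": ["lib/my_app/generated/.*"],
#     "coverage_options": {"minimum_coverage": 100}
#   }

defmodule MyApp.Debug do
  # The magic comments mark a block you've consciously decided not to cover;
  # the decision is visible right where the code lives.

  # coveralls-ignore-start
  def dump(state), do: IO.inspect(state, label: "debug state")
  # coveralls-ignore-stop

  def covered_helper(x), do: x + 1
end
```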
Sascha Wolf:
I think there's also value in that to be had, right? Like if you come to a project with this mindset and then you see in a specific code file, you see, okay, this thing is ignored. You know, okay, if I change that, right? Like this is not going to be caught by a test. So you see that immediately being
Adi:
It's
Sascha Wolf:
pointed
Adi:
documented.
Sascha Wolf:
out.
Adi:
Yeah,
Sascha Wolf:
It's
Adi:
exactly.
Sascha Wolf:
documented, yeah.
Adi:
Yeah.
Sascha Wolf:
So there's even value in that. Like magic comments, yeah, they are not pretty, but at that point, you can see, well, that thing over there, if I change it, things might break and the tests might still be green.
Adi:
Right? And I would even take it a step further. In the companies where I get to dictate these things, the startups where I've been a founding engineer or an early member and I've built the projects, I usually put the magic comments with a due date. And I built a small tool that breaks when the due date has passed. You can run it periodically, whatever, however you want to run it. If you don't put a due date, it breaks. If you put a due date and it's past that, it breaks. It's about explicitly thinking, okay, this code is not being tested right now, but that doesn't mean we're not going to think about it any time in the near future. So I take it a little further than maybe I need to. I get that. It's an obsession. But I think
Sascha Wolf:
Oh, nice
Adi:
100%
Sascha Wolf:
value.
Adi:
is still, I think, a reasonable thing to say. And you don't have to go to the place where you do due dates. I get that. That's a little, that's a little,
Sascha Wolf:
Yeah,
Adi:
so that's a little crazy.
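A hypothetical sketch of the kind of due-date check Adi describes; this is not his actual tool, and the comment format and task name are invented for illustration:

```elixir
# Scans for ignore comments annotated like:
#   # coveralls-ignore-start until: 2023-09-30
# and fails when the date is missing or already in the past.
defmodule Mix.Tasks.Coverage.CheckIgnores do
  use Mix.Task

  def run(_args) do
    pattern = ~r/coveralls-ignore-start(?:\s+until:\s*(\d{4}-\d{2}-\d{2}))?/
    today = Date.utc_today()

    problems =
      Path.wildcard("lib/**/*.ex")
      |> Enum.flat_map(fn file ->
        for captures <- Regex.scan(pattern, File.read!(file)),
            date = Enum.at(captures, 1, ""),
            date == "" or Date.compare(Date.from_iso8601!(date), today) == :lt,
            do: {file, date}
      end)

    if problems != [] do
      Enum.each(problems, fn
        {file, ""} -> Mix.shell().error("#{file}: coverage ignore without a due date")
        {file, date} -> Mix.shell().error("#{file}: coverage ignore overdue since #{date}")
      end)

      Mix.raise("Coverage ignores need attention")
    end
  end
end
```

Run as its own CI step (mix coverage.check_ignores), it doesn't have to break the main test pipeline, which is roughly what Sascha suggests next.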
Sascha Wolf:
but even then, crazy is always so judgmental, but even then, you can do it in different ways. Like you're going to have a specific CI check that breaks, right? Like that
Adi:
Exactly.
Sascha Wolf:
doesn't break your main pipeline. So there are ways to go about this. I'm actually curious now, Kevin: I mean, you have not been that long in the industry, right? What do you think when you hear Adi, our test guru, talk about this? I'm curious, because, I mean, for Allen, Adi and me, this is a recurring theme, honestly. So give me your perspective: what do you think about test coverage and writing tests in that kind of fashion?
Kevin:
So test coverage is again related to what we spoke about earlier with safety nets. Yeah, you are basically testing everything that you did and making sure it works as expected, all of that. But in my case, I've never really found it to be an important metric, so to say, that you hit 100% coverage, because the philosophy is that whatever you write, make sure that it is testable. And there are certain things that we don't really need to put in the tests. For example, if you're trying to connect to a message queue, in the tests you generally don't really need to do it. And even, I think, Broadway, Broadway also has a certain way of testing without actually starting an instance and creating the supervision tree and all those things. So yeah, the things that are not that likely to break, so to say, things like connecting to the message queue and all of that, DB models, maybe, where things don't change often... Sorry about the noise over there. Yeah, so things that don't necessarily change often, we don't really need to test them. And if something does break, we'll find out really quickly. If you do change something, then you obviously need to test it locally as well and see if it actually works, and all of those things. So as a metric, we never really considered 100% code coverage as highly compulsory, just that whatever you do write, make sure it is testable. So I think I'm probably saying something very contradictory, I don't know, from your perspective.
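For the Broadway case specifically, a rough sketch of its test helpers; the pipeline module is hypothetical, and the assumption is that its producer is configured as Broadway.DummyProducer in the test environment, so no real queue connection is made (check the Broadway docs for the exact setup):

```elixir
defmodule MyApp.PipelineTest do
  use ExUnit.Case, async: true

  test "messages flow through the pipeline and are acked" do
    # Starts the pipeline under the test supervisor, using the dummy producer.
    start_supervised!(MyApp.Pipeline)

    # Pushes one message through and returns a ref we can match on.
    ref = Broadway.test_message(MyApp.Pipeline, %{"event" => "purchase"})

    # Broadway reports back which messages succeeded and which failed.
    assert_receive {:ack, ^ref, [%Broadway.Message{}], []}
  end
end
```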
Sascha Wolf:
I think you're saying something which is very much what you usually hear about the topic. I mean, Adi's position is very extreme in that sense, but there's merit to it. That is why I was curious.
Adi:
So the extreme version is not 100%. I'm saying explicitly ignore it. The real coverage
Sascha Wolf:
Yeah,
Kevin:
Yeah.
Sascha Wolf:
that is
Adi:
is
Sascha Wolf:
the
Adi:
not 100%,
Sascha Wolf:
thing.
Adi:
right?
Kevin:
Yeah.
Adi:
Instead of saying some things are not important in your mind and letting it be subjective,
Sascha Wolf:
That is the thing, yeah.
Adi:
explicitly ignore it and test 100% of the rest of the things, right? But the reason why, again, this has come from experience, working especially in systems that are very microservice-y and distributed: there's no way for you to test certain things locally. And yes, certain things, like I said, like subscribing to a message queue and stuff, you don't have to test as part of CI, right? But there might be other aspects of the distributed things you might want to test. I don't know, like if you have a GraphQL API, maybe your schema file needs to be kept up to date with the expected schema file, right? Anyway, I feel like it's important to be explicit about what you're not testing. And I get that, for whatever reason, that feels extreme. In my mind, that's not extreme.
Sascha Wolf:
I-
Adi:
Extreme is that I now go for 100% real code coverage, including the ones that you talked about. But that's, again, a different story.
Sascha Wolf:
Yeah, but I think there's an interesting little overlap between what you were saying, Adi, and also what Kevin has been saying. Because Kevin, you've been saying, and let me just paraphrase here, please correct me if I'm wrong, that you only test what needs to be tested, and we trust you to figure that out. But that is it. This is in your head, because it's
Kevin:
Mm-mm.
Sascha Wolf:
an implicit thing you think. And somebody else out there might think, well, obviously that thing over there is tested and the other is not. There might be a mismatch. And that mismatch is an implicit one, it's not visible. And in all my years working as a software engineer, that's where shit breaks. If one person thinks A and the other person thinks B, that's where things break. And that is why I'm very much a fan of making the implicit explicit. There's a level where you have to find a balance. And there's a very great article, I mentioned it a few times throughout the podcast already, I think it's from one of the core Rust contributors, where they write about code having this implicit footprint, I forgot what they called it. Basically the idea of how much context you need to hold in your head to understand a certain piece of code. They had a name for that, I forgot. And there is a sweet spot you want to hit, because if you write out everything explicitly, there's a whole lot of boilerplate. I mean, look at earlier Java versions where it was getters and setters, everything written out explicitly, blah, blah, blah, and nobody read that. And that's also where sometimes bugs were hiding, because, well, most of the time they were auto-generated, and sometimes you maybe wrote them and changed them, and then there was a subtle bug in there. But nobody read that code, because, well, it's getters and setters, right? So there might be bugs hiding in there. So that level of explicitness is not great. At the same time, a super low level of explicitness, so a lot of things being implicit, can also be hard to grok. And honestly, that has been my experience with a lot of very poorly aged Ruby on Rails monoliths, where there were certain pieces of code that were doing things through convention and configuration, some things turned on, some things turned off, and you were looking at that code and you were like: why does it do this? Where is the code doing this? I don't understand. And there is a level of mental footprint where you need to know so much about a system to even be able to read a piece of code. And why am I saying all of this? Just to come back around, right? To say, okay, I have a specific piece of code and I choose not to write tests for it. At that moment, and honestly that's no critique of you specifically, or anybody else, but that is a very common way to approach this: it's obvious, right? It's obvious which things are supposed to be tested and which are not. Well, often yes, but sometimes no. And then making that explicit, well, why not? Why the heck would you not? So this
Kevin:
I
Sascha Wolf:
is,
Kevin:
think
Sascha Wolf:
I
Kevin:
we
Sascha Wolf:
think,
Kevin:
just.
Sascha Wolf:
where Adi is coming from, right? Just to make... Yeah.
Kevin:
Yeah, I mean, yeah, true. I think we just need to find a balance. And if you're working in a team, you just come to a general consensus about what is important, what we should actually test, and what is okay not to be tested, as long as it doesn't break. So the implicit-explicit thing, I totally get where you're coming from. As long as everyone agrees on what we do, then it's fine.
Adi:
Yeah.
Kevin:
with you.
Adi:
Yeah, my current team doesn't have 100% code coverage because, I mean, I'm probably the most senior in the team, but we don't have a team leader or someone who can just make that decision. If it was in my hands, I would do it. But you're right, consensus is important, right? I think everyone approaches engineering, software development, differently. And I think that's also something to keep in mind. What I feel comfortable with is different from what others feel comfortable with, even though they're wrong.
Kevin:
Oh
Sascha Wolf:
Like,
Kevin:
yeah,
Sascha Wolf:
objectively
Kevin:
obviously.
Sascha Wolf:
wrong, right? Can I take this snippet and share it with your colleagues, Adi?
Kevin:
Hahaha
Adi:
They listen to the podcast, so yeah.
Sascha Wolf:
I'd say that's fair. By the way, I found the article again. It's the Rust language ergonomics initiative post from March 2nd, 2017. And the term they're coining there is the reasoning footprint. The reasoning footprint is the amount of knowledge you need to hold in your head to understand a certain piece of code. And that's just such a beautiful way to look at it, honestly. And in that article, they make the argument about implicit and explicit, and that you want to strike this balance. And especially for somebody who is maybe new to a code base: well, obviously they don't have that much information in their head yet, so you want to lean more towards the explicit side, while for somebody who has been working with a code base for five years, everything is obvious to them, right? That doesn't mean it's obvious to a newcomer. So yeah, I like to keep this idea in my head, it's not really a metric, that's maybe saying too much, but this idea of: how much do I need to know about this code base before this piece of code makes sense? If you can keep that level low without being too boilerplate-y, then yeah, go for it.
Allen:
What do you guys feel about specifically changing your code just so you can test it?
Sascha Wolf:
I mean, there are certain scenarios in which you have to do that, especially when it comes to datetime shenanigans, right? Where you want, okay, this thing should do something with the datetime, and by default it maybe uses now, and then you kind of have to inject the now. There's this, I feel, very small tipping point where it can either be perfectly fine, right, like in the case of DateTime now, well, that's just a necessary evil, so to speak. But sometimes it can also be a smell. Sometimes it can be a smell that there are multiple concerns mixed into the same piece of code, and you're potentially better off separating those. But this is really hard to see sometimes.
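A minimal sketch of the injection Sascha describes, with a made-up expiry check; the current time is passed in so the test can pin "now":

```elixir
defmodule MyApp.Membership do
  # `now` defaults to the real clock but can be injected in tests.
  def expired?(%{expires_at: expires_at}, now \\ DateTime.utc_now()) do
    DateTime.compare(expires_at, now) == :lt
  end
end

defmodule MyApp.MembershipTest do
  use ExUnit.Case, async: true

  test "a membership past its expiry date is expired" do
    membership = %{expires_at: ~U[2023-01-01 00:00:00Z]}
    assert MyApp.Membership.expired?(membership, ~U[2023-06-01 00:00:00Z])
  end

  test "a membership before its expiry date is not expired" do
    membership = %{expires_at: ~U[2023-01-01 00:00:00Z]}
    refute MyApp.Membership.expired?(membership, ~U[2022-12-01 00:00:00Z])
  end
end
```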
Adi:
Yeah.
Allen:
Well, sorry, let me be a little more particular. Maybe let me say it like this. One of the trickiest bits to test, right? So obviously, what Sascha just mentioned is one of the classic problems, right? What day is today, and that's going to mess with your stuff, right? Now, what about this one: if you're doing something like Cypress where you're testing front-end interactions. Because right now I'm working with a team, and they always like to use this data-test attribute equals some value, and they match against that rather than looking for a class or something like that.
Sascha Wolf:
I think that is a smell. I mean, what I've been seeing people do there, and I think there's really an argument to be made for it, is: if you look at accessibility, for example, from day one for your product, and you say, okay, how do I make this page accessible to screen readers? Well, news flash, what screen readers are doing is very similar to what testing tools are doing. So if your page is very well accessible for a testing tool, then, spoiler, it's generally also going to be pretty accessible for somebody with a screen reader.
Adi:
I don't necessarily know if it's a code smell. So, again, feel free to correct me if you guys disagree or think I'm wrong. Why it would be a code smell in this case would be because you're making it less readable, or slower, or whatever you want to call it, in order to make it testable. I think readability might be the biggest concern. So if you're adding attributes to your HTML elements so they're easier to read by Cypress or whatever you're using, and that's decreasing the readability of the code, then it's a code smell. But if you do it in a way where, in the test environment, that component adds the test attribute, but not otherwise, you know, if a flag is turned on through application configuration, if you do it that way, I think you can push yourself toward the non-code-smell side of the spectrum a little bit. I try to keep these things at the configuration layer as much as I can. I know it gets harder the more specific you get. And try to keep the interface, which in this case is the component, identical to what it'd be in a non-test environment. But the implementation could have some configuration details: if it's in the test environment, add this attribute; if it's not in the test environment, don't, and stuff like that.
Sascha Wolf:
Mm-hmm.
Adi:
So there's
Sascha Wolf:
Mm-hmm.
Adi:
ways to make it not code smelly, but it is definitely pushing the boundaries a little bit. Yeah.
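As a rough sketch of the config-gated approach Adi describes (the flag name and helper module are made up):

```elixir
# In config/test.exs (and only there):
#   config :my_app, :render_test_attributes, true

defmodule MyAppWeb.TestAttributes do
  # Emits data-test attributes only when the flag is on, so production markup
  # stays clean while Cypress/Wallaby selectors still work in tests.
  def test_attrs(name) do
    if Application.get_env(:my_app, :render_test_attributes, false) do
      ["data-test": name]
    else
      []
    end
  end
end

# Usage in a HEEx template (illustrative):
#   <button {MyAppWeb.TestAttributes.test_attrs("submit-order")}>Submit</button>
```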
Sascha Wolf:
I think the litmus test, the thing you can use to determine whether this is a smell or not, is: what's the underlying motivation behind doing this? What was the motivation, and the way you came to that decision? Was it potentially an informed decision of saying, okay, we want to minimize impact here, we looked at other options, for example embracing accessibility, but this is an admin interface tool used in factories where we know there are not going to be any users who have accessibility needs, right? If all of that has been considered and you really end up with, okay, this particular solution is the best fit in our case: fine, not a smell. But often enough what I see, and I'm going to make an example from my work right now, is that you have a decision that might be fine, but it's been made out of the, quote, wrong motivations. For example, recently we were working on something new at my place, where we shipped an MVP to internal testers. And what happened is that for some part of the whole application, the team decided not to do tests, to kind of hit the deadline, right? And if you look at it from the lens of: okay, you know what, this is an MVP, we expect things to change anyway, right? The first time we hand that thing to people to do user testing, chances are whatever we write now is going to be changed three months down the road anyway. So having tests now for those particular UI bits, whatever. But that was not the motivation. The motivation was: oh, we have a deadline, no tests. And that's a smell. You can make the same decision, the outcome is the same, but depending on why you make it, it might be a smell or it might not be a smell.
Kevin:
Right. And yeah, as Adi already mentioned quite a bit, for me, at least for such kinds of things, I do use the configuration files quite a lot. Even when it comes to, say, mocking certain modules. For example, you want to call an external API, but when you're running the tests, you don't really want to call an external API, and that's when you use a different module, a mock. You put that mock in the configuration file, and in the main code, wherever it is, you access the module, or the mock, whichever it is, via the application environment. That really makes things a lot simpler. However, I ran the mix credo command once and it gave a lot of errors saying, this is not how you're supposed to do things, and this is very risky, and all of that. But when thinking specifically about writing tests with very few smells, so to say, I think that helps a lot. And in general, when you want to have different configurations depending on your environment or where you want to run things, it's really helpful to have different configuration files for it. So really more along the lines of separating such configuration details into the config files, depending on whatever environment you have.
Sascha Wolf:
Yeah, I think that kind of goes in the direction of what we said earlier, right? You want to cut down your problem into testable chunks at the end of the day. And one pattern you can use there, and it's basically dependency injection at that point, right, is: you have a thing which does something, and you don't want to do the real thing in your test, so you inject something that does something different. The pattern used in Elixir is a bit different from what you tend to do in other languages, although of course you can also pass in the module to be called for whatever you want to be doing. But honestly, I haven't seen that happen very often. Usually it happens through behaviours with mocks, and things being accessed from the application configuration. But it all boils down to, well, divide and conquer: you're figuring out, okay, those are the things I want to be tested in this test, and those are the things I don't want to be tested, and I'm making decisions on where to divide it.
Kevin:
Yeah, absolutely.
Sascha Wolf:
We've gone full circle!
Kevin:
And then a lot of the discussion also goes towards how you use mocks and then I read a great article about mocking as a noun as opposed to mocking as a verb and it
Sascha Wolf:
Mm.
Kevin:
was, incidentally, written by José himself, and it explains very, very clearly what the intent is and why you should follow a pattern like this. He talks mainly about mocking as a noun, and at work we also do it quite a lot. We encourage it so much, to the point that even in the take-home assignment for applicants, we actually write: mock as a noun, mock as a noun. So it's a very important deal for us there.
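For readers who haven't seen it, a minimal sketch of the "mock as a noun" pattern using the Mox library; all the module names here are invented for illustration:

```elixir
# 1. A behaviour describes the external dependency.
defmodule MyApp.PaymentsAPI do
  @callback charge(amount :: integer()) :: {:ok, map()} | {:error, term()}
end

# The real implementation (HTTP call elided) is used outside of tests.
defmodule MyApp.PaymentsAPI.HTTP do
  @behaviour MyApp.PaymentsAPI
  @impl true
  def charge(amount), do: {:ok, %{charged: amount}}
end

# 2. In test_helper.exs:  Mox.defmock(MyApp.PaymentsAPIMock, for: MyApp.PaymentsAPI)
# 3. In config/test.exs:  config :my_app, :payments_api, MyApp.PaymentsAPIMock

# 4. The calling code looks the module up from the application environment.
defmodule MyApp.Checkout do
  def pay(amount) do
    api = Application.get_env(:my_app, :payments_api, MyApp.PaymentsAPI.HTTP)
    api.charge(amount)
  end
end

# 5. Tests set expectations on the mock, not on the real API.
defmodule MyApp.CheckoutTest do
  use ExUnit.Case, async: true
  import Mox

  setup :verify_on_exit!

  test "charges the given amount" do
    expect(MyApp.PaymentsAPIMock, :charge, fn 100 -> {:ok, %{charged: 100}} end)
    assert {:ok, %{charged: 100}} = MyApp.Checkout.pay(100)
  end
end
```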
Sascha Wolf:
I mean, depending on how specifically you do this, there are also some opinionated libraries out there to make injecting mocks easier. I heard of this one library called Knigge. Sorry, this is a shameless self-plug. There's a whole bunch of different ways you can go about it, and honestly, that is a topic for another podcast, I would say.
Kevin:
Yeah, definitely, for sure.
Sascha Wolf:
So, Kevin, it was a pleasure having you on the show.
Kevin:
It was a pleasure being here as well. Thank you for having me.
Sascha Wolf:
If people want to get in touch with you and ask questions, how can they do that?
Kevin:
So, well, the primary way is everyone can email me. My email address is kevin am 9.9.work at gmail.com, so for these kinds of things, do use that email account. Otherwise, you can reach out to me on Twitter, it's never loquacious, because I don't talk a lot, and LinkedIn as well, just search for Kevin Matthew. Yeah, and you'll find me. And of course, my work email is kevin.matthew at qibe.com, Q-I-B-E. That's where, as I mentioned earlier, we run loyalty programs for brands on the blockchain. And if anyone is interested, do reach out to us and we can talk.
Sascha Wolf:
Nice. Are you hiring right now?
Kevin:
Uh, right now, unfortunately no. Maybe for other
Sascha Wolf:
Okay.
Kevin:
verticals, but not tech.
Sascha Wolf:
Okay. I was just wondering, making sure people who are interested know, right? Okay, then I'm going to transition to picks, and I want to get a Rust pick from you, Allen. I've not had a Rust pick in a very long time. Please don't disappoint me.
Allen:
Oh, well, a Rust one. I had a game picked out, but if you give me a moment, maybe I can find a Rust pick for you.
Sascha Wolf:
No, go ahead.
Allen:
Yeah, I was just going to say, this is a game I have not played yet, but I'm looking to play it. It's in early access: Starship Troopers: Extermination. Have you guys seen this one yet?
Sascha Wolf:
Hmm.
Allen:
Wow. Ever since the movie came out, it's like the perfect game to play, and now it's out in early access and it looks like a lot of fun. It's 16 players online at the same time, FPS. There are some videos online of people playing it, and it looks awesome. It's early access, but it seems really stable compared to the famous Gollum or those other ones that came out recently and were just big bombs. So I think people should check it out if it's on Steam. Yeah, that's my pick.
Sascha Wolf:
Nice. Adi, what are your picks for this week?
Adi:
Yeah, I don't have any video game picks, but I've got a Rust-ish pick. I checked out this new text editor, because I haven't really changed my NeoVim configuration since 2014.
Allen:
Is it Zed?
Adi:
No, it's Helix.
Allen:
Oh, Helix, yeah.
Adi:
Yeah, it is terminal-based. Again, those are the ones I like. It's written in Rust, and it's very much inspired by Kakoune and NeoVim. I don't think it's ready for me to replace my NeoVim with yet, but I've been playing with it. It's a lot faster. It supports Tree-sitter built in, so any language Tree-sitter has a grammar for gets syntax highlighting and things like that, plus the language server stuff as well. It's also GPU accelerated, which is done a lot better than in NeoVim, because Rust makes it a lot easier to build applications like that. So it's very snappy, very quick. But yeah, I don't think it's quite ready to replace my NeoVim configuration. Like Sascha was saying, I have a pretty complex set of configurations for my NeoVim, so I'm hoping one day I can switch to Helix. So that's the first pick. I've got three more. I'm going to do a self-promotion again; looks like calling out my book is working and a lot of you have pre-ordered it, so I'm going to do it again. My book is out, or, well, you can buy it. I think the hard copies ship mid-June, but the Kindle version can be bought right now. So yeah, buy it. It teaches you how to build a small, non-production version of Phoenix. The first part of the book has no metaprogramming; we build a web server and all that. In the next part, we wrap the whole thing in a metaprogramming interface. And every chapter ends with a testing section, so it teaches how to test every small
Sascha Wolf:
Obviously.
Adi:
part of that, and how to report code coverage, right? So yeah, check that out. Another kind of short pitch, self-promotion, whatever: I spend a good amount of money on Zapier every month, and I just hate the fact that, as an engineer who loves to code, I spend money on automation software, 200 bucks a month or something. So last weekend I was on vacation, and on my way back I was on a six-hour drive, my wife was driving, and I thought, I'll just build it. So, six hours, 100% code coverage, I built a small version of Zapier. If you guys want to try it, it's obviously free, I'm not going to take money for it. It has integrations with Google Calendar, Slack, PagerDuty, cool things. I might open source it when I feel it's ready. But if you want to try the application, I'll put a link here; I'll have to approve you as a user. So reach out to me if you're curious, if you want to replace your Zapier with an Elixir application, which will hopefully be open source soon. It's called Adapter Ninja. Again, I'll leave the link; I mean, Adapter Ninja is basically the link, but I'll still leave one in the description. And last, but definitely not least, ElixirConf: early bird tickets are available to buy. I will be there, maybe as a speaker, depending on when they schedule my talk, because I can only come for the first day. But yeah, if you guys want to buy an early bird ticket and hang out with me, reach out to me through my email. I'll be happy to talk to people about anything. Yeah.
Sascha Wolf:
Wow, that was an impressive series of picks, not gonna lie. I might want to talk with you about that self-built version of Zapier, because I've wanted to build not quite the same thing, but something similarly event-driven, actions reacting to what happens in a Discord community. So I'm
Adi:
Yeah.
Sascha Wolf:
curious how you went about doing that. Okay, Kevin, do you have any picks?
Kevin:
Well, it's not as well thought out as Allen's and Adi's picks. But building on top of that, as Allen said about the video game, it's quite interesting. However, I never played a lot of video games as a kid, so from a video games perspective I would very much recommend NFS Most Wanted 2005. I still get nostalgic about it. I can't play it right now, I have a Mac, so whenever I get a Windows machine I might try to get it again. Then the other thing, my favorite part of my coding environment, is the ChatGPT extension for VS Code. I'm not sure what it's fully called, but I think if you just search for ChatGPT you'll find it. What it does is it sits in the side navigation bar in VS Code; you click it and it opens a prompt where you put in your questions or whatever you want, and it gives you answers. You don't need to log into your account, you don't need an API key, any of that. You post your query and you just get answers. I've actually successfully integrated it into my daily workflow, mainly to debug something or to implement certain functions, and it just gives answers. Of course, it's not giving the correct answer 100% of the time; you still need to test everything to see if it's really doing the job. The other one is, I'm not sure if a lot of people use this, I had it by default: the Git integration in VS Code. Sad to say, I don't use the Git command line a lot, because the extension makes things a lot easier, especially when you want to add certain very specific files, you just click an icon and it does it. Whereas with the command line it gets quite long, and you need to remember a lot of commands if you want to do something very specific. And with respect to places, I'm traveling right now, so I'm speaking from Bucharest, Romania. Very beautiful place, nice food, nice people everywhere. I was in Italy last month. So, very good places in Europe to visit and work from, do visit any time you get the chance. That's it from me.
Sascha Wolf:
Thank you. I've never been to Romania, but I've always wanted to visit. I mean, I live in Europe, so there's honestly no excuse.
Kevin:
Romania is amazing. It's my first time here and it's nothing like what I expected. I was expecting something like what I've seen in the Captain America movies, but it's nothing like that. It's really beautiful and very well organized.
Sascha Wolf:
Nice. Then I'm going to round us up with some picks of my own. First of all, I want to specifically pick the blog post I mentioned earlier, that Rust blog post from March 2017 about language ergonomics. It's going to be in the show notes, and I suggest everyone check it out. It's a very short read, and like I said, there's this "reasoning footprint" idea in it that really got stuck in my brain, obviously, given the thing is six years old and I still remember it. Then I want to do a little engineering pick, because I never thought of it earlier when we were talking about testing workflows and the opportunity never came up, so now I'm doing it as a pick. It's a, I would presume, not-well-known flag for mix test, which is mix test --stale. mix test --stale only runs the tests that have been impacted since the last time you ran it. So basically it figures out which files changed since the previous time you compiled and ran it, and only the test files that touch anything that changed get run. I usually use it in a setup where I have a file watcher running locally, something that notices every time I change a file on disk, and then it just keeps running that over and over and over. So it automatically always runs the tests impacted by the change I literally just made. That's my workflow nowadays, and it's a little bit of a replacement for the Spacemacs setup I mentioned earlier. I've written some little command line tools of my own to make it a bit easier, but all in all you just need a file watcher: watch the files in your lib folder, then run mix test --stale, and it works surprisingly well and is surprisingly straightforward. And then last but not least, I also want to do a game pick, and that is a multiplayer game I've been enjoying very much recently. It's free to play and it's on pretty much all platforms, I think, even on mobile. It's called Omega Strikers, and it's a weird mix of hero-based gameplay, like League of Legends or Overwatch, but with football; you basically have hero-based football, soccer rather, to use the European term. And it's a lot of fun, it's honestly a lot of fun. Obviously it finances itself through microtransactions, it's a free-to-play game, but as with most free-to-play games nowadays those are skins, cosmetics, a season pass and so on. So you really can get a lot of fun out of this game just from playing it for free. And it works surprisingly well on mobile, I gotta say. So yeah, that's my pick for this week in terms of nerdy stuff that I'm doing. Okay, then again, thank you for being here, Kevin. Thank you for having this amazing discussion with us.
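For anyone who wants to reproduce the watch-and-rerun loop Sascha describes above, here is a rough sketch. It assumes the file_system Hex package is available as a dependency; the module name and invocation are made up for illustration, and a real setup would likely want some debouncing.

```elixir
# watch.exs — a minimal sketch, assuming the file_system package is a dev dependency.
# Run with something like `mix run watch.exs`; the receive loop keeps the script alive.
defmodule StaleWatcher do
  def run do
    {:ok, watcher} = FileSystem.start_link(dirs: ["lib", "test"])
    FileSystem.subscribe(watcher)
    loop()
  end

  defp loop do
    receive do
      {:file_event, _watcher_pid, {_path, _events}} ->
        # --stale re-runs only the tests whose source dependencies changed
        # since the previous run.
        System.cmd("mix", ["test", "--stale"], into: IO.stream(:stdio, :line))
        loop()

      {:file_event, _watcher_pid, :stop} ->
        :ok
    end
  end
end

StaleWatcher.run()
```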
Kevin:
Thank you for having me as well, thank you.
Sascha Wolf:
And thank you, Adi, for being as usual our testing guru.
Kevin:
Ha ha.
Sascha Wolf:
And I hope you all have a nice week. Tune in next time for another episode of Elixir Mix. Bye.