Sascha Wolf:
Hey everyone, welcome to another episode of Elixir Mix. This week on the panel we have Adi Iyengar.
Adi Iyengar:
Hello?
Sascha Wolf:
Allen Wyma,
Allen:
Hello.
Sascha Wolf:
and me, Sascha Wolf. And we have no special guest this week, so it's just the three of us. I think we are going to talk about what the heck has happened at work recently, because before we hit the record button, we kind of went through some challenges each of us had, and then we realized, you know what, let's just talk about work, because most of that boils down to stuff a lot of Elixir engineers probably struggle with in one way or another. So Adi, why don't you give us the start, because I feel you had a very interesting story earlier.
Adi Iyengar:
Yeah, so I guess, to not hog up a lot of time, I will probably talk about a side project slash something I'm advising on. It's the first time I'm using Elixir to do any kind of AI, and generative AI at that. So, the problem the startup was experiencing... they're somewhat in stealth mode, so I can't share all of it, but they do NFT validation, right? They want to use generative AI to validate digital assets that are a little bit off. If you have a painting and someone digitally changes a few things to change the signature of the painting and still claims it's their own asset, they wanted to have that kind of plagiarism protection, right? So they were like, how can we store those signatures and associate them in a parallel blockchain? So they use generative AI to create parallel blockchains for the entire ownership of the NFT blockchain. The generative AI is trained to change the easiest things that can be changed in an image, and obviously it'll get better, right? Anyway, that should create more duplicates, close-enough duplicates, that have a signature associated with the main asset. If someone else tries to copy it, they're like, boom, no, this one's already there, you're trying to plagiarize something, right? So that's the whole idea. And it was really cool, because I had never used Bumblebee properly before. We first started with Bumblebee, obviously, to take one of the image classification models that's already there on Hugging Face's website, and then we used Axon to train it. And it was amazing, because it literally took us three hours' worth of code and a little less than two weeks' worth of training to have a feature that helped this company do their Series A. And I can officially say, after this experience, that Elixir is probably the third best programming language right now for AI and machine learning, only after Python and Julia. It's pretty close. And I was just so excited at how accessible it is. To the beginners listening, just go to Bumblebee, pull the code, pull one of the examples, and start doing stuff right now. Like, I showed this to my wife. My wife and I have a really cool backyard, we have all the animals coming in, we had snapping turtles the other day, right? And we were like, okay, let's build a very quick algorithm. It took us like two hours. Put it on a Raspberry Pi using Nerves, and whenever we detect an animal that's close enough to what we deem interesting, shoot us a message with a link to the live feed so we don't miss any cool animals showing up in the backyard. It took us less than two hours to write the code, a couple more hours of Nerves hacking, whatever weirdness, and within five hours we had a backyard animal monitoring app. Again, how cool is that? And my wife is an Elixir beginner. So I want to encourage all the beginners to pull that project and try it out. It's amazing. You feel like a magician. This stuff felt so far-fetched to me a couple of years ago; the fact that you can do it in a few hours with a beginner, and make it accessible to beginners, is just amazing.
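For listeners who want to try something similar, a minimal sketch of that detect-and-notify loop might look like the following. The module name, the label list, and Notifier.send_live_feed_link/1 are hypothetical; it assumes the file_system package for watching camera snapshots and an Nx.Serving built with Bumblebee (see the example later in the episode). The real project ran on Nerves, which is not shown here.

```elixir
# Rough sketch of a "classify new snapshots and notify" loop.
# AnimalWatcher, @interesting, and Notifier are hypothetical names;
# `serving` is an Nx.Serving built with Bumblebee.
defmodule AnimalWatcher do
  use GenServer

  @interesting ~w(turtle fox raccoon deer heron)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    # Watch the directory where the camera drops snapshots.
    {:ok, watcher} = FileSystem.start_link(dirs: [opts[:dir]])
    FileSystem.subscribe(watcher)
    {:ok, %{serving: opts[:serving]}}
  end

  @impl true
  def handle_info({:file_event, _pid, {path, events}}, state) do
    if :created in events and Path.extname(path) == ".jpg" do
      %{predictions: predictions} = Nx.Serving.run(state.serving, StbImage.read_file!(path))

      if Enum.any?(predictions, fn %{label: label} ->
           String.contains?(label, @interesting)
         end) do
        # Hypothetical notifier that texts a link to the live feed.
        Notifier.send_live_feed_link(path)
      end
    end

    {:noreply, state}
  end
end
```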
Allen:
Wait, didn't you force your wife to write Rails or something? Like the whole framework from scratch?
Adi Iyengar:
I didn't force her, I rewrote Rails for her to show her that it's actually nothing but a bunch of small components. But that's the previous podcast.
Allen:
What I remember from last week is you said, oh, let's just rewrite Rails. Come on, we can do this. That's what I remember.
Sascha Wolf:
Hahaha
Adi Iyengar:
Right, that was when she was starting out as a Rails engineer. And Rails feels like magic, right? I just kind of wanted to demystify that a bit. She's used to doing weird projects with me. This one was very easy, trust me. We coded it 50-50. She's not even an Elixir developer at her work, she does Ruby on Rails. And the fact that she was able to code this, and can replicate it without having much Elixir knowledge... I know a lot of beginners listen to us, and I really encourage them to give it a try. It's amazing.
Sascha Wolf:
But also don't feel bad if you have no clue about this, because AI is one of the big blind spots for me. I have no freaking clue how to do any of this.
Adi Iyengar:
Guess what? I don't either. That's how far I think the abstractions have come: you can literally treat it as a black box. You can
Sascha Wolf:
Mm-hmm.
Adi Iyengar:
literally treat it like, oh, just think of it as a programming language compiler. I don't need to know all the steps, I just assume it's working. And Bumblebee makes it very easy. The more you play with it, the more you'll train your mind to treat a machine learning model as a black box. And that's the key going forward to work with data and machine learning: to understand that, okay, you will not completely understand it, especially if it's deep learning. It's very hard to understand how it works intuitively. I have done a few projects now, and it's still hard for me to intuitively understand how it really works. It just works. And you just train your mind to treat that as a black box.
Sascha Wolf:
That makes sense to me. I am also a bit hesitant because, I mean, when you have that black box in your system, you don't understand what happens if it doesn't work, right? But I mean, that's part of it, I guess. You get...
Adi Iyengar:
But you always have black boxes in your system, right?
Sascha Wolf:
Never... this too.
Adi Iyengar:
That's the thing. The level of abstraction is subjective, right? You're just used to knowing more. And anyway, it's more of the implementation that you're handing off to something else. It's different from doing web development or something else, because the complexity isn't much there, and you feel like you know a lot more about your app than you do when you're doing machine learning. So, I don't know if that's the right approach. Obviously, there are a lot of people who write algorithms to make deep learning better, and the ones who write models tweak Microsoft's ResNet models to make them better. Obviously, if you're going to get to that level, you have to break open that black box and learn more, right? But you don't need to be at that level to get started. Literally, I guarantee you, anyone listening: 15 minutes, if you have Elixir installed. Just create an .exs file, Mix.install Bumblebee, set up an Nx backend, load Microsoft's ResNet model, turn on your computer's camera and capture an image using fswatch or whatever, and pipe that image to Bumblebee. It'll categorize it for you without even training. It's that easy. Give it a try.
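As a rough sketch of that 15-minute recipe, assuming the bumblebee, exla, and stb_image packages and the microsoft/resnet-50 checkpoint that Bumblebee's own examples use:

```elixir
# classify.exs -- run with `elixir classify.exs path/to/snapshot.jpg`
Mix.install([:bumblebee, :exla, :stb_image])

# Use EXLA as the Nx backend so inference is reasonably fast.
Nx.global_default_backend(EXLA.Backend)

# Pull Microsoft's ResNet-50 checkpoint from Hugging Face.
{:ok, model_info} = Bumblebee.load_model({:hf, "microsoft/resnet-50"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "microsoft/resnet-50"})

serving = Bumblebee.Vision.image_classification(model_info, featurizer, top_k: 5)

[path] = System.argv()
image = StbImage.read_file!(path)

# No training needed: the pre-trained model already labels everyday objects.
IO.inspect(Nx.Serving.run(serving, image))
```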
Sascha Wolf:
I guess I'm a bit more conservative there, because I have a friend, a dear friend, who has been working in the AI area and also in AI ethics. He's someone who is very hesitant about some of the latest developments, so I would very much presume that is rubbing off on me, where I look at all of these developments like... I'm not sure what to think of that. I also don't know enough
Adi Iyengar:
Yeah.
Sascha Wolf:
to be sure what to think of that.
Adi Iyengar:
I mean, uh, I-
Sascha Wolf:
Just don't use it, maybe, to, I don't know, categorize applications, right? Because implicit bias is something that is very dangerous.
Adi Iyengar:
I mean, that's a good point. And I used to think more about it. But I think you just slowly get... what's the word... desensitized to these things. But you're right. And that's true with any technology, right? Even before technology, I mean, you look at... never mind, I was going to say guns.
Sascha Wolf:
Hahaha!
Adi Iyengar:
Cut this, cut this. I don't want to even step on that. But the point is, I think, yeah, whenever a new thing is invented, there are always people who can abuse it. And it's always on society, I think, to control themselves and create societal structures to avoid that. And there are already laws, at least in the US, on the ethical use of AI, right? There are already laws, both in information technology and also, what's it called, civil rights laws, that prevent you from using AI in a bad way. I think someone was using AI to categorize the ethnicity and race of people
Sascha Wolf:
Hmm
Adi Iyengar:
in California, and, you know, they did not deem that to be ethical. So, yeah.
Sascha Wolf:
Yeah, yeah. I mean, I just remember stories, I think it was a few years back, where they had an AI that got trained to automatically screen incoming applications. And then later on it turned out that it was actually also looking at the application picture.
Adi Iyengar:
Right.
Sascha Wolf:
And if the person was a person of color, then automatically...
Adi Iyengar:
I think we're talking about the same thing. Yeah.
Sascha Wolf:
Yeah, yeah. And I'm not saying that any of our listeners should not play with Bumblebee and figure it out, but keep these things in the back of your head. We have this black box, this thing you don't really understand, and you try to train it to do a thing. And I mean, if it's detecting animals in your backyard, okay, so be it, right? But keep this in the back of your heads. I feel in our industry, at least in my time when I was studying, we never had a course on ethics, and honestly, I feel we should. So yeah, just...
Adi Iyengar:
That's a very good point. I think it's a great reminder. It's something to always have... yeah, I think you're right. It might not seem important at first, but it's good to have that reminder that the more tools you learn, the more powerful you are. Engineering and technology is so powerful. I could probably hack into our town's website in 15 minutes, right? But I don't do it, because there are ethics I have already trained myself in. But, you know, maybe as you learn new tools, rethink your sphere of ethics, what you should use those tools for and what you shouldn't use them for. Yeah, that's a great call-out.
Sascha Wolf:
There's also a really great talk, I hope I can find it and put it in the show notes, which kind of goes in that direction. It's not about AI, but it's basically a guy talking about some really interesting work he has been doing at an agency where they built a system to detect Wi-Fi sources, to say, okay, this is the rough direction of a Wi-Fi source, this is the distance. And the thing is, he talks about that, and it's really interesting, also the math that goes into all of that. But then he regularly interrupts himself: don't let that distract you, we were building software to kill people. Because they were building software for the US military, rocket guidance stuff. So yeah, sometimes we lose sight of that. And I'm never going to stop reminding people that there is room for abuse if we don't keep this in mind. Because sometimes, even when it's said that this is just the way the world is, we are the last line of defense. As engineers, we are the last people who can say, no, I won't do that. And I've done that in my career. I've done that once and said, no, I'm not going to build that. It wasn't rocket guidance software, it was more like analytics tracking, to a degree, where I was like, I'm not comfortable doing that. But yeah.
Adi Iyengar:
Yeah, in the words of the great Uncle Ben, with great power comes great responsibility, right?
Sascha Wolf:
Exactly. Okay. But yeah, folks, still check out Bumblebee. I probably should do that too if I find the time, because it might close a gap in my mental skill set. So, what has been happening on my side? On my side, I've been going back more into coding. I've said in past episodes that I've been going down the managerial route, and I'm actually handing off that responsibility again. So I'm basically in the middle of transitioning from a team lead role to a principal engineer role right now. And that also means I've gotten back into working on, if some of our listeners maybe listened in in the past, our new modular code base. So let's refresh, maybe, for everybody who is not in the picture. At my employer, we had a very, very complex, overly complex legacy system that was also a distributed legacy system. It honestly was a distributed ball of mud to a very large degree. And we are slowly replacing that with a single code base, kind of taking the same design principles as microservices but still employing them in a single code base, which honestly, with Elixir and OTP and those supervision trees, you can do fairly easily compared to some other languages if you cut your supervision trees properly. And since we're a team of four backend engineers, even though we have different areas of work, there just wasn't a big need for multiple microservices. So in our case, the single code base makes sense. But the thing is, me being more of a managerial colleague, I had less and less insight into what we are actually doing there in our day to day. I basically didn't really code anything in there for the past few months. Now I came back, and something I've noticed, and I think this probably relates to a whole slew of people, is: we have that system, and in production it integrates with Google Pub/Sub. It also has some API integrations with external systems. It of course has a database, right? And some of that is okay to run locally, right? I can run Postgres locally, whatever, no problem. But running Google Pub/Sub locally, for example, is more of a problem. So at the moment, where we are standing is we don't really have a very smooth run-this-thing-locally story. And I went through the code a little bit and I realized, okay, we already have some level of abstraction to have that publishing and subscription layer replaced with something else, but currently it's kind of leaky. Some parts of it assume that you are using Google Pub/Sub, some parts are more generic, but it's not super clear-cut. And that is something I've seen again and again and again, where you have some dependency in your system that is super useful and a great choice in a production context, but it's not really something you can run locally easily. Like I said, Google Pub/Sub, I can't run that locally. So what I've been thinking is, okay, we kind of need to add an abstraction layer in between to maybe have a different provider locally. We could probably go far enough with Phoenix PubSub locally to say, okay, when I run that thing locally, I just use Phoenix PubSub in memory, whatever. But then again, that requires a level of thought. That also requires a level of design, even on your supervision tree level, that is not trivial. Because...
For example, just today, literally today, I wrote a module which I called, in a shared namespace, Shared.OTP.StartNothing. I wrote a module like that because we have some parts in our supervision tree where we conditionally start things. Depending on configuration, we start that thing or we don't start that thing. And depending on what exactly we start, it might be: you know what, nothing needs to get started here. But if you have a supervision tree, there's no easy way to say that. You could have a Task which does nothing, that's something I've seen people do, but there's no ready-made OTP module where you can just say, you know what, do nothing. So I wrote a module which has a child spec you can just put in a supervision tree, and the start_link just returns ignored. Or is it ignore? Ignore, ignore, I'm not entirely sure. That is one of the on-start values you can return from a GenServer, that's perfectly fine, OTP can handle that, but there's nothing ready-made that you can just plug in there, like a no-op. But that, again, enabled me to say, hey, you know what? I'm currently also building an API integration with an external system, and I want to make that integration abstracted away in a way that locally I can maybe just use some JSON files on disk, right? But in production, I actually want to go to a real API. For the real API, I actually want to start up a Finch pool, because it's this one endpoint you integrate with, it will always go to this one endpoint, so I want to have a fast HTTP connection there. So Finch with an HTTP/2 connection, and that actually needs to be started, right? But for my local thing, you know what? I don't care. So I basically wrote that behaviour, which has an optional child spec callback, and when that child spec callback is not implemented by the implementation, it just uses that module from earlier, StartNothing. But that's the level of design, the level of thought, and also the level of understanding you need to have about a system running on the BEAM, and that doesn't come naturally. I really realized earlier today, okay, I'm really at that point where I think about my application in that supervision tree context. I really ask myself, okay, what needs to be started here? And what depends on what, right? Before I even go to, okay, this is the module, and so on. But honestly, I'm not even sure what I can conclude here beyond: this is knowledge you need to gain, but I can't point anybody to anything. So yeah, how does that ring true with you, Adi, Allen? Is that something you've also experienced?
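A minimal sketch of what such a do-nothing child and an optional child_spec callback could look like; Shared.OTP.StartNothing is the name from the episode, while MyApp.ExternalAPI and the surrounding wiring are illustrative assumptions:

```elixir
# A do-nothing child you can slot into a supervision tree. Returning
# :ignore from the start function is a valid on-start value, so the
# supervisor simply records the child as ignored and moves on.
defmodule Shared.OTP.StartNothing do
  def child_spec(_opts) do
    %{id: __MODULE__, start: {__MODULE__, :start_link, []}}
  end

  def start_link, do: :ignore
end

# Sketch of a behaviour with an optional child_spec callback. Providers
# that need processes (e.g. a Finch pool) implement it; a local,
# file-based provider falls back to StartNothing.
defmodule MyApp.ExternalAPI do
  @callback child_spec(keyword()) :: Supervisor.child_spec()
  @optional_callbacks child_spec: 1

  def child_spec_for(provider, opts) do
    if Code.ensure_loaded?(provider) and function_exported?(provider, :child_spec, 1) do
      provider.child_spec(opts)
    else
      Shared.OTP.StartNothing.child_spec(opts)
    end
  end
end
```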
Adi Iyengar:
Yeah, I think if you have the modular approach, that's one of the things that can happen, right? There's so much in the supervision tree, so many modules that have their own responsibilities. You have to not just understand how the supervision tree works, but how they correlate with each other, like you said, to run it locally.
Sascha Wolf:
Yeah, yeah.
Adi Iyengar:
To run something locally that's supposed to be deployed separately... yeah, you're moving that complexity, you know. Again, trade-offs, right? You always say it, Sascha: it's all trade-offs, right? It's all about trade-offs. This is probably one trade-off of the whole modular approach, where you're trying to run everything as a monolith locally, as a single application locally, and you still need to understand how the different components interact, what the order should be, and how things fall into place. That's actually very similar to how my current team's application is. I've been here about seven months now and I still don't understand about 20% of, not 20% of the code itself, but 20% of the high-level domains of the app. And yeah, it's the cost of going modular on a project.
Sascha Wolf:
I also think it makes something visible that is also the case in distributed systems. Even in a microservice system, you very often have implicit dependencies on start order. And that is the thing, they are often implicit. And that is what I realized in our system too. Yes, it's still implicit, but because I can run that thing locally, I hit that barrier so much quicker. Because I wanted to basically comment out some event handlers that were not relevant to my local story. That was possible, but then I also commented out the integration to Google Pub/Sub, and some other thing broke further down the line because it implicitly depended on that, right? And that's something you arguably would never, or at least not easily, discover in a distributed microservice-based system, even though these dependencies still exist there. I'm not sure if any of you know Hillel Wayne; he's a dude who has been talking a whole lot about TLA+, so about modeling distributed systems and the communication between different components in a distributed system. And he's also been writing a whole lot about failure modes. Long story short, I don't want to rehash everything he wrote, I can add the article to the show notes if I find it, but basically he argues that there's a limited number of known good states a system can be in, a limited number of known bad states a system can be in, and a probably arbitrarily large number of unknown bad states a system can be in. He's basically saying there are some ways in which a system can break where you're like, ah, okay, I expected that might happen, and you might even have some fail states in your code for that. But there are also ways in which a system can fail that you never expected in the first place, that you never knew about. And some of those fail states might be, not unrecoverable, but not recoverable to the degree where automation does the job, where Kubernetes, for example, and pods being restarted and containers being restarted, no longer does the trick, because maybe there are some implicit dependencies between things. And the same honestly applies one-to-one to building an OTP application and thinking about your supervision tree, just at a smaller scale. So yeah, do you have any experience with how to manage that? Because I had one idea that we could potentially employ, and I'm curious to hear what you think about it. Basically, the idea was to be able to feature flag any subcomponent in our supervision tree, to be able to say, okay, this should be started, but this not. And maybe have some smoke test that tests all possible combinations of this, to basically make sure that, you know, we have proper encapsulation of these things, and they can actually start independently from each other, and there are fewer implicit dependencies, probably not none, but fewer.
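One possible shape for that feature-flag idea, as a sketch; MyApp, the :start_flags key, and the component names are hypothetical:

```elixir
# A minimal sketch of feature-flagged children in a supervision tree.
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children =
      [
        MyApp.Repo,
        maybe(:pubsub, {Phoenix.PubSub, name: MyApp.PubSub}),
        maybe(:event_handlers, MyApp.EventHandlers.Supervisor),
        maybe(:public_api, MyApp.PublicEndpoint)
      ]
      |> Enum.reject(&is_nil/1)

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

  # Only include a child when its flag is enabled in config, e.g.
  # config :my_app, :start_flags, pubsub: true, event_handlers: false
  defp maybe(flag, child_spec) do
    if Application.get_env(:my_app, :start_flags, [])[flag], do: child_spec
  end
end
```

A smoke test could then iterate over combinations of those flags and assert that the application boots with each one.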
Adi Iyengar:
Yeah, I think that sounds great. But it goes without saying, the more things you add, it kind of exponentially increases the complexity. But yeah, I think what you said, like adding feature flags or environment variables based on which you can start them locally and ensure they run independently of each other by running some tests with those environment variables, I think it's great. It's going to be a lot of work though.
Sascha Wolf:
Yeah, I was thinking, is it going to be a lot of work? But if you have these patterns in place, then that automatically kind of like, it's the same level of thinking as your 100% test coverage thing,
Adi Iyengar:
I agree,
Sascha Wolf:
right?
Adi Iyengar:
yeah. Yeah.
Sascha Wolf:
Like, you have to think about it now.
Adi Iyengar:
Right. And I totally agree. I think once you have it in place, it encourages people to keep doing that, right? So yeah, totally agree with that. Again, I think the way we might be thinking about it might be different, because it's based on experiences with these kinds of systems. If I were to do it for the app that I work on right now, it would be a couple of months' worth of undertaking because of the complexity.
Sascha Wolf:
Oh wow, okay.
Adi Iyengar:
Yeah, so maybe I'm thinking of a different scale, for lack of a better word, of domain than you. But yeah, I mean, if you feel like the individual components are well-defined enough that you can comprehend that, at least, maybe it would not be a couple of weeks of a project. But again, it's all CI, right? Just because you have so many components doesn't mean you have to start by testing all combinations. You can just start by doing what you know, right? That's also... yeah.
Sascha Wolf:
Yeah, I mean, but you could also, if you wanted to start by testing all combinations, make that a nightly job or whatever, right? Or even weekly, honestly, it doesn't really matter at that point.
Adi Iyengar:
Yeah. Oh yeah.
Sascha Wolf:
But having that safeguard in place... I think the number one thing in my career that I've put more and more value on is having automation to tell me when I fuck up. That is the one thing I've grown more and more attached to over the years. And I mean, look at this particular case. I was wondering, okay, I see the need to introduce this thinking in the team, right? To get that mind share and that mindset of, hey, we want these components to be modular. We want to avoid basically building a system, as you said, Adi, that requires months of effort. We're still early enough in the journey to have learned a fair deal, but also to be able to do it now in maybe a week, you know? And then having that in place kind of makes it easier to do that healthy practice down the road. But yeah, that is what I mentioned earlier. I definitely see the need. I don't see a lot of people talking about it. I also don't see a lot of teaching material talking about it. Like, how do you actually structure your supervision tree to be able to say, okay, I want to have that encapsulation, right? I want to be able to say this starts, but this does not. Maybe it's also not a super common thing, but when I look at OTP from a distance, it lends itself naturally to that.
Adi Iyengar:
Yeah, I agree. I think there should definitely be more material. The more Elixir grows, the more I'm seeing companies doing complicated stuff at the supervision tree level, starting and not starting applications in different states based
Sascha Wolf:
Yeah, yeah.
Adi Iyengar:
on several configurations. So it's very important to test those. Yeah, I also very much align with what you said about capturing things that could break before they break, right? Putting things in place in an automated way that tell you this could fail or this will fail. I think a lot of companies, especially when they're startups early on, focus more on the cost of something going wrong instead of the likelihood of something going wrong, right? But as you grow, cost is something that gets very hard to manage as well. You cannot always make sure that if something does go wrong, its cost will not be high. Cost in terms of time, money, whatever, right? Like data you could not recover, right? So you have to start focusing on likelihood, because, again, overall loss from failure is cost times likelihood, right? Focusing on both of those sides is super important. A good example: at my company, with sports betting, we cannot just deploy an application whenever we want. We have to go through an entire auditing process. We release code
Sascha Wolf:
Mm-mm.
Adi Iyengar:
that we have written two or three weeks after we have written it. Patches are a pain. So we have to really, really focus on minimizing the likelihood of something going wrong by adding all those automations, like Sascha was talking about putting in place, adding crazy extensive QA, to minimize that likelihood. Because if it fails, it's not simple for us to deploy things, because we have to go through an audit. So yeah, it's a good thing to keep in mind, the whole automated way of capturing errors for you. A machine should capture the errors for you before the users of your system do, right?
Sascha Wolf:
Yeah, yeah. I mean, having your users scream at you is also a way of monitoring. It's just not very cheap, usually.
Adi Iyengar:
Right? Man, Allen's new setup at home isn't quite a good setup, huh?
Sascha Wolf:
I guess we cut this, right?
Adi Iyengar:
I mean, I hope so, but...
Sascha Wolf:
Hello, if you listen to this, it should have been cut, so...
Adi Iyengar:
Yeah, this should be cut. Yeah, this should really be cut. We can talk about something else while Allen is rejoining. Continue down that supervision... rabbit hole.
Sascha Wolf:
Mm-hmm. Supervision rabbit hole. Okay. Yeah, I got something. Something else that has been crossing my mind lately is also related to all of that. It's this balance of having a system that is ready to be put into production and works there, and having a smooth developer experience. Sometimes you make decisions that help towards both. Usually when I integrate some kind of dependency, be it a very big library that has assumptions baked in, like Broadway for example, or something external, like a software-as-a-service thing, I basically always build a slim abstraction layer in between, kind of a pluggable provider interface. Basically always. Because, first, it allows me to make my API tailored to what I need. I can really say, okay, for the things I want to do, I can completely decide how that should look. I also usually start with that, and only then do I build the providers. And secondly, it also lends itself nicely to a more streamlined local-run story. And thirdly, if you actually ever have to replace that with something else, because, well, you no longer use your software-as-a-service provider, let's say Contentful, but you use, I don't know, some other content provider thingy, you know what I'm getting at, right? It kind of lends itself to the whole "what if we rip out the database" question, but I think in a software-as-a-service provider context, that is a valid concern to be having. So there you have a decision you can make that lends itself nicely to a nicer developer experience, because locally, to stick to the example of Contentful, I could just serve some JSON files from disk instead of actually going to the real system, but in production I can actually have that provider and say, okay, I actually want to go to Contentful. But sometimes you also have things that don't really have a direct payoff in a production setting. Same scenario: I built a super small mix task, and what does it do? It starts a Cowboy Plug server and serves one HTML file, which loads a bunch of JavaScript libraries to give me an interactive GraphQL thingy, because we are integrating with Contentful via GraphQL. So I can just say, hey, start that thing, it opens up, and I can immediately start hacking: okay, this is how the query should look, this is how a response would look. It also loads the environment variables. That was a thing that took me half an hour, it wasn't a big deal, because I looked up one of those interactive GraphQL things. But sometimes you still have things you might want to build that make the developer experience easier, where you don't get an immediate payoff in terms of production, right? It makes future development easier, but it's an upfront investment, and finding that balance, I feel, is also something that comes a lot with experience. I sometimes tend to bikeshed too much in that direction, because honestly, I draw a deep, deep satisfaction from making tooling nice to work with, having it at a level where it's just nice and effortless. But I can also remember past me, when deadlines were breathing down my neck (I've gotten a lot more relaxed about deadlines the past few years), thinking: I can't do that right now, right? I can't do that.
I have to ship working feature code. So I'm not sure, how do you handle this, Adi and Allen? Do you have a rule of thumb where you say, okay, I allow myself like 10% of the time to make the developer experience nicer? How do you go about this?
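A rough sketch of the pluggable-provider idea described above, with hypothetical module names; the app codes against a behaviour, production wires in the real client, and local development serves JSON fixtures from disk:

```elixir
# Sketch of a slim provider abstraction. MyApp.Content and its providers
# are hypothetical; only the behaviour and the local fixture provider are
# shown, the real Contentful-backed provider would implement the same
# callback and talk to the actual API.
defmodule MyApp.Content do
  @callback fetch_page(slug :: String.t()) :: {:ok, map()} | {:error, term()}

  # Provider is chosen via config, e.g. in config/runtime.exs:
  #   config :my_app, :content_provider, MyApp.Content.Contentful
  def fetch_page(slug), do: provider().fetch_page(slug)

  defp provider do
    Application.get_env(:my_app, :content_provider, MyApp.Content.Local)
  end
end

defmodule MyApp.Content.Local do
  @behaviour MyApp.Content

  # Reads a JSON fixture from disk instead of calling the real service.
  @impl true
  def fetch_page(slug) do
    with {:ok, body} <- File.read("priv/fixtures/content/#{slug}.json") do
      Jason.decode(body)
    end
  end
end
```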
Allen:
I guess it also depends on the input required, right? Like how much effort does it take to do that developer stuff and then what's the output? What's it called, ROI, right?
Sascha Wolf:
Yeah, return on investment, yeah.
Allen:
Yeah, oh, you're very smart. Anyways, yeah, the ROI is...
Sascha Wolf:
Now I just wanted to reiterate for folks that might be listening and don't know what that means.
Allen:
Okay, fine. Yeah, but really the ROI is probably the biggest one, right? There are so many times where it's like, oh, I want to do this, I want to make this change, and it's like, wait a minute, it's gonna take me some time, is it really worth it? And sometimes it's not. But at the same time, if it always comes up, you know, the good question is: when is it actually worth it? Let's say it's something that takes a lot of effort to fix, but it keeps coming up. Is that enough? Is it just bugging you? Is it bugging everybody else? So this is really a tricky question. There's no objective answer, I think, for something like that.
Sascha Wolf:
Yeah,
Adi Iyengar:
Totally.
Sascha Wolf:
I agree. Which is why I'm curious to hear how you answer it for yourselves. And also, I think the listeners are probably curious as well.
Allen:
Yeah, it depends on what's going on. But I mean, sometimes I take care of these small things, even if they require a lot of effort, just because everybody runs into it and it's been there for a while. And it's like, let's just finish it off today. Especially if I'm not feeling productive and it's kind of a brain-dead thing to do, then why not? You know, some days you're just not productive. But if you're productive enough to get these things done, I think it's probably worth it, rather than just, you know, piddling around all day and getting nothing done, right?
Adi Iyengar:
Yeah, I totally agree there. I kind of want to, again, highlight that it's subjective, right? But the more you can measure the impact, the more you can gather any kind of data, you know, it could be developer hours, people spending time on this, the more you can come up with a rough number based on a subjective analysis, like, this makes us 20% more productive because of this. Here's an example, right? I'm always the one who goes to CircleCI and fixes our flaky tests, right? It's something I've done, actually...
Sascha Wolf:
I hate flaky tests.
Adi Iyengar:
I know, but it's something I did my first week at this company. And to me, it's fun, obviously, to do that, digging into things that are weird. It challenges you. It allows me to come up with tricks to solve problems and then share them with people. It's something I really enjoy doing, so I was already biased towards doing that. But then you have to understand, not all flaky tests are worth fixing, because sometimes you just rerun them and that fixes it. That's where you go to CircleCI and look at the analytics: how often do these tests fail versus how often are they run? How many pull requests are we pushing every week, and how many of those are getting slowed down in merging or whatever? Try to quantify, in some way, how it is affecting the overall team's productivity, right? And then, obviously, how you analyze the measurement is also subjective. You can always ask a senior person or a couple of people on your team for their suggestion: is this worth it? And oftentimes I tell my mentees, a good decision is measured and worth it. A great decision, when you come with numbers, it should be obvious you should do it. But again, it starts with coming up with numbers. And it takes a while to develop that mentality, because it's also hard work to come up with numbers. It might not be worthwhile coming up with numbers for something that's not worth doing. So I think it's a mindset. Trying to quantify the impact of something that's not tangible is a skill that, as engineers, we learn the more senior we get. Because you often have to quantify non-tangible stuff to upper management, right, the impact of something. The more you do it, the better you get at it, the more you realize what numbers you can come up with, what's a good subjective percentage or a good subjective constant you can add there as a multiplier to communicate the effects of something. So yeah, it kind of ends where we started: it is subjective, but your ability to make that subjectivity sound more objective depends on your seniority and your experience. I don't know, did that make sense to you guys, what I said?
Sascha Wolf:
It does
Allen:
Yeah.
Sascha Wolf:
make sense. It's also something where I feel you can very much see the difference in the organizations we work at, Adi, because you've worked in bigger organizations and I tend to work in smaller organizations. And in smaller organizations, it's often a whole lot harder to even have the room to come up with these numbers, and also to even measure things in the first place.
Adi Iyengar:
Yeah, I think in small organizations... I was at a startup right before this. I was at the extreme.
Sascha Wolf:
Hmm?
Adi Iyengar:
I was a founding engineer. I think the good thing about that was that I could just make the decision, right? So I didn't have to show it to people. And oftentimes, obviously, it came at the expense of me working longer hours and stuff, and that's unhealthy,
Sascha Wolf:
Mm-mm.
Adi Iyengar:
but yeah, you're right. The metrics do change based on organization size, right? Like, the subjective metrics, and, you know, whether something is even worth thinking about. But again, going back to the point, a great decision is something that should be obvious, and that impacts not one but three or four things. This topic came up, Sascha, when you were saying splitting the supervision tree also helped developer experience. So it's something that helped you set some things up locally, helped you test how those things work independently, and helped developer productivity. Something that affects multiple things like that is probably a very good decision.
Sascha Wolf:
Yeah, I sometimes also have scenarios where I feel, where my gut tells me this is a good thing to do, but I can't really tell you, down to the numbers, why. A super small example from the same code base: basically, we have an API that is internal-facing in this modular system, and an API that is public-facing. There are two different endpoints, with different authentication needs. And what we ended up doing is we have two separate Phoenix endpoints, which also have slightly different plug pipelines. And what I ended up doing there is I spent some hours on streamlining the configuration story, because I think all of us who have ever worked in a code base with a Phoenix endpoint know that the configuration for a Phoenix endpoint is relatively noisy. There are a lot of things you have to put in there, and the actually relevant information in that configuration, unless you have to do some crazy shit, is like one or two or three lines at most, right? So, because we now have two endpoints, and there's a very big chance that we will have a third endpoint, because we want to add an admin interface down the road which will probably be LiveView-based, the idea is to also have a separate endpoint that includes all of the LiveView plugs, because we don't currently have them plugged in, we don't need them. And then we have three endpoints, which all have slightly different configuration needs. So what I ended up doing is basically massaging the configuration so that in our runtime.exs file, we only put in the information that is actually different. Everything else is based on some defaults that then get merged into the configuration at runtime. And that is again, I don't know, I can't measure this, but I know from my gut this is a good idea, because I know that at some point we might make a configuration change in one place but not another, because there's a lot of noise and not a lot of signal. I guess maybe that is the thing you can measure, right? How much noise is there, how much signal is there, how many lines of this configuration are actually relevant, how many characters, so to speak, are actually what I need to look at to understand it. But yeah, me talking through it, I now realize maybe there are some more numbers I could attach my decisions to.
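A sketch of that "only the differences in runtime.exs" idea; the endpoint names and environment variables are hypothetical:

```elixir
# config/runtime.exs (sketch)
import Config

if config_env() == :prod do
  # Shared defaults that every endpoint needs but nobody wants to repeat.
  shared_endpoint = [
    url: [host: System.fetch_env!("PHX_HOST"), port: 443, scheme: "https"],
    secret_key_base: System.fetch_env!("SECRET_KEY_BASE"),
    server: true
  ]

  # Each endpoint states only what actually differs from the defaults.
  config :my_app, MyAppWeb.PublicEndpoint,
         Keyword.merge(shared_endpoint, http: [ip: {0, 0, 0, 0}, port: 4000])

  config :my_app, MyAppWeb.InternalEndpoint,
         Keyword.merge(shared_endpoint, http: [ip: {127, 0, 0, 1}, port: 4001])
end
```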
Adi Iyengar:
Well, but I think you still have a point. I don't think the numbers would be concrete enough to convince someone who doesn't agree with it.
Sascha Wolf:
Mm-hmm.
Adi Iyengar:
Because it is subjective. But I also think, and I hope you don't mind me saying this, that this decision would not be extremely consequential. If you went this way versus the other way, it really wouldn't make a difference. And that's the other side of this: picking your battles. As senior engineers especially, we have limited social equity. If we start picking every battle based on opinions, where, even though my gut says something is right, the cost of not going with my gut isn't that terrible,
Sascha Wolf:
Yeah, yeah.
Adi Iyengar:
you lose that equity. You can't always like, you know, there's no perfect decision, even though my gut says something and my gut's always right, at least in my mind, right? It's still about picking your battles and at least appearing that you have an open mind. Well, I really hope my teammates don't listen to this.
Sascha Wolf:
Yeah, I think a good rule of thumb there, at least for me, has been: when I look at this again in six months, what would I wish I had done, you know? And in the case of these configuration shenanigans, for example, with the multiple endpoints, I'm very, very confident that future Sascha will look at this and be like, oh, it's good that we did that.
Adi Iyengar:
Yeah.
Sascha Wolf:
Because if we actually need to change configuration, it's like, okay, where do I need to change it? Okay, here. Do I also need to change something else? Is there a default that kind of needs to be changed everywhere now? And there's less of a chance of suddenly breaking things. It's actually born out of an experience I had a few years back, where our system did break because of a subtle configuration change that got missed, because there was a lot of noise in the configuration files. But yeah, I 100% agree with you, Adi, that we, especially as senior-and-beyond engineers, arguably can't and shouldn't try to win every battle. In this case, for example, with the endpoints, it wasn't even a big battle. We have a regular two-week, one-hour architecture check-in among backend colleagues where we talk about some of these things. And it came up: hey, we now need a publicly available API and a privately available API, how do we do that? One of the ideas was to have one router do different things, and I brought up the suggestion, hey, we can also have two endpoints, and then we have full control over the plugs and configuration. Everybody who listened to it thought it made sense; it was not a big discussion. But even if people had said at that point, you know what, no, I really want to do the router approach, I would have said, I get it, it's easier, but it can bite us in the ass down the road, and you know what, we'd still have done it. I mean, at the very, very beginning of this whole modular thing, right, Adi, remember when we had an episode on poncho apps, umbrella apps, one code base? I still think that poncho apps have value, right? Having separate, distinct apps. The team decided, hey, you know what, that's a level of complexity we don't feel we need right now. So we now really have one app with a fat supervision tree. And that is a battle where I deliberately said, you know what, I'm not going to fight you on this, because this is the consensus here and my opinion only slightly differs from yours. But let's see. But it's also, going back maybe to the beginning of the whole OTP thing: if we had chosen distinct, separate apps, then that would have had this separation enforced already.
Adi Iyengar:
Right.
Sascha Wolf:
And now we kind of need to ask ourselves, okay, how do we make sure that we still don't have these implicit dependencies in the supervision tree?
Adi Iyengar:
Right.
Sascha Wolf:
So yeah, you win some, you lose some trade-offs
Adi Iyengar:
Exactly.
Sascha Wolf:
as you said.
Adi Iyengar:
Yeah. And I've also learned to very, very rarely say I told you so.
Sascha Wolf:
Hahaha
Adi Iyengar:
Again, it's all politics, man. It's all relationships. For the most part, I mean, for some decisions it's obvious which is the right one when you make a case. But a lot of times it's subjective enough that if someone doesn't like you, they will keep pushing back on your ideas. But if someone likes you, 90% of the job is done right there. You know,
Sascha Wolf:
Yeah, yeah, that's true.
Adi Iyengar:
it sucks. I don't want to make it sound depressing, but it is less about data and technicality than about social communication.
Sascha Wolf:
I wouldn't even necessarily say "like" is the big part, but trust. If somebody trusts
Adi Iyengar:
Right, right.
Sascha Wolf:
you. Because I've had colleagues I trusted implicitly but still thought were assholes. When it came to technical decisions, it was like, you know what, when this guy talked about, okay, this is something we need to consider, there's a risk here, I always knew he knew what he was talking about, but I still didn't like him as a human.
Adi Iyengar:
It's very hard to build that, though, without being liked by people. Even to build trust, you need
Sascha Wolf:
It is true.
Adi Iyengar:
to keep showing that you're doing the right thing. But I mean, if the entire team is turned against you, it's very hard. But yeah, anyway, it's something I learned maybe two years after I became an engineer, still relatively early in my life, but I wish I had learned a lot earlier that it's important to build that kind of connection, trust, likability, whatever you want to call this social connection, which helps you in these technical conversations.
Sascha Wolf:
Yeah, I think I can count the number of times I've said I told you so on one hand. And most of that was actually not inside an engineering team, but usually from a management-engineering perspective. And that is something I've learned throughout the years. A few times throughout my career, I had moments where people asked me to do something and I told them, you know what, this is not a good idea. They asked me to still do it. I said, okay, but let's write down that I warned you here; this is a decision you're making. And nine out of ten times, nothing bad came out of it. But a few times, shit actually hit the fan. And then it was really good to be able to say, you know what, people, I'm sorry, but I literally told you this would happen. Yeah, that is, I feel, the only scenario where, like,
Adi Iyengar:
Yeah.
Sascha Wolf:
I told you so, it's not as problematic as if you do it inside of a team.
Adi Iyengar:
Yeah, I think in my mind I might be even more extreme about not saying it. I think a good time to say it is when something similar is about to happen. Say something bad happened, right, and you predicted it; in the retro or whatever, you can indirectly communicate that somehow. But if they're about to make a similar decision, you can say it in some way, like, hey, I said that earlier, this is what happened, it's similar in this way, let's please not do it again, right? But I've been burned by saying that very early in my life. And I actually said that
Sascha Wolf:
Okay.
Adi Iyengar:
to the CEO of the company. And they were very, very nice, so I got lucky. But it just sets a bad impression and a bad
Sascha Wolf:
Yeah.
Adi Iyengar:
precedent for others also, right, around you. So.
Sascha Wolf:
I think you're right in that you shouldn't just say I told you so, but you should say I talked about this before and this is why we should do it differently now. I 100% agree.
Adi Iyengar:
Yeah.
Sascha Wolf:
I had, at one point in my career, a thing break because we didn't have a proper offboarding process. When an account from a former colleague got disabled, and they had actually provisioned some stuff, blah blah, right, that actually broke something in production. Shouldn't happen, yes, but I mean, you know how it goes. And that was a discussion where, even a few months earlier, I had brought up: you know what, we don't have a proper offboarding process. And at that point it was kind of, yeah, we should talk about it at some point. Then things actually broke, and I was able to use it as leverage to bring it up as a conversation: people, we don't have a proper offboarding process, and now something broke because of it, let's talk about it again. So yeah, we still don't have a proper offboarding process, but at least people are aware
Adi Iyengar:
Yeah.
Sascha Wolf:
of the costs.
Allen:
Yeah, I have a lot of those stories where I tell people... You know, here's my problem: I'm too nasty all the time, so that if I try to be nice, people think there's something wrong with me.
Sascha Wolf:
Wow, Alan, wow.
Allen:
Yeah. But I mean, I'm nasty, but not without reason, right? Maybe my issue is that I'm just brutally honest. I kind of give it straight, like, no, that's a really shitty idea, don't even think about it. But at least I'm not wrong, and I give my answers about why. You talk about saying I told you so; I had that recently with a vendor I think I've talked about quite a few times on here. I said, I know this is a really shitty idea, the guy's terrible, blah. I said it directly. But if it's two against one, I'll play the game. But like, I think it was Adi who said it, or no, no, Sascha, I think you said that, right: I want to write this down. I disagreed, and if I'm wrong, then I'm wrong, right? I want to be told I'm wrong, but I'm pretty sure this is going to be a bad idea. And it was. And of course I rubbed their face in it, like a puppy making poop on the ground. You know, I was like, I told you guys it was not going to be a good idea. But then I let it go after a few seconds, because I'm older now. I'm more mature.
Sascha Wolf:
The thing is, if you use it as leverage for changing things for the better... I still don't want to be an asshole about it, and I don't think I ever have been, but it's very strong leverage to be able to say, you know what, I told you about this three months ago, we wrote it down, it happened now, we need to do something. It's a very strong leverage story. I'm also very much a big believer in not assigning blame, in working in blameless teams; the keyword there is psychological safety. But even then, you can go back to things and say, you know what, we talked about this a few months ago, we disagreed about it, now things broke, let's do better, right? Let's actually take a hard look at this now and change things so it doesn't happen again. And sometimes, as you said, you disagree and nothing happens. Sometimes the thing you feared would happen doesn't pan out, or maybe it happens ten years down the line when you're no longer there. Who knows, right? But sometimes it does. Okay folks, is that it? What is the name of this episode? Is it like "Building Maintainable Elixir Applications" or whatever? Something like that, I feel. Well, you listeners will know what the episode is called. We don't yet. Okay, let's go to picks. Adi, what are your picks? Ah, Adi is not prepared. Allen. Allen, what are your picks?
Allen:
Yeah, I just have one pick. You know, sometimes you get this issue of kind of mismatched APIs, where you have one being snake case and one being camel or Pascal case when you're sending over keys. I brought up before the show talking about things that I saw in code and didn't like, and I saw somebody manually changing keys from camel-case strings to snake-case atoms to match up stuff on the Elixir side. There's a really awesome library, I don't know if you guys have seen it before, called parameters. Have you heard of it or no?
Sascha Wolf:
No.
Allen:
Yeah, it's basically just for this issue. You can define kind of a schema, and you can say, okay, the key is going to be like this, but I want you to translate it to this one when you parse it. And it's super useful for this kind of issue, and it cleaned up a lot of code. It's really, really useful. So if you're going to be changing snake case to camel case or whatever when you get parameters in, I think it's super awesome. I heard somebody talk about it a while ago and it's been a lifesaver, because so many APIs are doing things camel case and snake case, right? So I think it's super useful and you guys should probably check it out.
Sascha Wolf:
Nice. Adi, do you have any picks for us?
Adi Iyengar:
I really can't think of one. Can you come back to me, Sascha? Just give me like 15
Sascha Wolf:
Sure.
Adi Iyengar:
more seconds.
Sascha Wolf:
Do I have any picks? I have only a small pick this week, I think, and it's probably something you're all aware of already, but the new season of Black Mirror is out. I watched the first episode last night; that's season six. And I don't want to say anything, but it was a very, very strong start to the season. So I've been really enjoying getting back into Black Mirror. I mean, it has been years at this point since the last season got released. But if you haven't watched Black Mirror yet, you should check it out. If you have watched Black Mirror but not the latest season yet, you should still check it out. So Black Mirror is back, I've been very much enjoying it, and that is my pick for this week. Adi.
Allen:
the last season I saw I did not like.
Sascha Wolf:
Ah.
Allen:
Is it really that good, this new season?
Sascha Wolf:
I mean, like, I only watched the first episode, but the first episode was... Mwah! Was really, really good.
Allen:
Okay.
Adi Iyengar:
Yeah, I agree with Allen. Season 5 was kind of a...
Allen:
I told you so, okay? See, he agrees.
Sascha Wolf:
Hahaha!
Allen:
I told you it was not good. But okay, I'll give it a try.
Sascha Wolf:
The first episode is "Joan Is Awful," and yeah, it's really good.
Adi Iyengar:
Awesome. I guess I have two Rust picks. I've been trying to get back into Rust a little bit, and I started off with Google's Comprehensive Rust, going through that repository. It's actually pretty good. I remember looking at it, I want to say a few months ago, and it wasn't that good, but they've really added a lot more exercises. So do check it out; I'll leave the link in the description. And another Rust pick is a text editor. I have been trying to revamp my text editor game. It's been, I want to say, like 12 years since I did my Neovim configuration, 11 years or so, I think it had just come out at that time, and I need to rethink it. I saw this new text editor written in Rust called Helix. It's very much inspired by Neovim, it's very snappy, a lot of the features are already built in, and it has integration with Tree-sitter, so syntax highlighting and everything happens very, very quickly. I think it's missing a few features for me to use it as well as I use Neovim; there isn't a very good tree view and all those things, and it's also not so easy to install on every operating system. Again, you can expect those things with new editors. But of the new ones that I've tried in the last four or five years, this has come the closest to replacing my configuration. So I'm looking at it very closely and hoping something comes out of it. Oh, one more quick pick. I had no clue that this is something that José does until this last week. The guy is streaming live on Twitch every day or every other day, something like three or four times a week, and he's building a database connector in Elixir from scratch. He's already built a few things; again, I'm not going to spoil it for people who want to go back and watch it. But I stumbled upon it last week, I've been joining since then, and it's so cool to watch José Valim code live. So if anyone's a nerd like me and wants to watch one of their heroes code live, check out his Twitter. He always tweets an hour or two before he goes live on Twitch.
Sascha Wolf:
Nice. The waiting was worth it, Adi. Okay, folks, I hope you enjoyed listening as much as we enjoyed talking, and I hope you tune in next time for another episode of Elixir Mix. Bye.