Adi_Iyengar:
Hey everyone and welcome to another episode of Elixir Mix. Today on the panel we have Allen Wyma.
Allen_Wyma:
Hello.
Adi_Iyengar:
and just me, we don't have Sascha today. But we also have a special guest, Thibaut.
Thibaut_Barrère:
Hello, hi.
Adi_Iyengar:
Hi. I'm sorry, that was an awkward pause. I was debating, should I try to take a stab at your last name or not, but I decided no. But yeah, welcome to the show, Thibaut. Do you want to give our audience a quick rundown of what you do, why you were invited, and how you got into Elixir?
Thibaut_Barrère:
Sure, sure. So I've been an independent consultant for the last 15 years, working mainly with development and data: data pipelines, ETL work, so extract, transform, load, and data warehouses and stuff like that. So data-backed applications, mainly. I've used a variety of stacks over the years, and my last main stack was Ruby. Now I've migrated gradually to more Elixir work. In that context, one of my main gigs is a French state-backed site, transport.data.gouv.fr, which is what we call an open transportation data national access point, which is mandatory by law in the EU. Each state has its own national access point. And this is an open source Elixir app with quite a lot of stuff. You can find the data for transportation in a couple of formats, including buses, bikes, trains, electric charging stations, et cetera. So I've worked with Elixir on that app for the last two years, and there is a lot going on. Everything is open source, so we will give you the links. And in that context, I had some work involving HTTP queries, replaying HTTP pagination and HTTP data fetching. That's basically what led me to write the article that you are referring to. So I had to use Req and Mix.install to create those queries, to paginate over pages in an API, to download files. And I found the need to cache those operations, because I don't want to hit the HTTP API each time I run something and modify a data pipeline. I want to freeze the inputs so I can work reliably and productively down the pipe. So I looked into Req because I really love that client. It's quite full-featured, and it's built on top of strong foundations like Finch and Mint. I was quite amazed at what I found: the way Req is extensible with plugins and the steps before and after the query. I found it very clever and extensible, which is very Elixir-ish, by the way. I shared what I learned writing a simple disk cache plugin in the article.
So you have the story.
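For listeners who haven't tried this workflow, here is a rough sketch of the kind of standalone script being described. The URL, the `page` parameter, and the empty-list stop condition are made-up assumptions for illustration, not the actual transport.data.gouv.fr code:

```elixir
# Standalone script: Mix.install pins dependencies without touching the main app.
Mix.install([
  {:req, "~> 0.3"}
])

# Hypothetical paginated API; URL and "page" parameter are invented for the sketch.
base_url = "https://example.com/api/datasets"

fetch_page = fn page ->
  # Req decodes the JSON body automatically based on the content type.
  Req.get!(base_url, params: [page: page]).body
end

# Walk the pages lazily until the API returns an empty list.
items =
  Stream.iterate(1, &(&1 + 1))
  |> Stream.map(fetch_page)
  |> Stream.take_while(&(&1 != []))
  |> Enum.concat()

IO.inspect(length(items), label: "items fetched")
```

Every run of a script like this re-fetches everything, which is exactly the pain the disk cache plugin discussed later is meant to solve.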
Adi_Iyengar:
Very cool, very cool. Yeah, so I would definitely like to learn more about Req. You mentioned it's built on top of Finch and Mint. What other features does it add? Is it like more features to make it more usable?
Thibaut_Barrère:
Yeah, if you check out the readme, it's really what they call a batteries-included HTTP client. Not that Mint and Finch are not, but it's higher level, which means that by default, you get a lot, actually. So you have what you find in other libraries, like redirects, basic authentication, body decompression. For instance, if NimbleCSV is installed and the content type is CSV, it will decode the CSV on the fly. You have extensibility at all steps. So there is quite a lot going on. And what I found clever is that all those steps you can unplug and provide your own, or just decide which ones you want using the lower-level API. But if you use the higher-level API, you get a lot by default. So I find it really nice to use now.
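As a quick illustration of those batteries-included defaults, adapted from the kind of example shown in Req's readme:

```elixir
Mix.install([{:req, "~> 0.3"}])

# One call gets you connection pooling via Finch, redirect following,
# retries, compression handling, and JSON decoding based on content type.
resp = Req.get!("https://api.github.com/repos/elixir-lang/elixir")

resp.status              # an integer status such as 200
resp.body["description"] # body is already a decoded map, no manual Jason.decode!
```

With Finch or Mint alone, each of those conveniences would be code you write yourself.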
Adi_Iyengar:
That's really awesome. I have never used it, so I'm definitely going to give it a try. Very cool. So I guess one thing we were talking about before we started recording was just Mix.install in general, right? And Allen, you mentioned you have tried it a couple of times before, or maybe not in production. Is that correct?
Allen_Wyma:
Yeah, I think I made a video about it to kind of let people know about it, because it's quite cool. But I think my biggest thing is I just don't feel like Elixir is much of a scripting language, necessarily, because of the way it works. I mean, to me, it's always a long-running system rather than a one-off script I would run, or something I would just kind of spin up like a Python thing.
Thibaut_Barrère:
Yeah, I was like you, I must say. Coming from a Ruby background, my base language for scripting was Ruby, because it has a lot of gems and everything. Initially Elixir was harder to use for me, and even more so for scripting tasks, which are: I want to get things done quickly and be done with it. But then I saw Mix.install. I discovered it via Livebook, I think, because in Livebook you can install the libraries you want with Mix.install, or you can rely on a Mix project. So I discovered it that way. One of my lines of work is maintenance work, so Ansible stuff, upgrades, making sure that the overall cost of a project goes down and not up. And so I was really attracted by the idea that you could just run a quick script and specify, pin down, the versions that you want without touching your main application. So I have a big application, okay. It has its own dependencies. Maybe some are stuck a bit in the past because you have a dependency constraint, so you cannot install exactly what you want. Okay, so I go to a Mix.install script, and then it's more like a... how do you say that, a blank slate. So I can add whatever I want there, just for what I need for my experiment, and I don't have maintenance constraints. So I really love that aspect, which allowed me to create some very rich scripts and start small and iterate, and maybe at the end promote the code, extract it, and put it in the main app. Actually, that's what I did on my main project. Sometimes the explorations start as small Mix.install files, and then I promote the code to the application once it has graduated. So initially it was quite hard to script with Elixir for me. But now I use Mix.install together with a tool called entr, E-N-T-R, which allows you to monitor a list of files and then rerun a command. So what I do is I monitor a couple of files, my main Mix.install files and a couple of data files outside of them.
And I pipe the list of files into entr, and it will relaunch the script. I find it very nice because I just save my script and it reruns. So I can experiment easily, like a REPL, but better. I'm not very good at IEx. I don't find it super comfortable. I get the job done in it, but when I have something a bit more complicated, I love that with Mix.install you can save and rerun everything. So.
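The entr loop described here is a shell one-liner; the file names are placeholders:

```shell
# Watch the script plus some sample data files; entr -c clears the
# screen and reruns the command every time one of them is saved.
ls script.exs data/*.json | entr -c elixir script.exs
```

Combined with the disk cache, each save reruns the pipeline against frozen inputs, which is the save-and-rerun feedback loop being described.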
Adi_Iyengar:
I guess one thing is that, to run it in production, you need Mix and Elixir in production, right? If you use releases or Distillery, generally people don't go that route; they don't usually have Elixir and Erlang in production, especially if it's in a Docker container, it's just an Alpine lightweight image. Do you generally have Elixir and Erlang installed for that, or is there some hack that you found to use Mix.install?
Thibaut_Barrère:
Oh, actually I use Mix.install on the code locally so far. So I don't use it on the production system directly, but I can typically backport a production database or get read-only access to a production bucket or something, so that I can safely
Adi_Iyengar:
Right, right.
Thibaut_Barrère:
connect, et cetera. But I don't use it for the deployment targets that I have. They vary a lot. The project transport.data.gouv.fr is a Docker image, which uses mix compile. I have used releases with Distillery in the past, and I will definitely use the releases built into Elixir itself in the coming months as well.
Adi_Iyengar:
Got it. So for the article that you wrote implementing the cache, the custom cache module you defined may or may not live in production, but the Mix.install script that you ran was local.
Thibaut_Barrère:
Yeah, exactly. So
Adi_Iyengar:
Gotcha. Gotcha.
Thibaut_Barrère:
it wasn't a big problem. It's in the same repo, but I can query production data. And more or less, it's a kind of what I call a playground or scratch-pad zone inside the main repo. So we will share the link. But we basically have a /scripts folder in the main app directly so that we can see the files.
Adi_Iyengar:
Nice.
Thibaut_Barrère:
We are a team of three on that project. So we keep the experiments there, so we don't feel the pressure to respect production-grade quality, unit testing, et cetera. But at the same time, there is a lot that we learn while writing those scripts that we can keep around. At the moment, for instance, I'm writing a reasonably complex piece of software in that app, which is an HTTP proxy for a certain format, SIRI, which is a real-time transportation format. Initially I needed to learn that standard, how it works, how to make a little query. So I just created a pull request and a branch, and I did some Mix.install stuff, and it allowed me to experiment and keep the knowledge I learned without actually impacting the system, and to share that knowledge with my colleagues as well. Gradually, I'm in the process of expanding it. For instance, this week I'm going to start working on a kind of, you know, Postman-style HTTP query interface. So basically we will have something like that, but in LiveView and specialized for the format that we target. So we pick templates of typical queries. This is a learning tool for people willing to use that format, so you will have explanations, et cetera. And what I will do at that point is basically take what I've learned with Mix.install and custom standalone scripts, move the code, and refactor it into the main application. So Mix.install will have helped me prototype all the things over the last months, maybe almost one year, and gradually store my learning without fear of losing it.
Adi_Iyengar:
That makes a lot of sense. I guess one last thing about this, and I'm not sure if you've run into it: if you require any of your regular library files in one of the scripts, say you want to use a repo or some other dependency, all of the files that you require will probably get compiled, well, not probably, will get compiled using the new versions. Say, for example, you have one Ecto version in the Mix app and a different Ecto version in your Mix.install script. It will use the Mix.install script's version. Have you run into any discrepancies related to that yet, or is it too edgy of a case
Thibaut_Barrère:
Oh, yeah,
Adi_Iyengar:
for you to run into?
Thibaut_Barrère:
no trouble at this point. Part of the work I've done on the application, on transport.data.gouv.fr, is that the project was initially launched as a kind of state startup. So they needed to ship things quickly, and to ship a lot of things, actually. When I was onboarded on the team, a part of my work, and I've shared that experience in the Elixir case study article which is out on the Elixir language site, which we will be able to share as well, part of my work initially has been to upgrade the stack, to bring it back up to date, to leave the startup mode and go to something where the maintenance can be sustainable, can make sense, and can be easy to achieve. This has taken a lot of time, but the good news in Elixir is that it works quite well. I mean, we had very few breakages upgrading from Elixir 1.8 to 1.14, deployed this morning. Okay, I caused a little breakage, but a small one. It's okay. And the maintenance story in Elixir is quite good as long as you pick properly maintained libraries. The maintainers that I've pinged over issues to keep a library up to date or fix minor things have mostly all been very responsive. And I see the same when using Mix.install, actually. I have a Mix.install script with a certain version of Ecto, et cetera. When I move the thing to the main app, generally I don't meet problems at all. So that's pretty cool.
Adi_Iyengar:
Nice. That makes sense. I guess we haven't really gone into the details of your implementation of the cache itself. Do you want to walk us through that, the strategy in creating the cache? I see that you have a cache path function, which is like a one-to-one unique cache key function. Yeah, I'd love to learn more about your thought process when you built this.
Thibaut_Barrère:
Yeah, sure. The thought process was, initially, I remembered that Req had a built-in cache, but it's a cache based on the if-modified-since request header. And my problem is that the servers I was targeting did not respect that header, so I could not cache. So what I did at that point was, I was first very curious to see how Req implemented its own built-in cache. This led me to explore the source code a bit and discover the notion of steps that you can add. And so I wondered, okay, how will I implement the right steps? I realized that I needed to work a bit before the query and a bit after the query as well, because before the query, you need to verify whether you already have the file in the cache, in a deterministic fashion. And after the query, you need to store the result so that the next call can be stopped before the query. It was a bit fuzzy for me initially. So I just dove into the Req code and saw that it had that nice plugin architecture, which is supposed to group steps together. So you have three types of steps. You have the request steps, so you can plug into the request. Maybe you want to add something to the headers, like, I don't know, basic authentication, for instance, or something specific like that. Maybe you would create a token automatically and refresh it automatically as needed. These are middleware ideas, which you can see in Ruby a lot as well. And, yeah, I can't remember exactly which example I started from. Initially, it was a bit confusing because you have those attach methods. I didn't know if it was a name that I needed to respect or not, but actually no, it's just a plain function where you configure the request, and it's all stateless and immutable. So everything is clear, actually. So basically
Adi_Iyengar:
Right, right.
Thibaut_Barrère:
you have an attach method where you can register options, which is something clever: how do you pass configuration to the modules that you add inside the pipelines of requests and responses? And it was
Adi_Iyengar:
That's
Thibaut_Barrère:
really
Adi_Iyengar:
very neat.
Thibaut_Barrère:
easy. Yeah, it's really, really easy to use. And then there's how you register request and response steps. Once that was done, I just needed a bit of... The code is very small once you grasp that. So I was able to pick up the cache path computation from the Req code as well. They do something clever using the host, the request method, and the URL, which you encode with a hash. This makes your local file name, because you need a way to construct a file name that is compatible with the file system, so no special characters, et cetera, and something that varies as your query varies. So if you have parameters, you need to include that in the key. It was very straightforward because it was already in Req, so I didn't even adapt that. And yeah, there's something really neat that I found compared to other languages I've used in the past, which is the Erlang binary_to_term, or term_to_binary, which allows you to serialize whole structs. This is quite clever, because when I use that cache in practice, the calling code will get the same structure, the raw Req HTTP response, and it won't know whether it has been serialized or not, because the whole structure is serialized. So that's pretty cool. I think it would be quite unsafe to do in some languages, but here the data is really immutable. So when you persist it, you don't have a lot of state in it. None at all, actually. You don't have PIDs and stuff like that. So you can safely serialize and deserialize, and it's really transparent for the caller, which makes it even nicer to implement. Did I properly respond to your question? Sorry.
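A condensed sketch of the plugin shape being described, assuming Req ~> 0.3 and following the attach pattern shown in Req's own documentation. This is not the article's exact code; the `cache_dir` option and file layout are illustrative:

```elixir
defmodule DiskCache do
  # Usage: Req.new(url: "https://example.com") |> DiskCache.attach(cache_dir: "cache")
  def attach(%Req.Request{} = request, options \\ []) do
    request
    |> Req.Request.register_options([:cache_dir])
    |> Req.Request.merge_options(options)
    |> Req.Request.append_request_steps(disk_cache: &read_from_cache/1)
    |> Req.Request.append_response_steps(disk_cache: &write_to_cache/1)
  end

  # Request step: short-circuit with a cached response if we have one.
  defp read_from_cache(request) do
    path = cache_path(request)

    if File.exists?(path) do
      # Returning {request, response} halts the remaining steps.
      {request, path |> File.read!() |> :erlang.binary_to_term()}
    else
      request
    end
  end

  # Response step: persist the whole response struct for next time.
  defp write_to_cache({request, response}) do
    path = cache_path(request)
    File.mkdir_p!(Path.dirname(path))
    File.write!(path, :erlang.term_to_binary(response))
    {request, response}
  end

  # Deterministic, filesystem-safe key derived from method + full URL,
  # so different query parameters produce different cache entries.
  defp cache_path(request) do
    key =
      :crypto.hash(:sha256, [Atom.to_string(request.method), URI.to_string(request.url)])
      |> Base.encode16(case: :lower)

    Path.join(request.options[:cache_dir] || "cache", key)
  end
end
```

The `{request, response}` return from a request step is what lets a cache hit skip the network entirely, which is the before-and-after-the-query behavior described above.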
Adi_Iyengar:
Yeah.
Thibaut_Barrère:
Yeah.
Adi_Iyengar:
Yeah, that was great. I mean, there's so much to unpack there. First thing, I told you I'm very new to the Req library. Huge fan of the options thing. Being able to explicitly pass the options, and then pass the keys and register them. Very NimbleOptions-y. That's what it's called, right? NimbleOptions?
Thibaut_Barrère:
Yeah.
Adi_Iyengar:
So yeah, huge fan of that. And the Erlang term_to_binary, I'm so glad you use it. If, say, you don't want to add a JSON encoder or any other encoder, it's such a lifesaver. Especially if you have any term that you want to write to a cache, a database, or RabbitMQ, you know, write it to a message queue and reread it and get that term back. term_to_binary and binary_to_term are lifesavers. I'm so glad you're using that here.
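The round trip being praised here is plain Erlang standard library; in this toy example a bare map stands in for a response struct:

```elixir
# A stand-in for a response struct: status, headers, and a raw binary body.
response = %{
  status: 200,
  headers: [{"content-type", "application/zip"}],
  body: <<0x50, 0x4B, 0x03, 0x04>>
}

# term_to_binary serializes the whole term, nested binaries included...
serialized = :erlang.term_to_binary(response)

# ...and binary_to_term restores it byte-for-byte.
restored = :erlang.binary_to_term(serialized)

IO.inspect(restored == response)
# => true
```

One caveat: only call `:erlang.binary_to_term/1` on data you wrote yourself; on untrusted input, prefer the `:safe` option, since deserialization can otherwise build arbitrary terms.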
Thibaut_Barrère:
Yeah, it was quite nice because in the analysis I was doing that day, I was basically iterating over pages of JSON. But once I was on the item, the item contains one URL which targets a binary file. So this can be zip files, XML, CSV, whatever. So it was important to keep them pristine and not corrupt them. It just saves the bytes as they are, which is pretty cool. As well, I'm not yet using Req in production directly, because the API is still in flux, it's 0.3.1. So using Mix.install is a way for me to get used to the API. Gradually, I think I will use it in production, but this way I don't have a lot of pressure while trying things out. And I didn't have any breakage so far. I like that it builds upon the shoulders of giants, so Mint and Finch below. And overall, I feel the maintenance story of those libraries as open source projects is very good. So I'm quite confident that we will see them alive and well for years to come. So that's pretty cool as well.
Adi_Iyengar:
Yeah, it is really cool. I evaluated Finch and Mint, I think, early 2021. And the reason why I stuck to HTTPoison was because of ExVCR, which is a way to record a request and test it. It's a huge part of how I do API testing, and it didn't support them at that point. But then I heard that they have support for Finch now, which means they also support Req. So that's an in. And all the features that Req brings, that HTTPoison doesn't even have anywhere close to, are enough of an incentive for me to re-evaluate my HTTP client. And maybe all the listeners should definitely check out Req. It looks very, very promising.
Thibaut_Barrère:
Yeah, actually I was quite...
Allen_Wyma:
What about Tesla?
Thibaut_Barrère:
Yeah, sorry.
Allen_Wyma:
Sorry, I was just kind of curious about Tesla. Have you guys ever worked with that one before?
Thibaut_Barrère:
I've not used Tesla in... Sorry, go ahead, go ahead.
Adi_Iyengar:
Go for it, go for it.
Thibaut_Barrère:
In the app I'm referring to, transport.data.gouv.fr, we have HTTPoison, we have a bit of... Oh, sorry, we have Finch directly, but we don't use Tesla. HTTPoison was the legacy client; it handles a lot of cases, so it's very convenient. We started using Finch for one specific case, which is that we needed to compute checksums of files that we download, in a way that doesn't require the file to be in memory completely. So, doing a streaming download. And I will share the code, because it's open source, for our listeners if they are interested. Finch made it very easy to implement the streaming computation on the HTTP body that you download gradually. So that was my first contact with Finch. And later we started using it more. And when I realized that Req was actually using Finch, I had a good feeling. But I never used Tesla itself. Is it a middleware type of thing like it is in Ruby? Is it inspired by the Ruby one?
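The streaming checksum idea can be sketched with Finch's streaming API. The pool name `MyFinch` and the module are invented for this sketch; the project's real code is in their open source repo:

```elixir
# Assumes a Finch pool has been started elsewhere, e.g.:
#   {:ok, _} = Finch.start_link(name: MyFinch)

defmodule StreamingChecksum do
  # Compute the SHA-256 of a remote file without holding it all in memory.
  def sha256(url) do
    acc = :crypto.hash_init(:sha256)

    {:ok, digest_state} =
      Finch.build(:get, url)
      |> Finch.stream(MyFinch, acc, fn
        # Fold each body chunk into the running hash; ignore status/headers.
        {:data, chunk}, acc -> :crypto.hash_update(acc, chunk)
        _status_or_headers, acc -> acc
      end)

    digest_state |> :crypto.hash_final() |> Base.encode16(case: :lower)
  end
end
```

Because each chunk is hashed and then discarded, memory use stays flat no matter how large the downloaded file is.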
Adi_Iyengar:
Yeah, Faraday in Ruby, yep.
Thibaut_Barrère:
Ah, Faraday, okay. Yeah.
Adi_Iyengar:
That's why they also picked a physicist, Faraday and Tesla. I think that's why they picked the name. But yeah, I haven't tried Tesla yet. I was forced to use it at a job because, I think, the Elixir OpenAPI spec library generates a client, and it uses Tesla. So we had to end up using Tesla to comply, to use a client that's generated based on OpenAPI specs. But yeah, it's very middleware-y. I think you can also make Tesla use Finch as the actual client. So I think that's a good point.
Thibaut_Barrère:
Oh yeah.
Adi_Iyengar:
I think that's a good point.
Allen_Wyma:
Yeah, you can make it use stuff. So you'd keep the interface, but then you'd just swap out the back end. That's what I understand.
Adi_Iyengar:
Right, right, right. And I think that's the most promising thing about Tesla. But I think when I was evaluating it, it did not have a Finch adapter. I think it had an Erlang one, what is it called, Gun or something. That's a very fast HTTP client. Again, I don't remember, it was a while ago. But yeah, you reminded me of Tesla just now. It was not even on my radar. And yeah, it's definitely worth keeping in mind while re-evaluating HTTP clients.
Allen_Wyma:
Okay, so it has httpc, which I think is the one that's built into Erlang, which I think you were just talking about. Hackney, which I think we've all played with a long time ago. ibrowse,
Adi_Iyengar:
HTTPoison is Hackney too.
Allen_Wyma:
Yeah, I think it's built on top of it, right. So ibrowse, which I do remember hearing about, but I think that's a really old one, right? If I remember correctly.
Adi_Iyengar:
Yep, really old. That's the one I was talking about, the fast one. The Erlang fast one. Yeah, yep.
Allen_Wyma:
Oh, that's the one. Oh, Gun, written by the guy who did Cowboy, right? I think. Or am I wrong?
Adi_Iyengar:
I guess that makes sense.
Allen_Wyma:
Uh, Mint and Finch.
Adi_Iyengar:
Hmm.
Allen_Wyma:
So it has all the big ones.
Adi_Iyengar:
Very cool.
Thibaut_Barrère:
Actually, in the Ruby world, I don't have very good memories of using Faraday. I think at the time, and it was a long time ago, I had trouble because there was an attempt to make something very generic at the top and use adapters, which is very attractive initially. But I hit some corner cases, which caused me more trouble than the benefits I got from the abstraction. That said, I don't think I would be afraid to try Tesla, because in Elixir in general, that type of composability is done in a better way. At least that's what I've seen so far. So I'd be less worried about starting to use it.
Adi_Iyengar:
I think one argument, and I haven't seriously given Tesla a try, so take this with a grain of salt. But yeah, I think with flexibility comes complexity. To make something very adapter-based, and I remember that with Faraday in Ruby as well, the complexity and the learning curve to get comfortable with it, to actually know everything that's going on, which I feel like at least one engineer on the entire team should know at a low level, it just introduces more complexity. I feel like with Req, all the cases that I see it supporting, at least at a high level, it's simple enough that we can follow it. Tesla is too complex for me. But Req is still configurable enough that we can do basic things like retries and caches. It might be a happy medium, where we can confidently say how to use Req without, you know, bringing in a lot of the complexity that Tesla might bring in. Again, this is just speculation. I haven't looked at either in detail, but this is the experience I had in Ruby, which might carry forward here in Elixir too.
Thibaut_Barrère:
Yeah, one thing that I keep in mind when choosing libraries, and I'll share a link about that, is that depending on the need, you can target something completely different and have completely different requirements. For instance, a part of the application that I'm working on is an HTTP proxy. So in the future, we will maybe want to be able to stream the responses. The client, the end user, targets our own server, and we do a couple of verifications and things like rate limits. This is actually something to protect servers which open their data but are not always able to handle the load, or to implement rate limits, or to implement clever API management, stuff like that. So we act as a national protection server, and in that context we relay the HTTP query that we receive. The end user queries us, and we query the private server. We look at the answer and we send it back to the client. And in the process we use Cachex to add some caching, so we keep the data 10 seconds in memory, for instance, so that we can protect the target server from excess load. And in that context, I looked a bit at what you can do to stream the answers, et cetera. But since for now we cache things in memory, there is no big point in doing streaming. Later maybe we'll have some caching which is done differently, in which case having an HTTP streaming API can be useful. So yeah, it's a completely different need compared to the article I wrote, which is more of a laid-back analysis on my workstation.
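The 10-second in-memory cache described here can be sketched with Cachex roughly like this; the cache name, key shape, and helper module are illustrative, not the app's actual code:

```elixir
# Assumes a :proxy_cache Cachex cache has been started in the supervision tree.

defmodule ProxyCache do
  @ttl :timer.seconds(10)

  # Serve from memory when fresh; otherwise call the upstream server once
  # and keep the answer for 10 seconds to shield it from excess load.
  def fetch(key, fetch_upstream) when is_function(fetch_upstream, 0) do
    case Cachex.get(:proxy_cache, key) do
      {:ok, nil} ->
        response = fetch_upstream.()
        Cachex.put(:proxy_cache, key, response, ttl: @ttl)
        response

      {:ok, cached} ->
        cached
    end
  end
end
```

Under concurrent misses this naive sketch can call upstream more than once for the same key; Cachex also offers a fetch-with-fallback API that serializes those calls, which a real proxy would likely prefer.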
Allen_Wyma:
Thank you.
Adi_Iyengar:
Very cool.
Adi_Iyengar:
Allen, do you have any other questions?
Allen_Wyma:
No, actually, I was just getting sidetracked because I saw that Gun was actually done by the guy who did Cowboy. I forgot his name.
Thibaut_Barrère:
Ah, yeah.
Allen_Wyma:
Like you said, makes sense. And then I just remembered about Bandit coming out, and I was like, yeah, but Bandit is actually on the server side, not on the client side, right, which is what this is all about.
Thibaut_Barrère:
Yes, yes, it seems to be a Plug-compatible alternative to Cowboy or something. Yeah, I didn't try it yet, but it looks interesting.
Adi_Iyengar:
Right, right.
Allen_Wyma:
Yeah, exactly. It's supposed to be much faster. So I'm kind of curious if we're going to go that direction eventually.
Adi_Iyengar:
I wonder if they'll write a client named Sword, just to keep up with the Cowboy/Gun, Bandit/Sword pattern, whatever. Bandit sword?
Thibaut_Barrère:
Maybe, maybe.
Adi_Iyengar:
Do bandits use swords? Or?
Allen_Wyma:
I think maybe in India they use swords. I don't know. I did see Indiana Jones using swords there, right, in India.
Adi_Iyengar:
Oh man, that's like the first India reference you've made here, Allen, with me, so, uh, nice. Hahaha.
Allen_Wyma:
What? I don't know. This is what I see in the movies, man. I just follow the movies. Uh, no, but I don't know. Like I'm always kind of confused about, you know, Finch and Mint. And I feel like we're going to have another client library. I don't know why we've got so many HTTP client libraries out there. It seems like, I don't know, like maybe JavaScript has their framework of the week, and we've got our HTTP client of the week kind of going on. It's not that many, right? But it's quite a few, right?
Thibaut_Barrère:
Yeah, I mean, I don't think there is a single stack where there aren't plenty of HTTP clients. It looks the same everywhere. I think there is a flavor of personal taste, but as well, HTTP is quite complicated, actually, when you dive into it. And that's what I like about Req and Finch and Mint. Mint is the low-level stuff. Finch is slightly higher level and uses Mint. And Req is even higher level and uses Finch. So it all depends on what exactly you have to do, but at least they are working together, which is nice.
Adi_Iyengar:
Right, right, right.
Thibaut_Barrère:
And it's tricky because of our...
Allen_Wyma:
So, that's to say we've got Russian dolls, basically?
Thibaut_Barrère:
Yeah, yeah, exactly. For those three, yes, yes.
Adi_Iyengar:
Yeah, I mean, I forgot who said this. I think it was one of the Rust guys. That the number of HTTP clients indicates how early a new language is in its adoption. If a new language has a lot of HTTP clients coming out, it's being used a lot and has a lot of opinions. But as the language gets older, the number of HTTP clients that have been adopted decreases, and all these new ones that come up, by people who think they can maintain them but don't end up maintaining them, slowly retire. So it's good to use the ones that are built by people who are more prominent in the community or are part of the core team. Again, not that we're anywhere close to that point where these clients will go away, but yeah. Finch, Mint, and Req are at least in that category. All those guys are known and are connected to José somehow.
Thibaut_Barrère:
Yeah, and I mean the issue counts are low as well, because they are more recent. So it's maybe in a way easier to achieve. That's something I look at when I pick libraries: trying to pick well-maintained libraries where the maintainers are active, no matter where they work. Because the main cost for software is maintenance. And I've been bitten by things like a server breaking at 4 AM because Twitter upgraded their TLS stack, and we discovered that our HTTP client was not upgradable at all. It was a Twitter spider, along with all the other social modules. When you have old code relying on old HTTP clients, you are looking for trouble at some point if you work with the internet. So, yeah, I tend to choose things that are well maintained and which I can upgrade easily on my side as well.
Allen_Wyma:
There's also another issue too that just came to my mind. Do you remember there was a release of OTP that came out suddenly to fix an issue with SSL certificates? Do you remember this?
Thibaut_Barrère:
Yes, it was a global CA root expiration or something, or a revocation. It was in September
Allen_Wyma:
Yeah.
Thibaut_Barrère:
last year, I think. Something like that.
Allen_Wyma:
Yeah.
Thibaut_Barrère:
Yeah, yeah, yeah. I remember that because we had to deal with that. So indeed, yeah, yeah, yeah. I remember that. This is the type of things that can happen on the Internet.
Allen_Wyma:
Yeah, so even that kind of problem, I mean, luckily, that's in OTP, not some library that nobody updates but everybody uses. So I mean, this kind of stuff is, yeah, that's a huge problem, right? So was it difficult for you guys to fix that problem? Or was it basically just upgrading OTP? Because I think it was just a patch release at the time,
Thibaut_Barrère:
We were
Allen_Wyma:
right?
Thibaut_Barrère:
able to upgrade quite easily because the technical debt had been paid down, and at that time our release process was quite good already. Today too. We use Docker, which is not the most popular option, for legacy reasons: our hosting provider is French, and they provide Docker support. Instead of using their built-in Elixir support, we use their Docker support, which allows us to finely control what we want to put inside the Docker image. So we have a workflow with GitHub Actions. We use the hexpm Docker image as a base, so you can choose the Elixir and the OTP version. And then down the line, we add Node.js, whatever we need, all types of binaries. We add Java programs, because we shell out to Java programs as well in the app. And this allows us to be quite independent and to be able to upgrade at our own pace. Because imagine we were depending on the OTP version that our hosting provider gives us: we would probably have been stuck.
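A skeleton of that kind of Dockerfile, with illustrative version tags and packages; the real image presumably adds more, including the Java tools mentioned:

```dockerfile
# The hexpm base image pins both the Elixir and OTP versions, so upgrades
# happen at the team's own pace (the tag shown here is just an example).
FROM hexpm/elixir:1.14.3-erlang-25.2.3-alpine-3.17.2

# Extra runtime tools the app shells out to (Node.js for assets, a JRE
# for the Java programs mentioned above) — package names are illustrative.
RUN apk add --no-cache nodejs npm openjdk11-jre

WORKDIR /app
COPY . .

ENV MIX_ENV=prod
RUN mix deps.get --only prod && mix compile

CMD ["mix", "phx.server"]
```

Controlling the whole image is what makes events like a sudden OTP patch release manageable: bump the base tag, rebuild, redeploy.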
Adi_Iyengar:
Yeah, I'm surprised that you said that you use Docker, which is not a popular way of doing it.
Thibaut_Barrère:
Not at all.
Adi_Iyengar:
I think Docker is really interesting. From my experience talking to companies that do Elixir, I know only one company that uses Elixir that doesn't use Docker. Now I know, well, you also use Docker, so I still know only one. So what is the popular way of deploying Elixir in your opinion?
Thibaut_Barrère:
I mean, when I first came to Elixir, my very first app was on Heroku with buildpacks. So it's easy: basically you push to your branch, and when you deploy, the buildpack will run mix compile, and you work like that. As I did more consulting, I came across places where people were running mix phx.server almost manually on the target, no releases, or something with Capistrano coming from the Ruby world, so basically a script that allows you to pull the right version and rerun the compilation. So yeah, all types of possibilities. I mean, you can use buildpacks or use a PaaS, Platform as a Service; some PaaS provide Docker support, so it makes it easy to provide whatever you want. And compared to that, the option which is often mentioned on the Slack for the Elixir language is basically: you have your own build server, you create releases, and then you deploy them to the servers. Well, I've done it a couple of times, but I have not seen it much used outside of hardcore tech channels.
Adi_Iyengar:
Right, so having adopted Elixir in 2015, I guess we have gone the Capistrano route. In
Thibaut_Barrère:
Okay.
Adi_Iyengar:
fact, the first Elixir Capistrano-style library, I built that. It's still, I think, open source. It's called AKD. That was OK. We went the build server route, which actually worked OK for some time: build on a server and then deploy. But again, you have to make sure the versions of the server libraries and dependencies are the same
Thibaut_Barrère:
Yeah.
Adi_Iyengar:
on both those servers, right? Because if OpenSSL is different, boom, it breaks suddenly, right? Or if any key libraries are different, it breaks. And mix phx.server is just silly, I think. Having to maintain Elixir and OTP on a live server is so much work. So much work. Yeah. Again, that's why I'm surprised. In places you've worked at, in your circle, it might not be. But every place that I have worked at, they may or may not start with the mix phx.server route, but eventually they go to Docker. Eventually they
Thibaut_Barrère:
Okay.
Adi_Iyengar:
realize, oh, we can't maintain this. Right now, I'm advising for companies that use Elixir. One of them used to use the build server route and
Thibaut_Barrère:
No, no.
Adi_Iyengar:
run the build manually. But just seeing how convenient GitHub Actions is, like you were saying, right? Build and publish an image, and have a watcher. Even without Kubernetes, have a watcher — I think it's called Watchtower — that just watches for a new image version and auto-deploys it. That setup is so simple. And Heroku also provides a buildpack for containers. So the combination of all this stuff... I feel there's almost a consensus, in my circle at least, that Docker is the way to go. But yeah,
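As a sketch of that watcher setup — the container name here is a placeholder — Watchtower itself runs as a container, needs the Docker socket, and polls the registry for newer tags:

```shell
# Run Watchtower alongside the app; when a newer image appears in the
# registry it pulls it and restarts the watched container.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 300 \
  my_app_container   # only watch this container; omit to watch all
```

This is the "poll and redeploy" loop Adi describes, with no orchestrator involved.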
Thibaut_Barrère:
Yeah.
Adi_Iyengar:
I was just curious when you said it's not.
Thibaut_Barrère:
Actually, what I quite like about Docker is that there is no vendor lock-in with our hosting provider because of that option, which means it would be quite easy to go to another provider. So we stay with them because we like them and things are working fine. On the other hand, more than a couple of times we've had issues with the fact that Docker is a bit of a black box. Sometimes the container will crash, and because we are not the hosting provider ourselves, it can be tricky to determine what has happened inside the box. That's less the case when I manage a server directly: it's more work, but often I have more control. I can run more stuff on it, and generally have more control. So it's a kind of balance: less work, but less observability, and sometimes a bit less performance as well. On the other hand, with my own bare metal server, OK, I need my own build server, but I fully control it. So it depends on the needs of the client, and on staffing: do they have people to manage those servers as well? Because it's nice, but I don't want to leave clients with SSH access which is not secured and patches which are not applied and everything. So sometimes less is better.
Adi_Iyengar:
Totally, totally. Yeah, for the stuff you're talking about, I feel like good logging helps — both the container-level logs and the host-level logs, the docker logs output plus periodic docker inspect output. Heroku, for example, calls it log drains: it drains all the container logs every minute to a logging site. Making sure those are set up properly can probably cover 99.9% of cases. I'm not saying 100%, right? There's still something that might happen between host and container. But yeah, I see what you're saying. If you have a DevOps team with experience maintaining bare metal, and you have complex, I guess, intra-host configuration, for lack of a better word, then it makes sense to take the bare metal approach.
Thibaut_Barrère:
Yeah, but so far, for transport.data.gouv.fr, my main project, the Docker route works quite well. So we are full Docker at the moment, with the exception of the database, which is hosted separately. Our main trouble is that we don't have the host logs, because we don't manage the host ourselves. We only have the container logs. So sometimes when we have a crash, we need to investigate without all the elements. So we are in the process of adding Prometheus metrics to get more insight into what is happening inside the box, etc. Because otherwise we are a bit blind at this point. But on...
Adi_Iyengar:
Yeah, I think
Thibaut_Barrère:
yeah. Sorry, go
Adi_Iyengar:
I
Thibaut_Barrère:
ahead.
Adi_Iyengar:
was saying, a lot of these small providers who manage the containers, what they usually do is spin up a VM inside which they spin up a container. And the entire VM's logs — VirtualBox logs, say — are for you to see. That way the security is implicit: you're only able to see your own VM's logs. Something like that would allow you to see the host logs — not really the host, but the host of the container — without giving permissions to the entire machine, because they have a virtual machine in which a container is running. I have seen people take that approach. Again, that's a lot of overhead for them to maintain, obviously. Sorry, you were saying something when I interrupted you?
Thibaut_Barrère:
I can't remember actually, because I have a bit of COVID, I think.
Adi_Iyengar:
Oh, I'm so sorry to interrupt you, sorry.
Thibaut_Barrère:
Yeah, but a slight, small version. So that's cool. Yeah. But I must say, I was happily not surprised, because I had done that on bare metal servers before, and even just with basic Docker we are handling decent traffic, including with proxying, which can be more demanding, because in the proxying process you get to keep the HTTP connection open for longer durations. And at this point we do not have huge problems. We just have one specific architecture where we have two nodes which are not really connected in the Elixir cluster sense — it's not a cluster. They are connected via Oban, via the database. So our main site takes in all the traffic: the proxying, the catalog of data that we see, the maps, all the tools. And then when we have heavy jobs, sometimes we shell out to converters or analyzers of data, which are Rust programs. Sometimes we have Java programs, so they run on the JVM. What we do is shell out with a library called Rambo. Rambo is able to spawn a process, run it, capture the logs, and manage zombie processes properly. The problem we face with that architecture is that for those Java or Rust programs, the file that we process can be quite big and the RAM can explode a bit. So we use two nodes so that if the Oban worker crashes for some reason, it will get back up on its own without causing problems to the HTTP traffic on the other node. So it's a bit low tech, but it works nicely so far. We don't really have an elaborate Elixir cluster. We are not able, due to Docker, to limit a third-party program's RAM consumption, by the way, which is a big problem with Docker in unprivileged mode. So that's our low tech solution. While staying in the PaaS, we use one front and one worker, which is very Ruby by the way — Sidekiq and the HTTP server, in a way — but yeah, it works well for us so far.
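As a minimal sketch of that shell-out pattern — assuming the rambo dependency, and with a placeholder binary name and worker module, not the project's real code:

```elixir
# A hypothetical worker that shells out to an external converter via
# Rambo. Rambo runs the OS process, captures stdout/stderr, and reaps
# the process so it cannot become a zombie.
defmodule MyApp.ConvertWorker do
  def convert(input_path) do
    # "gtfs-converter" stands in for the real Rust/Java binary.
    case Rambo.run("gtfs-converter", ["--input", input_path]) do
      # Zero exit status: Rambo returns {:ok, result} with captured output.
      {:ok, %{out: out}} -> {:ok, out}
      # Non-zero exit status: the result carries the status and stderr.
      {:error, %{status: status, err: err}} -> {:error, {status, err}}
    end
  end
end
```

Since the worker is an ordinary function, it can run under Oban on the second node, exactly the isolation described above: a crash from an out-of-memory converter takes down a job, not the HTTP-serving node.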
Adi_Iyengar:
Yeah, I mean, whatever works, right? Low tech, like I said, less is more early on. You don't need to over-engineer. And that's, I think, one thing people do a lot, I feel, in the Elixir community. People like to try out new things. Oh, I want to try out, what is it, Fly? Fly, oh, it's edge computing. I want to try all that stuff. How many requests per minute do you expect? Maybe at most one. You don't need edge computing. So yeah.
Thibaut_Barrère:
Yeah.
Adi_Iyengar:
It's totally about knowing where to stop, knowing what's a good place for your app. Because like I said, any code — not just code, any technology — is adding maintenance. Any technology, any code, is tech debt, right? So keeping it simple, keeping your tech stack simple, is definitely the way to build a system, especially for a small startup.
Thibaut_Barrère:
Yeah, and the good news — and I'll share another link — is that we have a live vehicle position map implemented on the site. I'll share the link with our listeners. I did things like that before in other languages. The fact is, with Elixir, we didn't even have to think about the scalability much, because everything worked so well. So basically what this map does is poll, I don't know, 60 or 70 protobuf binary files — protobuf is like JSON but binary, more or less, to keep things simple. We poll them every 10 seconds, and they give the vehicle positions of buses in some cities. Some refresh every 20, every 30 seconds, etc. So sometimes they move fast and other times they don't. To implement that — and I'll share the code link, and maybe I'll do an ElixirConf talk about it, because it could be fun to implement live — we just have one GenServer responsible for polling the 70 HTTP sources. We decode them, and then for each of them we broadcast the data to the JavaScript clients using a Phoenix channel. Each JavaScript client, for each data set, has a layer using deck.gl, which is a mapping technology that works on top of Leaflet. So basically, whenever the GenServer pulls an update into one of the feeds, we just broadcast the whole thing to the other end and it refreshes. Little work. I mean, on some other stack you would have to have a special something to grab the feeds, and here you have everything in one process: the fetching, the pubsub to the clients via the sockets, the presence and the channels, and it all goes through to the JavaScript as a map. It was surprisingly easy to create, and it's really scalable as well, because the polling occurs just once for everyone.
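A rough sketch of that single-poller design — module names, topic, and the fetch helper are invented for illustration; the real code is in the project's open source repo:

```elixir
# One GenServer polls every feed on a timer and broadcasts the decoded
# positions via Phoenix.PubSub, so the HTTP polling happens exactly once
# no matter how many browsers are watching the map.
defmodule MyApp.VehiclePoller do
  use GenServer

  @interval :timer.seconds(10)

  def start_link(feed_urls) do
    GenServer.start_link(__MODULE__, feed_urls, name: __MODULE__)
  end

  @impl true
  def init(feed_urls) do
    send(self(), :poll)
    {:ok, feed_urls}
  end

  @impl true
  def handle_info(:poll, feed_urls) do
    for url <- feed_urls do
      # fetch_and_decode/1 is a placeholder for downloading and decoding
      # the GTFS-RT protobuf payload.
      positions = fetch_and_decode(url)
      Phoenix.PubSub.broadcast(MyApp.PubSub, "vehicle_positions", {:positions, url, positions})
    end

    Process.send_after(self(), :poll, @interval)
    {:noreply, feed_urls}
  end

  defp fetch_and_decode(_url), do: []
end
```

A Phoenix channel subscribed to `"vehicle_positions"` then pushes each broadcast down the socket to the deck.gl layer, which is why adding viewers adds no load on the upstream feeds.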
Adi_Iyengar:
Right?
Thibaut_Barrère:
Yeah, so my point is not to brag about a map or something; it's to say that it's quite low-cost maintenance to have something like this up and running. And this is very encouraging, to develop more real-time visualization. It's not a big deal in Elixir and in Phoenix. That was quite an amazing experience. So
Adi_Iyengar:
Yeah, it looks brilliant. The delay after clicking — I'm clicking on the BNLC checkbox — the delay is not much at all. You said 70 HTTP requests; that's pretty good for, effectively, a single-threaded part of the process, since there's only one GenServer that makes all the calls. So yeah, I think it's very snappy, and it looks great. Yeah.
Thibaut_Barrère:
Cool, cool!
Adi_Iyengar:
Awesome. I guess one last thing I'd like to talk about, since we talked about HTTP clients and HTTP calls and API calls, I think we have briefly touched on this before. But how do you approach testing those API calls? What's like, you know, yeah, like
Thibaut_Barrère:
Oh yeah.
Adi_Iyengar:
automated testing. Yeah.
Thibaut_Barrère:
Big topic. Okay, so initially the app was using ExVCR a lot, and it was in the pre-Mox era. So basically the app, as it was, wasn't very oriented toward testing. A part of the work I did was to extract behaviours for the parts that make external calls. To get inspiration there, first I read the Mox articles and the foundational article. What I realized first was that the app was making HTTP calls, and that was what was making the testing difficult. So then I looked at the hex.pm source code — the package manager, which is open source itself — and I saw how they implement various parts. They use Mox, and they have testing implementations and production implementations. It's explained in the Mox project as well. So basically what I did was extract proper behaviours for all the paths doing external calls, try to find the right layer of code with the right level of abstraction for that, and gradually introduce mock implementations, stubs and mocks, which is easy to do as long as you have extracted proper behaviours in your code. Some libraries in the Elixir world are designed this way already; if I remember well, HTTPoison, for instance, already exposes a behaviour. So yeah, the big testing part was actually removing ExVCR calls, because I find them hard to maintain. It's hard to replay sometimes, and you can keep sensitive data in there without noticing. It can require things to refresh; it can be complicated. That was the initial state. Then we moved to mock — the library called Mock — which actually patches the modules in the BEAM at runtime so you can intercept the calls. The problem with that approach is that your tests cannot be async, because global state is changed. And it was just temporary patching. So gradually we have moved most of our app testing to Mox. So everything has a behaviour.
Everything has a clear boundary, everything can be tested quite easily, and it will raise an error if the expectation is not set on the mock. So I'll share our current configuration. A lot of things are mockable, with a testing implementation and a production implementation. And one benefit of that design is that if you need a development-specific implementation of the behaviour — let's say you want to work locally with something that would normally work with S3 and a bucket — it's easier to implement something that will work in development with a local disk-based folder instead. And I find there is a lot to gain in taking the effort to extract boundaries, create behaviours, and work with them, because as you do, the test suite becomes better, and the boundaries and the communication between developers are better. But on legacy apps it's sometimes a lot of work to get there. So yeah, that's it mostly.
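A minimal sketch of that behaviour-plus-Mox pattern — module names here are invented for illustration, not taken from the actual app:

```elixir
# 1. A behaviour marks the boundary where external calls happen.
defmodule MyApp.HTTPClient do
  @callback get(url :: String.t()) :: {:ok, binary()} | {:error, term()}

  # Dispatch through config, so each environment picks its implementation:
  #   config :my_app, :http_client, MyApp.HTTPClient.Real  # prod
  #   config :my_app, :http_client, MyApp.HTTPClientMock   # test
  def get(url), do: impl().get(url)

  defp impl, do: Application.get_env(:my_app, :http_client, MyApp.HTTPClient.Real)
end

# 2. In test_helper.exs, Mox generates the mock from the behaviour:
#
#      Mox.defmock(MyApp.HTTPClientMock, for: MyApp.HTTPClient)
#
# 3. A test then sets an explicit expectation; an unexpected call, or a
#    missing one when verify_on_exit! is used, raises:
#
#      expect(MyApp.HTTPClientMock, :get, fn _url -> {:ok, "body"} end)
```

Because the mock is generated from the behaviour, a callback that drifts out of sync with the contract fails loudly, which is what makes the boundaries "honest" between developers.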
Adi_Iyengar:
Yeah.
Thibaut_Barrère:
A lot of Mox, mostly, these days.
Adi_Iyengar:
Gotcha. Yeah, I mean, one thing I don't like about Mox — with the overall lack of typing in Elixir — is that it is prone to your mocked responses not being completely accurate. That's where I like ExVCR, where you at least have a snapshot of the API at a point in time. Generally how I approach testing is: have the client itself, the lowest layer of your HTTP requests, be tested using ExVCR. And that being a behaviour,
Thibaut_Barrère:
Oh, I see.
Adi_Iyengar:
you can — like, any further modules outside will use the Mox mocks to make the tests simpler, easier to maintain. Yeah.
Thibaut_Barrère:
Yeah, yeah, yeah. I see your point.
Thibaut_Barrère:
Actually, that adds something to my reply. For low-level clients — for instance, the proxy feature of the site I'm working on — it needs to work very well, so we need to test it at a lower level. In that case, if I remember well, I use Bypass, where I make a real HTTP query. Because — I see your point — partly because of the lack of typing, and partly because if you don't go deep enough in the stack, you don't have a real integration test, and it can be troublesome. So when I need to go that deep, I can go further and use Bypass, or in some projects I have a custom Plug server which I start myself inside the suite, so I can have a real HTTP query, sometimes with SSL, to test the stack from beginning to end. And in some even more complicated setups, I have full end-to-end testing with real connections. For instance, if I were to implement an S3 library, which is low level, I really want to connect to the real thing and make sure I didn't break anything. So yeah, I see exactly
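A sketch of that Bypass approach — assuming the bypass test dependency, with an invented client function — where the test stands up a real local HTTP server:

```elixir
# Bypass starts a real HTTP server on a random local port, so the full
# client stack (connection handling, headers, body parsing) is exercised
# by a genuine HTTP round trip instead of a mocked module.
defmodule MyApp.ProxyTest do
  use ExUnit.Case, async: true

  setup do
    {:ok, bypass: Bypass.open()}
  end

  test "fetches the upstream resource", %{bypass: bypass} do
    Bypass.expect_once(bypass, "GET", "/feed", fn conn ->
      Plug.Conn.resp(conn, 200, "feed-body")
    end)

    # MyApp.Proxy.fetch/1 is a placeholder for the real client code.
    assert {:ok, "feed-body"} =
             MyApp.Proxy.fetch("http://localhost:#{bypass.port}/feed")
  end
end
```

Unlike Mox, nothing in the client is swapped out here, which is why it suits low-level code like a proxy where the transport itself is what's under test.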
Adi_Iyengar:
Exactly.
Thibaut_Barrère:
what you mean.
Adi_Iyengar:
Exactly. Yeah, I think at that point, end-to-end definitely works. One thing I do like to mention with ExVCR: I really feel it's incomplete to use ExVCR without a periodic run on a different trigger — say, a GitHub Actions cron trigger that runs the tests every night, makes the actual API calls, and refreshes the cassettes. That way your cassettes are up to date, and your CI uses the actual responses. The thing is, you need to set it up. If you're using an external service, make sure you hide sensitive data — like you said, have a way to make your responses safe from a Git perspective. But yeah, Mox is definitely the best approach for anything beyond the lowest-level HTTP call; Mox is the way to go. I mean, mocks are some kind of a way to do dependency injection or whatever, however people like
Thibaut_Barrère:
Yeah, yeah.
Adi_Iyengar:
to mock their actual module. Yep.
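A sketch of that lowest-layer ExVCR setup — assuming the exvcr dependency with the Hackney adapter, and a placeholder URL — where the first run records a cassette and later runs replay it:

```elixir
defmodule MyApp.LowLevelClientTest do
  use ExUnit.Case
  # ExVCR intercepts the HTTP adapter and records/replays responses
  # as JSON "cassettes" on disk.
  use ExVCR.Mock, adapter: ExVCR.Adapter.Hackney

  test "GET returns the recorded payload" do
    use_cassette "api_status" do
      # First run hits the network and records
      # fixture/vcr_cassettes/api_status.json; later runs replay it.
      {:ok, 200, _headers, ref} = :hackney.get("https://api.example.com/status")
      {:ok, body} = :hackney.body(ref)
      assert body != ""
    end
  end
end
```

Deleting the cassette files (or running with the recorder forced on) in a nightly cron job is one way to implement the refresh Adi describes, so stale snapshots get caught outside the normal PR flow.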
Thibaut_Barrère:
I find that using that has accelerated our test suite quite a lot. But then again, if I have a project with very strong, I don't know, legal requirements to go through a real connection, etc., I will definitely have more end-to-end tests. But I want something fast, so that when we create pull requests, the test suite can be fast and we keep the fact that we really run the tests locally. You know, I've seen teams where the suite has gotten very long, takes a lot of time. And it can really be trouble, because sometimes even small teams, five to ten people, are not able to run the tests locally anymore because of all the requirements to connect everywhere. This means the build server becomes the only way to run the tests, and at some point it's too late. I think even Martin Fowler has a name for that anti-pattern, in the Radar publication, I think. So, the dependency on the build server, where you are stuck, and if it goes down you can't ship anymore, you see? So, yeah.
Adi_Iyengar:
That makes sense. That is definitely a code smell.
Thibaut_Barrère:
Ha ha ha.
Adi_Iyengar:
I think one thing, yeah — being able to pull a repository, run maybe a setup command, like a make command that sets up your environment, and run the tests within 10 minutes, is the prerequisite for all our applications. Because we rely heavily on contractors as a startup. We don't want to waste their time setting up their environment for more than an hour, right? Especially in today's world, with Docker, Docker Compose, all these tools available. Being able to run tests locally, run everything locally right away without any external dependency — or if there is a dependency, bringing it in as part of a Docker Compose development environment — that is a total prerequisite we have for all our projects. So I can totally relate to that. What was that? You mentioned that someone wrote about this. Can you say that last part again?
Thibaut_Barrère:
No, it was,
Adi_Iyengar:
you mentioned a person who called this an anti-pattern.
Thibaut_Barrère:
yeah, it's Martin Fowler. If my memory serves well, he has an article somewhere stating that the dependency on CI in large organizations is becoming an anti-pattern. I will try to find it back for our listeners and share it with
Adi_Iyengar:
Hmm.
Thibaut_Barrère:
you.
Adi_Iyengar:
Very cool.
Thibaut_Barrère:
If he had to take the time to write this, it must really be a widespread pattern, sadly.
Adi_Iyengar:
Yeah. Alan, do you have any other thoughts or questions?
Allen_Wyma:
Mm-hmm. No, so this article from Martin Fowler sounds pretty interesting. If you can pass it over, it'd be nice to read.
Thibaut_Barrère:
I'll try to find it back, hopefully.
Allen_Wyma:
I feel like not enough companies actually use CI servers. Most people in Elixir land do it, but if you go to PHP or JavaScript framework people, where to me the barrier to entry is really low, a lot of companies just kind of come out of there with no formal training or whatever. They just don't do CI and testing and everything else. So it's interesting. And
Thibaut_Barrère:
I mean,
Allen_Wyma:
I'm also learning a lot from you guys about Bypass and so on. Go ahead.
Thibaut_Barrère:
yeah. From my perspective, CI has developed well in a lot of places. I started doing CI in 2000 with tools like CruiseControl. And at that time, you had to fight a lot with the organization to bring it into practice. Now you have things like GitHub Actions, CircleCI and everything, which are openly available. So I'm quite optimistic about the state of all that. I mean, every single open source project can start its own suite and accept PRs more easily, sometimes with a setup we would have dreamed of in the past. Like: OK, run me this on five flavors of Ubuntu in less than two minutes, in parallel, and it works. A long time ago, you would have paid heavy bucks for that. So it was
Adi_Iyengar:
Yeah.
Thibaut_Barrère:
a full job. I mean, release manager, release engineer — it still exists, don't get me wrong, but you get a lot of release management automatically by using GitHub Actions or CircleCI compared to what you had to set up in the past. So I'm quite happy about that, because it was a pain in the ass not to have it.
Adi_Iyengar:
Yeah, I can't imagine life without CI. Every place that I worked at, and every place that I advise right now — if they didn't have a proper CI, that's the first thing I'd make them do. I don't care what language they use. It's so
Thibaut_Barrère:
Hahaha
Adi_Iyengar:
easy these days with GitHub Actions. Like, Python — I had zero experience with Python, and I think they use FastAPI — it took me five minutes to set up the CI. It's that simple these days with GitHub Actions. Even for something you don't know, if you just understand the concept, it's easy. So there is no excuse not to have CI these days.
Thibaut_Barrère:
Yeah, exactly.
Allen_Wyma:
and run the test cases and all of them failed, because they never had a CI server to run them and nobody bothered to run them. Run linting, run Credo or whatever, you know, and that's exploding on you. So that's the next one.
Adi_Iyengar:
Right, exactly.
Thibaut_Barrère:
Yeah, for the record, I will share that as well if I remember. We are building the Docker image in GitHub Actions with GitHub releases. And we have a bit of testing in there — just, you know, basic stuff like running a command and checking the output. So we do CI for our Docker image as well. And it's really, really nice to have the base image hosted on GitHub directly; everything is so smooth compared to a few years ago. So I'm really happy about that as well. So we can focus on the job.
Adi_Iyengar:
I have not used GitHub's container registry. Do they have security checks and stuff? I know there's something called Quay that does it — I've used Quay. I know Docker Hub doesn't have security checks on images and stuff like that, but I'm not sure if GitHub does, because that would be a game changer.
Thibaut_Barrère:
I'm not 100% sure I understand what you're asking, actually, but what we do is
Adi_Iyengar:
Oh.
Thibaut_Barrère:
we have... Yeah, you mean is the package protected behind some authentication?
Adi_Iyengar:
No, no, no, my bad. So not private registry. So like
Thibaut_Barrère:
Okay.
Adi_Iyengar:
Docker images — vulnerability checks on images based on versions and stuff. I'm not sure if you know it: it's Q-U-A-Y, quay.io, pronounced "key". I think Red Hat builds it. It does multiple levels of security checks. First, at a high level, it checks if there are any vulnerabilities reported in the base image or in any packages installed in the image. And it runs a container via the run command for just a few seconds, inspects the logs, and based on the logs determines if there are any other vulnerabilities. So
Thibaut_Barrère:
Okay.
Adi_Iyengar:
like a security check for your Docker containers.
Thibaut_Barrère:
Okay.
Adi_Iyengar:
I'm curious if GitHub has that, because I've never used it.
Thibaut_Barrère:
I'm using Trivy — I don't know how they pronounce it, if I remember the name right. This is something you can try; it's a scanner from Aqua Security, if my memory serves well. You can use it on Docker images. So this helps me — it's not fully automated in our case right now; we do releases from time to time to get patches. But it allows me to assess the number of vulnerabilities reported in the latest image that we use, and if we have too many, to go further and investigate. So it's not perfect, but at least the release process is automated, which is nice to have already. And it's combined with the overall maintenance story of Elixir, which is itself quite nice — things upgrade without breaking. I feel that we have a
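For reference — the image name below is a placeholder, and the flags follow Trivy's CLI as I understand it — that kind of scan can be a one-liner in a release pipeline:

```shell
# Scan an image; fail the pipeline (exit code 1) if any HIGH or
# CRITICAL vulnerabilities are found in OS packages or dependencies.
trivy image --severity HIGH,CRITICAL --exit-code 1 ghcr.io/my-org/my-app:latest
```

Running this against the image right after the GitHub Actions build step is one way to turn the manual "assess and investigate" pass into a gate.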
Adi_Iyengar:
Yeah.
Thibaut_Barrère:
sweet spot here. It's quite nice.
Adi_Iyengar:
Awesome. Yeah, if you could find the name of that service — oh, Trivy, it's right here, got
Thibaut_Barrère:
Trivy, Trivy.
Adi_Iyengar:
it. Awesome. Very cool. Thank you for sharing that. I had no idea.
Allen_Wyma:
If you pay for Docker Hub, you can get vulnerability scans.
Adi_Iyengar:
Docker scan,
Thibaut_Barrère:
Yeah,
Adi_Iyengar:
right?
Thibaut_Barrère:
actually we were on Docker
Allen_Wyma:
It says 300
Thibaut_Barrère:
Hub
Allen_Wyma:
hub vulnerability scans.
Thibaut_Barrère:
earlier. We were on Docker Hub earlier, but I had massive troubles for various reasons. One reason is that our state startup is a member of a larger group of state startups, and in Docker Hub, the way you can split privileges among people in your team can be complicated. So we solved it by moving to GitHub, actually. Which means that if you have the right access on the repository — if you are able to edit the Dockerfile — then you have the rights to publish the image on the registry associated with it, which makes for nice isolation. We are a team of three developers, and the developers all do the ops as well; we don't have a huge team. So that level of separation — one GitHub-hosted container registry package per repository — works really nicely for us. Much better than Docker Hub, for sure.
Allen_Wyma:
Does anybody actually ever pay for Docker? I don't know.
Thibaut_Barrère:
Nope.
Allen_Wyma:
No, nobody?
Thibaut_Barrère:
Nope.
Adi_Iyengar:
We pay for Quay, but Quay provides us a lot of these things that Docker Hub doesn't have, like RBAC and stuff. But yeah, Docker Hub, I've never paid for it. I mean, you can even host private registries now without any limits, right, in Docker Hub. It didn't used to be that way — there was a limit — but now you have unlimited private images. So there's no need to pay.
Allen_Wyma:
No, I still see you have unlimited public but limited private.
Adi_Iyengar:
I remember there was a limit of 5. Yeah. Sorry, go ahead.
Thibaut_Barrère:
I mean...
Allen_Wyma:
Yeah, there could be five, but I don't know.
Thibaut_Barrère:
I was a bit upset to see that they took so much time to implement two-factor authentication. So yeah, that was a part of the reason as well why I moved to something else.
Adi_Iyengar:
Nice.
Thibaut_Barrère:
Sorry — I have friends working at Docker, by the way, so sorry about that if you're listening to this podcast.
Adi_Iyengar:
I think we're looking good, Alan, right?
Allen_Wyma:
Yeah, if nothing else, we can transition over to picks, right?
Adi_Iyengar:
Awesome. Yeah, let's do.
Thibaut_Barrère:
Yeah, thanks for inviting me. Sorry.
Adi_Iyengar:
Oh, no. So yeah — we have something called picks, by the way, Thibaut. I don't know if you've watched the show before. We just pick — if you have a video game, something related to Elixir, a book that you're reading — anything you want to recommend to our listeners. It's something we do on the show towards the end. Alan and I can go first to give you more time to have your picks ready. But Alan, do you have any picks for us this week?
Allen_Wyma:
Yeah, so I don't know if you can see the back of my wall or not. I've recently been putting up some smart panels, so I'm working with Nanoleaf. I think it's relatively cheap
Adi_Iyengar:
Nice.
Allen_Wyma:
and they've got different sizes. So I got a bunch of hexagons and small triangles, but they also have squares and all kinds of stuff. So yeah, that's been keeping me busy. I think they're relatively cheap and work out quite nicely. So check them out: Nanoleaf.
Adi_Iyengar:
Very cool. I have a few picks today. First, a video game pick — a game that I picked up again. It's a hidden gem called Kingdom Come: Deliverance. I'm not sure if you guys have played it, but it's an RPG. If an actual engineer were to make a game and have a good amount of budget — like a "think like an engineer" game — this is that game. It's surprisingly inexpensive right now because it came out a few years back, but they've had a lot of bug fixes in the last few years. It looks really good on PS5. So highly recommended if you're looking for an RPG — and if you're also looking for punishment; it's a hard game too. That's my first pick. My second pick: if anyone is looking for part-time Elixir contracting roles, reach out to me. At my company, we have a ton of work right now. We
Thibaut_Barrère:
Cool.
Adi_Iyengar:
have a lot more work than engineers. So if you're interested in the PETAL stack — we all use the PETAL stack: Phoenix, Elixir, Tailwind, Alpine, LiveView. Alan actually, in January, kind of pushed me to investigate Tailwind again, and when I did, I ended up liking it. So we moved away from Bulma, which used to be our go-to CSS framework, to Tailwind. Yeah. And we have 100% code coverage. Our engineering team is pretty good, actually — can't say much about other departments, but the engineering team is actually good. So if you guys are looking for a part-time Elixir role and want to get some experience with the PETAL stack, reach out to me. We have a lot of work. So yeah, that's it for my picks. Thibaut, it's all you.
Thibaut_Barrère:
Yeah, thanks. I have one musical pick to share, which is Goran Grooves. It's a new virtual drumming plugin for making music. It's actually a set of properly recorded drums, with a kick, snare, and everything, and they have clever touches like a variable hi-hat for a human feel. I don't have stock options; it's just that I bought it this week and I find it really, really nice. I make music, and they sell those sounds as a plugin, plus MIDI loops on the side. I'm actually pretty sure I'll start using Elixir to trigger those drums directly, because I've played with Elixir and music in the past. The sound is really, really good, and I had a lot of fun making music this week with it.
Adi_Iyengar:
That's really cool. So, like you said, you use Elixir to play drums. Is there a DSL for that, or is it just raw API calls to start with, nothing sophisticated around it?
Thibaut_Barrère:
No, it's simple; I gave a few talks at ElixirConf in the past where I was just connecting to MIDI devices using a GenServer and hot code reloading. I'll share the talks with your listeners. And I plan to do more in the future. Mixing things like Axon and machine learning in Elixir together with drumming
Adi_Iyengar:
Nice.
Thibaut_Barrère:
would be quite exciting.
Adi_Iyengar:
That sounds awesome.
Thibaut_Barrère:
So I hope to make a conference talk one day with that maybe.
Adi_Iyengar:
Very cool. That actually reminds me of one of my conference talks from last year. I wanted to give everyone an example of metaprogramming, and I ended up building a DSL live to compose music. But I just used basic playback for the sounds. We could hook that up with this thing and change instruments, so those things could marry each other. I'm going to share a link to that one too. That's really cool; I had no idea about this. Thanks for sharing.
Thibaut_Barrère:
You're welcome.
Allen_Wyma:
I was going to say, this reminds me of a talk I saw quite a few years ago about a library called SchedEx. Have you ever heard of it? It's a scheduling library written in Elixir.
Thibaut_Barrère:
Oh, no.
Allen_Wyma:
Let's put it this way: I believe the guy who wrote Bandit actually wrote SchedEx.
Thibaut_Barrère:
Oh, cool.
Allen_Wyma:
He used the SchedEx library to play music.
Thibaut_Barrère:
Excellent!
Allen_Wyma:
I just remember it because it's very unique. He's got a pretty cool talk about that. You can check it out on YouTube; I'll see if I can find the link for you.
Thibaut_Barrère:
If you have the link, I'm interested. "Simple scheduling for Elixir," okay, I'll check that out. Mat Trudel, okay. That's the guy doing Bandit, by the way.
Allen_Wyma:
Yeah, that's what I was going to say. It's the same guy doing Bandit, right? Because I just looked up SchedEx and thought, that name looks familiar, his username.
Thibaut_Barrère:
Hehehe.
Allen_Wyma:
So, small world.
Thibaut_Barrère:
Well, thanks for having me and for all these exchanges.
Allen_Wyma:
Yeah, definitely.
Adi_Iyengar:
Thanks for coming, Thibaut, and thanks, Allen, for being here. That's it for today, folks. We'll see you next week, and hopefully we'll have Sasha back to host. Bye, all.
Thibaut_Barrère:
Bye-bye.
Allen_Wyma:
All right.