Sascha_Wolf:
Hey everybody and welcome to another episode of Elixir Mix. This week on the panel we have Allen Wyma.
Allen_Wyma:
Hello?
Sascha_Wolf:
and me, Sascha Wolf. And we have a special guest this week, and that is Michael Fich. Michael, hello.
Michael_Fich:
Hello, how are you guys?
Sascha_Wolf:
We are great. We are great. Thanks for asking. So why don't you tell the people who are listening to this right now, why you're on the show and what we are going to talk about today?
Michael_Fich:
Yeah, absolutely. So I am an Elixir developer based in Toronto, Canada. I work for a company called theScore, which is owned by Penn Entertainment. I've been working there for about three years, and earlier this summer I wrote a blog post about a problem we encountered trying to query records with deeply nested JSONB data in them. You guys came across it and invited me on the podcast, so I thought I'd share a little bit more about that.
Sascha_Wolf:
Yep, that sounds about right. That also sounds like something I've read. We've heard people on the podcast talk about Postgres in general a few times, but I'm not sure we've ever talked about Postgres and the whole JSONB side of things in detail. I mean, it's definitely something I've used myself, but not to the extent you are doing. So that was
Michael_Fich:
Yeah.
Sascha_Wolf:
what kind of caught our eye there. So why don't you give us the gist of how you came to work at the Shop? No, not the Shop, sorry. At theScore.
Michael_Fich:
TheScore?
Sascha_Wolf:
Good gosh.
Michael_Fich:
Yeah.
Sascha_Wolf:
Sorry, I didn't have enough coffee today. So, how you came to work at theScore, and how you ended up in this situation where you wanted to really dig deeper into querying JSON data inside of Postgres.
Michael_Fich:
Yeah, absolutely. It's kind of an interesting story, for myself anyway. Several years ago I was not in the tech field, but I wanted to transition into it. I ended up taking one of those tech accelerator slash dev bootcamp courses to renew my skills and take a shot at making a career out of it. Part of the reason I came across that was because I was looking to learn Ruby on Rails at the time. And I had learned about that from job postings at theScore. Years ago here in Canada there was a TV network from theScore, and I was a big fan of it. So I was trying to learn Ruby on Rails specifically to get into this company. At the time, after I finished the bootcamp, they were looking for people with much more experience than somebody just graduating, so I actually had to circle back to it years later, when I was better aligned. And when I did circle back, they were using Elixir, which was perfect, because it aligned exactly with what my interests had evolved into in the tech
Sascha_Wolf:
Ah, okay.
Michael_Fich:
industry. So it definitely happened to work out very well for me at the time.
Sascha_Wolf:
Do you know why they transitioned from Ruby to Elixir? I mean, it's a common story you hear all the time, right? But I'm always interested to hear the details of why specific companies make that transition.
Michael_Fich:
Yeah, I mean, it predates me a little bit, but one of the recurring themes I've heard, just through the rumor mill, is that there was a lot of competition for Ruby developers, and there were a few really big players here in the Toronto space. And again,
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
this was well before the pandemic and everybody going remote. And some of those larger players were recruiting every Ruby developer they could potentially go for, so it was difficult to compete for talent at the time. I think there were also people internally who were really pushing for it, who had done their own independent research. And I think this was six, seven years ago or something like that.
Sascha_Wolf:
So basically they also changed to Elixir to have an easier time hiring. That is interesting.
Michael_Fich:
Yeah, definitely. And I would say that I'm very involved with our interviewing process these days. I do know that a lot of the developers I speak with who are interviewing with us mention that we got on their radar specifically because they were looking for a company using Elixir.
Sascha_Wolf:
Mm.
Michael_Fich:
Kind of similar to myself: when I circled back to theScore, I was looking at them again specifically because they were one of the local companies who were very prominent in that space. So it definitely helps with recruiting for some people.
Sascha_Wolf:
That's also been my experience, to be honest. I've been involved in hiring at different companies, and when you're very outspoken about your usage of Elixir, saying, hey, this is our number one language, then people come to you specifically. Especially if you hire remotely, then even more so.
Michael_Fich:
Absolutely
Sascha_Wolf:
Okay. So now, how long have you been at theScore, and how did you end up in a situation where you needed to write complex Postgres JSON queries?
Michael_Fich:
Yeah, so I've been at theScore for a little over three years now; I just passed the three-year anniversary a few months ago. My current role is as a technical lead for our promotions team. I guess I should give a little bit more background on that. We currently have three primary offerings, but two that are heavily engineering-focused. We have a media platform, which is sports news, sports articles, statistics, stuff like that. That one is, I think, the number one sports media app in Canada, and number two behind ESPN in the States, if I'm correct and current. But about three years ago, right after I joined theScore, we launched a sports betting app that partners with the media app. And that's where my focus has been. I've worked on three separate teams at theScore in my time, all on different components of that betting app.

The one I work on right now, as I mentioned, is our promotions team. Within the sports betting landscape there's a lot of competition, and one of the ways we try to get a competitive advantage is with promotions, which all of the various players in that game are running. So we're trying to build a very robust promotion service that allows our marketing team the flexibility to stand up whatever types of promotional campaigns they want to acquire customers. Retention and engagement are huge for us too. So that's what we're doing on our team.

In terms of how that relates to Ecto queries and querying JSONB columns: whenever patrons are in our betting app and they kick off some event, like they're placing a bet, or they're in our casino app playing through a hand of blackjack, or even just depositing funds, all of these events could potentially lead to them getting rewarded for an ongoing promotional campaign. So we need to be told about those events by the services on our platform that are the primary source of truth for each of them. We ingest all that data, and then we need to save those events locally in our database for later reuse.

They all have some similar columns: they'll all have their own primary key, a date that they occurred, a region in which they were placed, one of our operating regions such as New Jersey or Ontario, and of course a user ID. But after that, there's a lot of difference in the attributes that are sent over for these types of events. For example, a deposit will have stuff like the amount that was deposited or the financial provider that was used, whereas a bet will have which game the bet was placed on, the market, and, if you're betting on basketball for example, the league as well as the sport type, et cetera. So the more complicated events like sports bets, the ones with far more robust attributes, plus the differences between the different types of events having different attributes, kind of necessitated us
storing all of that data in a JSONB column for later retrieval and use. But I would say that, at least early on, we didn't expect the types of queries we'd need to run against that JSONB data. So we definitely hit a point where we needed to invest a bit of time into writing proper queries, or at least more efficient queries, for fetching and using that data.
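As a rough illustration of the shape Michael describes (the schema and field names here are invented for this sketch, not theScore's actual code), such a table might be mapped like this:

```elixir
defmodule Promotions.UserEvent do
  use Ecto.Schema

  schema "user_events" do
    # Columns shared by every event type.
    field :user_id, :integer
    field :region, :string
    field :occurred_at, :utc_datetime
    field :event_type, :string

    # Backed by a JSONB column; holds the event-specific attributes,
    # e.g. deposit amount and provider, or a bet's legs and markets.
    field :payload, :map

    timestamps()
  end
end
```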
Sascha_Wolf:
Yeah, that would also have been one of my questions later on: why JSONB, right? And also, maybe let's do a spoiler here: would you build it this way again if you had the knowledge of today?
Michael_Fich:
Um, so yes and no, I would say right
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
now. The answer is yes in a way, because querying data from these events has kind of been our white whale, where we're constantly trying to make it more performant,
Sascha_Wolf:
Mm-hmm. Mm-hmm.
Michael_Fich:
and for a little bit of additional context: with each one of these types of promotional campaigns, we have a set of criteria that we refer to as audience criteria. This is the stuff you'll have to have done in your past history in order to be eligible to participate now. An example might be: you would have had to not bet on the Super Bowl last year to be eligible for a Super Bowl campaign this year, or something like that.
Sascha_Wolf:
Yeah, okay.
Michael_Fich:
So we need to look back into your history in order to see if you're eligible. Now, the idea of trying to put all four of those types of actions into one table worked for us really well initially, because a very large percentage of our overall users, well over 90% for at least the first year we were live, had, let's say, less than 10 deposits and less than a hundred bets total. And the total number of records in our database wasn't that big. We were also only in, I think, four or five operating regions. When we opened sports betting in Canada, as I mentioned off the top, our footprint is a lot bigger there, so we had a lot
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
more patrons that came on board our app right away. But we also opened a casino service, which introduced one of these new types of events that we needed to track. And where a single user might cap out at around 100 bets, other users were doing 100 casino wagers in a single day or something like that. So it grew the size of our database table quite a lot.

So, to answer your original question, would we go this way again? Yes, in some ways, because we allow ourselves to be very dynamic with those queries, and we've gotten them to actually be pretty performant. The no part stems from the fact that I'm a little concerned for the future. Let's say five years down the road, when we're operating in maybe 40 states in the US or 10 provinces in Canada, this table could be a lot bigger. So it's one of the things that kind of keeps me up at night: how do we make this more efficient? As I mentioned, it's been our white whale from the start. We initially would just say, for an individual patron, we're going to fetch all of their events, because it wasn't that significant, maybe a hundred records, and then try to see if they qualify for a whole list of promotional campaigns. Now we've had to refactor that, a significant rewrite, which this blog post was one of the artifacts from, where we had to be a little bit more surgical about what we were querying for
Sascha_Wolf:
Mm.
Michael_Fich:
each individual campaign. And that has pros and cons to it. So I don't know if I answered your question very directly, but yeah.
Sascha_Wolf:
Yeah. No, it was a very black-or-white question, and any kind of senior engineer usually answers those with "it depends", right? So let's see. From what I've
Michael_Fich:
Exactly.
Sascha_Wolf:
heard, it seems to be the classic "okay, how do we scale this" kind of situation, right? Like you make some
Michael_Fich:
Yeah.
Sascha_Wolf:
simple enough decisions in the beginning, and then down the road you realize, oh, okay, we're hitting some bottlenecks, so how do we deal with that now? Yeah,
Michael_Fich:
Yeah, 100%.
Sascha_Wolf:
that makes sense to me.
Michael_Fich:
I mean, yeah, some of the stuff we've been brainstorming is just: how do we avoid having to run this evaluation
Sascha_Wolf:
Hmm.
Michael_Fich:
so many times? Or how do we maybe parallelize it a little bit, using Elixir tasks and stuff like that. So we're always exploring more options to optimize it. It's pretty acceptable as it is right now, but of course it'll grow, and those runtimes will increase with volume.
Sascha_Wolf:
So.
Allen_Wyma:
Yeah, I find your thing very interesting. Well, I figured I'd speak up, you know, give you a break, right? Because I had this problem recently with a project. Some quick background: the guy who helped design the database schema calls everything a collection instead of a table, because he comes from MongoDB. So we always have this object type, which obviously would be a JSONB, a.k.a. a map. And we get some pretty interesting things that come up. I think one of the most interesting is relations, actually storing relations within that thing. And that's the tricky part, right? Because I would like to be able to say, okay, within this I'm embedding something, and then there's this data structure, but I was never able to actually get that to work properly, number one. And number two, I wanted to also make sure that it was unique, and I couldn't find a way to make that work either. So sometimes I'd start off with the JSONB and end up just switching to a table, because it became much easier, since the structure was already typed anyway.
Michael_Fich:
Yeah, absolutely. So with our use case, we aren't the central source of truth for these events; other services are. For example, we have a betting team which is responsible for handling the sportsbook bet events, and we have a payments team which is responsible for the deposit events. I'm not a primary stakeholder for their apps, but I'm pretty sure they're all in more traditional relational databases that are strongly typed. For us, we took this approach with JSONB columns and suffered a little bit of the querying repercussions later on.

One of the things that we learned: with a sportsbook bet, for example, one of the attributes that gets stored in that JSONB column is the legs of the bet. If you're not familiar with the sports betting space, you can have a bet that is either a straight bet or a parlay. A straight bet would be betting on one specific game. A parlay might be: I'm going to bet on four different games, and they all have to win in order for me to win my bet, and you get a much bigger payout, obviously. So when we talk about the legs of a bet, we're talking about each individual game you're betting on. So we had to deal with that list, or array, or collection, within a JSONB field.

One of the other problems we encountered is that we can't just treat the JSONB collection of data as a giant text entity and look for specific values. We had to be a little bit more surgical with our queries. Using the example from earlier, where the audience criteria for a promotion might be that you did not place a bet on the Super Bowl last year: we can't just look for an event that doesn't include a certain value. We had to dive into each individual leg of a specific bet and look to see if that event ID was present in any of them. Hypothetically, that ID might be the ID for some other data entity somewhere else, so I wanted to be very strict about saying this has to specifically be related to one of those event IDs.

We also wanted to use different operators. So it couldn't just be that a value was either there or not there. Some of our fields might be an integer type, where we want to say it's within a range, or it could be a list, where we want to say a value is either in or not in that list. Some of the more standard operators you would use when querying data from a relational database. That's where we started trying to learn which operators we could use with JSONB, and we found a native function within Postgres called jsonb_to_recordset, where we can take that array of JSON objects, however deeply nested it might be within a JSONB column, and basically create a table in memory based on that collection. Once we do that, we're able to write those Postgres queries against it. But trying to translate that from raw SQL into Ecto also proved to have a couple of challenges of its own. So it wasn't a short journey for us, at least.
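To make that concrete, here's roughly the shape such a query can take in raw SQL. The table, column, and leg-field names are illustrative assumptions, not theScore's actual schema; note the AS clause that defines the typed columns of the in-memory record set:

```sql
-- Expand the "legs" array stored in a JSONB payload into a typed record
-- set, then filter on its columns like a regular table.
SELECT e.id
FROM user_events AS e
CROSS JOIN LATERAL jsonb_to_recordset(e.payload -> 'legs')
  AS legs(event_id bigint, league text)
WHERE e.user_id = 42
  AND legs.event_id = 9001;
```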
Sascha_Wolf:
So I'm curious. My personal experience with JSON and JSONB data in Postgres is, I would assume, probably very standard for most people, right? The occasional JSONB column, write some JSON in there, load some JSON out of there, end of story, right?
Michael_Fich:
Yes.
Sascha_Wolf:
So that's about it. Maybe dabble a little bit with, oh, I can query subfields of the JSON, that's kind of neat, right? But that is about it. From what I can see in your blog post, though, you went way beyond that. And you kind of mentioned earlier that this was your white whale, right? So what
Michael_Fich:
Absolutely.
Sascha_Wolf:
were some key insights you had when you were building that? Some of the roadblocks you encountered, some of the things which were maybe surprising, and what did you do about them?
Michael_Fich:
Yeah, absolutely. So even after we discovered that jsonb_to_recordset function within Postgres, trying to use it within Ecto itself presented a few challenges. You know, Ecto is fantastic, obviously; it's one of my favorite things to get to use when I'm writing Elixir. But it obviously doesn't have support for every native function of each database. So we had to find a way to write it coherently, as a reusable component, sorry, I guess the correct term is composable queries, so we could reuse that type of function. That provided a bit of a challenge. We went to the Elixir forums, actually, and it turned out there were a few other developers who had already come up with solutions for that, so they were able to point us in the right direction. And since posting that, we've had a few more people who've asked questions about it as well, where they tried to implement the same solution and ran into errors, because the native SQL syntax is very brittle. So we've tried to post feedback to help them when they post their code samples.

But one of the problems in particular, even after you're aware of how it works: to use jsonb_to_recordset, you have to use the AS keyword to basically define the schema in memory that the data is going to be cast into. And one of the things with Ecto is that each time you're doing a from statement or a join, it automatically appends its own AS keyword. So you might be saying, from jsonb_to_recordset of some entity
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
as whatever name you want to give it. And then Ecto will basically translate that into "FROM recordset AS name" and then AS its own alias. So those two AS clauses in a row definitely caused some strife.

Another issue that we encountered: trying to find resources for jsonb_to_recordset specifically with Ecto, or in the Elixir landscape, wasn't very fruitful. A lot of the jsonb_to_recordset use cases that we found, on Stack Overflow, in blog posts or wherever, just showed it with raw SQL, and a lot of the time it was being used as "select something from a table, then a comma, then jsonb_to_recordset of one of the fields on that table". That's another issue with trying to use it within the Ecto landscape, because you can't have multiple schemas within the same from clause. You can only say from and then some schema; you can't chain multiple. So figuring out how to get around that was another challenge. The way we resolved that one was to use a lateral join, to be able to use the record set being returned from that function with the original base query. And that worked out pretty well, but it took a little bit of research and experimentation to get there.

That's part of the reason I wrote the blog post, actually. A long time ago, as a junior developer, one of the things I came across was the advice that it's very challenging to land your first job in the industry as a programmer, but one of the things you can do to increase your profile is to write blog posts, and your audience is basically you from one month ago. Because no matter how far along you are in your career progression or your understanding of technology, first of all, you're never going to know everything; you're probably not even going to come close, which is especially true for me. I'm not even close, but I definitely know more than I knew one month ago, and there's definitely somebody out there who's encountering the same problem that I had a month ago. So if I can write this blog post and they're able to find it, helping at least one person is hopefully a good deed, a pay-it-forward moment; maybe they're able to help another, and it just creates the kind of community we want to see. But I did a lot of my own research looking for other people who had solved this particular problem, and it is a very niche problem, which I understand. So I wasn't able to find anybody within the Elixir community in particular who had solved that one yet.
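For readers following along, here is a sketch of how that can be expressed in Ecto. One common community workaround for the double-AS problem is to wrap the set-returning function in a parenthesized subquery inside the fragment, so Ecto's generated alias attaches to the subquery instead of clashing with the AS clause; the schema and field names are again assumptions, and this is not necessarily the exact approach from Michael's post:

```elixir
import Ecto.Query

def events_with_leg(user_id, leg_event_id) do
  from e in Promotions.UserEvent,
    # The inner AS belongs to jsonb_to_recordset; Ecto's own alias is
    # then applied to the wrapping subquery, which is valid SQL.
    inner_lateral_join:
      leg in fragment(
        "(SELECT * FROM jsonb_to_recordset(? -> 'legs') AS legs(event_id bigint, league text))",
        e.payload
      ),
    on: true,
    where: e.user_id == ^user_id and leg.event_id == ^leg_event_id,
    select: e.id
end
```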
Allen_Wyma:
That's what I find a lot. I used to make videos, and I'm going to go back into making more, and I find that the videos I think are so simple, those are the most popular ones. Things that I think are simple and not so interesting, people struggle with. I guess we're all good and bad at different things, right? It's amazing, like I have a video on how to put in Tailwind, right? And now you don't have to worry about that anymore, because the new version of Phoenix is going to have that already in it. I mean, it wasn't so simple, but it's actually not that difficult.
Allen_Wyma:
And people really love that video.
Michael_Fich:
I think I watched your video about the Tailwind, actually. Hehehe.
Allen_Wyma:
Yeah, exactly. How to do the whole PETAL stack, it's not too bad. But the information at the beginning, you had to go to this website and copy that, and go to that website and find this. Putting it all together is one thing, but now it's very clear how to do it. But yeah, like I said, the simple ones are more interesting to people than the ones that I think are more interesting myself.
Michael_Fich:
Yeah, absolutely. I mean, honestly, blogging, or, well, I have a YouTube video or two as well. I remember speaking at a small local meetup group here in Toronto about five years ago, I think it was, and I recorded the video of it and threw it up on YouTube afterwards. It was just going through one of the problems I faced within my first year or two as a developer, and I've had people reach out to me. I think the video only has like five or ten thousand views even after all these years, but I've had people who've reached out and said, you know, this was extremely helpful to me; if you're ever looking for work, we're trying to hire people with knowledge of that thing, sort of thing. So even though I haven't pursued those opportunities, it is still nice to know that it was able to help somebody. And sometimes it can even come around to benefit you as well, even if that's not really the primary reason.
Sascha_Wolf:
So you said earlier that you ended up in a situation where you had to consider performance, right? What were some of the key things you did, that you fiddled with? For example, I know that you can put indexes on JSONB columns, that that is a thing. Is this something
Michael_Fich:
Right.
Sascha_Wolf:
you explored or how did you tackle the whole performance angle?
Michael_Fich:
Yeah, good question. So we haven't put indexes, or indices, on JSONB columns just yet. We have one JSONB column that tries to be an embedded schema for multiple types, so we haven't gone that route. We did use indices, though, on some of the more baseline fields in the table. I should preface that by saying that there are some fields, like the user ID, that we're always going to be adding to a query, because we only care about the events that relate to a specific user. So we'll index that. Or we'll often be filtering by a specific region, which is again another top-level field, so we'll index that one as well, or make some combination indexes. That's where we initially went to optimize.

Another one that I alluded to earlier: if we have, say, 20 promotional campaigns running in production at once, right now we evaluate each of them sequentially. We did look at spinning up processes in Elixir to evaluate each of them independently. However, because we receive these events via Kafka and there are a lot of them coming in, especially when a large game ends or something, we run the risk of trying to use too many database connections with that. So that's a problem we're currently looking to solve, to speed up this evaluation. So if any of your listeners have encountered a problem like that, where you have far too many database connections being used in Elixir processes, I would be happy if they could reach out and collab with me on that one. That's one of the more challenging aspects we're currently facing in trying to further optimize it.

One of the other issues that we have is more of an organizational issue: trying to balance refactoring this particular aspect of our service, which those of us who are on production support really want, to optimize it and increase throughput, versus building new features, offering new rewards to users who complete these promotions and stuff like that. It works pretty well, but we know that down the line it's going to be a bit of a challenge, so handling the logistics of that is another challenge in and of itself.
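A hedged sketch of the kind of migration this implies, with table and column names carried over from the earlier illustration rather than from theScore's code; the commented-out GIN index is the usual next tool if the JSONB payload itself ever needs indexing:

```elixir
defmodule Promotions.Repo.Migrations.IndexUserEvents do
  use Ecto.Migration

  def change do
    # Composite b-tree index on the top-level columns that appear in
    # nearly every lookup: "events for this user in this region".
    create index(:user_events, [:user_id, :region])

    # A GIN index would support containment queries into the payload:
    # create index(:user_events, [:payload], using: "GIN")
  end
end
```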
Sascha_Wolf:
Did it make sense?
Michael_Fich:
I'd say it's, yeah, it's one of those, you know.
Allen_Wyma:
I was just thinking about your database problem. I'm curious, have you ever tried using a transaction? Would that kind of solve it? Because I thought that's all using one connection. I don't know, just thinking out loud.
Michael_Fich:
So a transaction for, like, all of the processes? They'd all be encompassed within it?
Allen_Wyma:
I don't know, yeah, if you had some, or did like a large query at the beginning, I don't know. I'd need to know more about your problem, because it sounds like an interesting problem to solve.
Sascha_Wolf:
So from my understanding, it seems that if too many events come in, you might spawn too many processes, and then you kind of exhaust the pool of connections.
Michael_Fich:
Exactly. Yeah. We can throttle the number of events that are coming in; let's say we're throttling at 50 events per second or something
Sascha_Wolf:
Mm.
Michael_Fich:
like that. But we don't have an upper limit on the number of promotional campaigns our marketing team can set up through our service.
Sascha_Wolf:
Hmm
Michael_Fich:
So let's say they're running 25 campaigns. For each one of those 50 events, we would have to spin up 25 processes to see if the event qualified for any of those campaigns. So, 50 times 25 per second, that would be a lot of database connections being used. But if it was possible to use them all in the same transaction, that would be interesting. Off the top of my head, the only concern I would have is: if for any reason one of those processes failed, would it roll back the whole transaction?
Sascha_Wolf:
Probably.
Michael_Fich:
Yeah.
Allen_Wyma:
Yeah.
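As a rough illustration of one way to bound that fan-out (a sketch with hypothetical names, not what theScore actually does): Task.async_stream caps the number of simultaneous evaluations, so this code path can never demand more database connections than you allow, regardless of how many campaigns exist.

```elixir
# Evaluate all campaigns for one event concurrently, but never with more
# than max_concurrency evaluations in flight, so the Ecto connection
# pool (pool_size) can't be exhausted by this fan-out.
def evaluate_campaigns(event, campaigns) do
  campaigns
  |> Task.async_stream(
    fn campaign -> Promotions.Evaluator.evaluate(event, campaign) end,
    max_concurrency: 10,  # keep comfortably below the Repo pool_size
    timeout: :timer.seconds(15)
  )
  |> Enum.to_list()
end
```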
Sascha_Wolf:
I was mostly, I mean, now we're getting into hearsay land, right? By definition I obviously don't have the full picture. But from all of what you told me, I'm left wondering: is this maybe something Broadway could help solve? If you go about it a different way, right, where you
Allen_Wyma:
Yeah.
Sascha_Wolf:
set up your consumers in Broadway, your producer-consumers, in such a fashion that they support partitioning, and maybe you can
Allen_Wyma:
Yeah.
Sascha_Wolf:
partition by... how did you say, promotional campaigns, I guess.
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
Maybe that would be an interesting angle, but I don't know, it's your code base, right? But that was where my brain was going: hey, this sounds like something Broadway could be good at.
Michael_Fich:
Yeah, no, yeah.
Michael_Fich:
Yeah, actually we are using Broadway for ingesting those events via Kafka, but I think what you're referring to is more like producing them into a Kafka topic and each... Yeah.
Sascha_Wolf:
Now, I mean, you have the partitioning capabilities of Broadway. I'm not sure
Michael_Fich:
Yes.
Sascha_Wolf:
if, I mean, I don't know the details of your particular campaigns, if you can even partition events by particular campaigns, or if all events are relevant for all campaigns, that kind of thing, but that
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
could at least allow you to divert the data flow, so for a given campaign only certain events are applicable. And then you also don't run into a situation where you exhaust the database connections, because you can just say, okay, we have that many campaigns, each of those producer-consumers pulls in 50 events, whatever, per batch, and then they have a database connection with which they can run.

But actually, I'm interested in something else, because from everything you've been telling us, I get the impression that the bottleneck at the moment isn't even Postgres. It's more: okay, you get the different events, and you want to handle them for different users, and how can you do that without doing everything sequentially? So, is that right? Has the whole Postgres JSON thing just been chugging along happily?
Michael_Fich:
So, yeah. Since we did this migration from
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
what we had as a V1 model, of just fetching all of the events for a user, that actually, when we had fewer events per user, scaled very well in a way, because we could have as many promotional campaigns as we wanted; we were just evaluating them all in memory once we had fetched the events. Our V2 model, where we're doing these dynamic queries per campaign, that one is potentially bottlenecked if we were to run a very large number of promotional campaigns. So we do try to do some stuff to filter that a little bit. One of the members of my team right now is working on a way to do some caching, where we won't necessarily have to... we basically determine a delta between your last action and your current
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
one, to see if it could potentially even allow you to qualify for, let's say, seven out of the ten running promotions or something like that. And if not, then we don't even try to run that evaluation. But yeah, the individual queries we're running right now aren't actually a bottleneck. It's more: how would we want to set up evaluating multiple campaigns, if there are a large number of campaigns and a large number of users, who will all hopefully be using our app in the next couple of years.
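For readers curious about the Broadway idea Sascha floated above, here is a rough sketch. All module names, topics, broker settings, and the choice of partition key are assumptions for illustration, not anything from theScore's codebase:

```elixir
defmodule Promotions.EventPipeline do
  use Broadway

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producer: [
        module:
          {BroadwayKafka.Producer,
           hosts: [localhost: 9092],
           group_id: "promotions",
           topics: ["user_events"]}
      ],
      processors: [
        default: [
          concurrency: 10,
          # Route each message to a processor by its Kafka key (assumed
          # here to be the user ID), so one user's events stay ordered
          # and the work spreads across a bounded set of processors.
          partition_by: fn message -> :erlang.phash2(message.metadata.key) end
        ]
      ]
    )
  end

  @impl true
  def handle_message(_processor, message, _context) do
    # Campaign evaluation for the decoded event would happen here.
    message
  end
end
```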
Sascha_Wolf:
Yeah, makes sense. What kind of scale are we talking about here? I mean, you said 25 campaigns earlier, right? That kind of situation. But then how many events
Michael_Fich:
Yeah.
Sascha_Wolf:
are you processing roughly per second, for example? Just ballpark numbers.
Michael_Fich:
Um, so yeah. For example, the NFL season here just started. On Sundays, the way the NFL sets up their schedules, the games start at one, four, and seven o'clock Eastern time, and then they end at four, seven, and ten o'clock Eastern time. So at four, seven, and ten o'clock, we receive a large number of updates to individual bets. We'll have the original bets when they're placed, but then we receive an updated version of each of those bets once they've settled or there's some progression in the bet. So the volume at those peak times can be anywhere from, let's say, 20,000 messages that are all instantly produced from another service, which we need
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
to consume, anywhere upward of 100,000 at the moment.
Sascha_Wolf:
Yeah, okay. So I mean, it is non-trivial load, let's just say that. It's not crazy scale, but it's also nothing to scoff at.
Michael_Fich:
Yes, exactly. And right now, just for context: I mentioned off the top that our company had been purchased by Penn Entertainment about a year ago, I think. Penn also has a second sportsbook app that they operate in the US, and we'll be trying to use the same tech infrastructure for both apps down the road. Our app is only operating in Canada since
Sascha_Wolf:
Mm-hmm.
Michael_Fich:
that acquisition, so at the moment we only have to worry about patrons in one specific province in Canada that we're running in. However, the other app that will be onboarding, Barstool Sportsbook, they're operating in something like 15 to 20 states in the US already, and I'm sure by the time we get it all on the same platform it'll be several more. And hopefully here in Canada we open up in more provinces. So the volume right now, at peak event blasts of 20 to 100K, is one province in one country. That's going to grow a lot as well.
Sascha_Wolf:
Yeah, fair enough.
Michael_Fich:
We know the scale is going to be not just hypothetical; it'll be a rapid increase, which we're going to have to run load testing for, and experiment a little bit with some of the ways we can optimize this down the road.
Sascha_Wolf:
But I still think it's interesting to hear that. It's a story I feel I'm hearing time and again: when it comes to scale, rarely does something like Postgres become the bottleneck. I mean, look, you said, okay, if we get to crazy scale, then maybe, yeah, we'll have to look into that. But I mean,
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
all of what we're talking about has been: okay, how do we use these queries in Postgres efficiently, right? How do we distribute work over the different database connections,
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
super interesting for me to see, because I do remember in the past people were like, what if you want to swap out your whole database, right? All that kind of shebang people were talking about. Or: what
Michael_Fich:
Yeah.
Sascha_Wolf:
if it no longer scales? But in my experience, Postgres has always been there to save the day. It was never the thing that was the bottleneck. If it was, then, I mean, I had one scenario where we had one query happening in one of our apps a few years back which was super slow, and it was also super weird: some of the queries to the same table were super fast and others
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
were super slow. And it turned out it was the query planner: because we didn't set up the index quite right, it sometimes did a full table scan instead of using the index. But as soon as we figured that out, and figured out, okay, this is how we have to set up the index, it was again no problem, right?
Michael_Fich:
Yeah, it's an incredibly performant technology. The only issues that we've had have basically come from our own errors, right? Like where, again, we were fetching all of those user actions. As I mentioned, our original model was that most of our users, 95%, would fall into that hundred-or-fewer actions bucket.
Sascha_Wolf:
Yeah, yeah.
Michael_Fich:
That was most of the users. But we did have some users who had a betting history with tens of thousands of records,
Sascha_Wolf:
Okay
Michael_Fich:
right? So trying to fetch all of those was definitely a user error on our part, not a Postgres problem. This new approach is nice, though. I mentioned in the article some contrived examples, where our audience criteria might say that the user hadn't placed a bet on the Super Bowl, plus the user signed up within this date range, plus the user made at least five deposits over this amount, or something like that. So these are multiple rules, and each one of them has its own query. But at the end of the day, for n rules, each one of our queries has a limit of n, just to make sure each rule is satisfied by a different, mutually exclusive action or event. So when you're trying to fetch just one, three, five particular records out of this table, and you're being surgical about it, Postgres queries are lightning fast. At that stage it isn't even close to a bottleneck.
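A sketch of what such a per-rule, limited lookup might look like in Ecto. The schema, field names, and rule shape are hypothetical; the point is that the query only needs to know whether enough matching events exist, so it stays narrow and limited instead of loading the user's whole history:

```elixir
import Ecto.Query

# Returns true if at least `n` events of the given type exist for the user.
def rule_satisfied?(user_id, %{event_type: type, min_count: n}) do
  matching_ids =
    Promotions.UserEvent
    |> where([e], e.user_id == ^user_id and e.event_type == ^type)
    |> select([e], e.id)
    |> limit(^n)
    |> Promotions.Repo.all()

  length(matching_ids) >= n
end
```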
Allen_Wyma:
Oh yeah, I've been moving a lot of logic over to Postgres, and it's made life much, much easier. So much happier about that.
Michael_Fich:
Yeah,
Sascha_Wolf:
Yeah.
Michael_Fich:
absolutely. We have several internal use cases where we'll set up Postgres triggers and stuff like that, which is a different topic entirely, but leveraging that functionality from Postgres just opens up a lot of doors for you as well, when
Sascha_Wolf:
Yeah.
Michael_Fich:
you don't have to try to worry about it at the application level.
Sascha_Wolf:
It's also, I mean, if you ever do get to a performance issue with Postgres, there are still tools to reach for, right? There are materialized views, which are insanely powerful. I mean,
Michael_Fich:
Yes.
Sascha_Wolf:
a bit of work to set up. But if you have a proper setup, that can push you really, really, really far. And
Sascha_Wolf:
I mean, the only...
Michael_Fich:
Yeah, that's actually something we've looked at too.
Sascha_Wolf:
Yeah, materialized views are still in my toolbox, you know? It's like, okay, I've used them in the past, and it was always, I mean, I wouldn't say a pleasure to use, but they always did the job, you know?
Michael_Fich:
Yes.
Sascha_Wolf:
So, the only case where Postgres actually became a performance issue for me was in a prototyping thing back at my previous job, where it turned out, hey, we kind of have graph data, and we tried to put graph data into Postgres and then query it recursively. That does not work well. But then we
Michael_Fich:
Yes.
Sascha_Wolf:
ended up using a graph database. So yeah, wrong tool for the job. But usually, with tabular data, relational data, unless you end up at crazy scale, Postgres is not the bottleneck you have to deal with. That's definitely my experience. Okay, yeah, cool. I am left wondering, from all you told us, and this is kind of, I don't know, I wouldn't say a pet peeve, but a bit of a recurring theme on this podcast: have you at any point considered doing something like event sourcing? Because to a certain degree, you kind of already are, right?
Michael_Fich:
Yeah, we actually did look into that a little bit. The first version of our app, before we went to production, was actually leveraging Commanded. So we were ingesting these events...
Sascha_Wolf:
Ah, okay.
Allen_Wyma:
It's Sascha's favorite.
Sascha_Wolf:
I wouldn't say favorite, but...
Michael_Fich:
You know what, I absolutely loved it. I'm not going to lie. I thought it was an incredibly elegant framework. But one of the drawbacks with Commanded is that it doesn't nicely let you do read operations from the database during the pipeline, or at least based on my knowledge. And again, I've only used it for a couple of months, so I might have this very much wrong. But you basically had to do read operations before dispatching commands to your
Sascha_Wolf:
Yeah, that's true.
Michael_Fich:
pipeline. And a lot of times, for our particular application, we didn't know what data we needed to fetch until we were further down the pipeline. So about two to three months into the project, we did a team retrospective on what we thought of Commanded and how we thought it fit our particular use case. We gave the team a heads-up of about a week: this is the meeting we're going to have, please all come with some notes on what you thought, and we'll compile them. Come up with those notes beforehand, so that in the meeting you're not being influenced by the person who spoke before you. And it was unanimous on the team that it didn't fit our use case. Some people liked the framework, some people didn't. Like I said, I loved it. I would love to someday get to work on a service that uses it, with the best possible use case for it. But unfortunately, it didn't fit ours. There is another team at theScore that is using it, and they just went to production with a rather large-scale application. So there definitely are use cases where it fits.
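For context, the read-before-dispatch shape Michael describes might look roughly like this with Commanded. All module, command, and function names here are made up for the sketch; Betting.App is assumed to be a module that does `use Commanded.Application`:

```elixir
defmodule Betting.BetHandler do
  # Read side first: gather whatever the command will need up front,
  # because inside the aggregate there is no good place to go back
  # to the database for more data.
  def place_bet(user_id, attrs) do
    region = Betting.Accounts.region_for(user_id)  # assumed read-model query

    command = %Betting.Commands.PlaceBet{
      user_id: user_id,
      region: region,
      legs: attrs.legs
    }

    # Then dispatch through the Commanded application.
    Betting.App.dispatch(command)
  end
end
```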
Sascha_Wolf:
Yeah, I get it. I also used to work on an application which was using Commanded, and it's great when your use case fits into the opinionated area Commanded occupies. And I
Michael_Fich:
Absolutely.
Sascha_Wolf:
mean, that is Commanded. Commanded is very opinionated. And I have no problem with technology being opinionated, because I myself am an author of some open source libraries which are super opinionated.
Michael_Fich:
Hehehehe.
Sascha_Wolf:
But sometimes I do wish that Commanded were composed of smaller pieces. I mean, you have the Event Store library, but that is a lot more low-level than what Commanded
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
offers. Where you could maybe do a bit more of a pick and choose, where you say, okay, for my particular use case, maybe I just want to emit events and consume events, so please give me that part of Commanded,
Michael_Fich:
Hehehehe
Sascha_Wolf:
or the other way around: in this case I don't want to do event sourcing, but I like the whole aggregate-command-emitting thing, so maybe give me that part, right? And that has also been my experience: it gets painful, and I think it's the truth of any kind of opinionated software out there, when you try to use it in ways which that particular piece of technology, in this case Commanded, is not meant to be used.
Michael_Fich:
Right, yeah. Along the same lines of that, though: when we were using Commanded, one of the first things I noticed was all of the dependencies we needed to add to use it. There was, as you mentioned, the event store; there's Commanded, the base library; Commanded Ecto projections; and I think a couple more. But yeah, you're right that the base library does kind of have all of those constructs, the base projections, the aggregates, the process managers, all bundled up into that one.
Sascha_Wolf:
Yeah. I mean, I worked on one app where I was like, oh my, if I could take this whole aggregate idea and emit events and do event handlers, but not do commands, because they're completely overkill in this case, that would be amazing.
Michael_Fich:
Yes.
Sascha_Wolf:
But it is not possible, at least for my taste. I mean, like I said, it's a piece of technology, and everybody has the right to write opinionated libraries. I just sometimes yearn for something which takes a space in between, you know? Between the more low-level Event Store, and something more full-blown and opinionated like Commanded. Something on a level in between, where you can pick and choose. But we aren't there; it doesn't exist.
Michael_Fich:
Hehehehe
Sascha_Wolf:
And that is the downside of something like the Elixir community: it's still somewhat niche. I mean, earlier you said you didn't find any blog posts about the particular combination of jsonb_to_recordset with Elixir. Yeah, well, it's still somewhat niche. That's part of what you get into in using this technology.
Michael_Fich:
Absolutely. I mean, kind of.
Sascha_Wolf:
Sometimes you literally are the first person to do this in this piece of technology.
Michael_Fich:
Which, you know, has pros and cons obviously, right? Like
Sascha_Wolf:
Yeah, yeah, yeah, yeah.
Michael_Fich:
I've never... this was my first time writing a blog post about a problem like this that I had solved, where I thought, okay, this might actually be new content, where nobody
Sascha_Wolf:
Yep. Yep.
Michael_Fich:
had, well, not just solved that, I'm sure people have solved that problem, but nobody had written about it, at least not that I could find, right? Whereas, for example, when I was working primarily in the Ruby world, or when I used to do front-end stuff with Ember or React, all those problems had already been solved. So in a way, being in a technology that still allows for a little bit of pioneer work is kind of exciting.
Sascha_Wolf:
I agree, but sometimes it can also be annoying.
Michael_Fich:
100%, yes.
Sascha_Wolf:
Hahaha. So, Allen, any pioneer stories from your side? I'm actually curious now, since we kind of ended up here.
Allen_Wyma:
You mean with JSONB, right? In particular?
Sascha_Wolf:
Not even JSONB. I mean any kind of situation where you ended up thinking, hey, it seems I'm the first person trying to solve this, or at least the first person to solve it that nobody else has spoken about. Because this is definitely something I've experienced with Elixir being more niche.
Allen_Wyma:
Yeah. I think there is one thing, where I piggybacked on another guy's idea: using Postgres PubSub for broadcasting. We had a chat system I created, and the guy wanted a way that, when we delete a message, we broadcast that. Okay, simple, right? But the problem is that you can reply to other messages, so we had a relation, and if you delete a message it could potentially cause a big chain. So I ended up creating a function that would basically broadcast with PubSub, using Postgres, for every single record that was deleted. It wasn't a brand new thing, but it was definitely an interesting idea; it got me onto a podcast to talk about it.
Sascha_Wolf:
Yeah, cool.
Allen_Wyma:
But yeah, nobody really talked about this exact situation. Using PubSub from Postgres in Elixir, though, people did talk about.
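A minimal sketch of the pattern Allen describes, assuming a database trigger (not shown) that calls pg_notify('deleted_messages', ...) for each deleted row; Postgrex can subscribe to that channel directly. Module and channel names are hypothetical:

```elixir
defmodule Chat.DeletedMessageListener do
  use GenServer

  def start_link(repo_config) do
    GenServer.start_link(__MODULE__, repo_config)
  end

  def init(repo_config) do
    # Open a dedicated notification connection and LISTEN on the channel
    # the trigger NOTIFYs on.
    {:ok, pid} = Postgrex.Notifications.start_link(repo_config)
    {:ok, _ref} = Postgrex.Notifications.listen(pid, "deleted_messages")
    {:ok, pid}
  end

  # Each NOTIFY from the trigger arrives as a message like this.
  def handle_info({:notification, _pid, _ref, "deleted_messages", payload}, state) do
    # Broadcast the deletion to connected clients here.
    IO.inspect(payload, label: "deleted message")
    {:noreply, state}
  end
end
```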
Sascha_Wolf:
Yeah, it's also something I want to explore a bit more, because I feel there's a whole bunch of solutions already out there in the Elixir community revolving around Postgres. Oban, for example, is a great one, right? But PubSub so far, at least as far as I know, is still, if you ask me, criminally underutilized. We were actually in a similar situation at my current job, where we wanted a simple event emitting and consumption solution. There were a few guarantees we wanted around that, and one of them was: if I emit an event, I want a guarantee that it is really persisted now, right? That I don't lose it after that point. So when,
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
for example, I get a request, and I do some work, and then I say, okay, let's emit an event, and the request returns 200 or whatever, I want a guarantee that this event ends up being processed at some point. And surprisingly, there are some event bus solutions in Elixir, but that particular guarantee, none of them gave. So
Michael_Fich:
Hehehehehehe
Sascha_Wolf:
we ended up kind of building our own on top of, in this case, Google Pub/Sub. We were also considering, hey, we already use Postgres, and Postgres has pub/sub built in, why don't we use that? But the overhead of doing it that way was like, it's not worth it, you know, not worth
Michael_Fich:
Mm-hmm.
Sascha_Wolf:
it. And there is nothing out there, at least not that I know of, which has production capabilities. There are some libraries I did find, which were last updated four years ago with three commits, that kind of situation, right? But there is nothing out there where you can say, hey, Postgres has this PubSub thing, it's probably good enough for getting started, right? So why don't we build an event bus around that? Then
Michael_Fich:
Yeah.
Sascha_Wolf:
again, circling back: Elixir, niche, that sometimes happens.
Michael_Fich:
Honestly, yeah, kind of tying the last two topics together: one of the things I was thinking about when you were asking the question about pioneering, or putting out content that hasn't yet been covered in the community, at least not extensively. That's actually one of the things that came up a lot in our team's retro when we were evaluating Commanded for our use case. We found that the author of the library is extremely responsive and helpful, and
Sascha_Wolf:
Yeah.
Michael_Fich:
has put out some really good content. But one of the problems that we had was, you know, what happens if he becomes less responsive in the future? He's kind of the primary driver of that framework, which is very unfortunate, right? It's an extremely cool technology, but we couldn't find a whole lot of other people who had written on the topic, you know, "I've encountered problem X while using Commanded, and here's how I solved it" kind of thing, right? So if we had very niche problems within Commanded, every niche problem seemed like it hadn't yet been written about. So the community support around it wasn't what we would have loved to see just yet. Although, again, this was a couple of years ago, very early into the framework, et cetera. We just couldn't really take that risk at the time, I would say.
Sascha_Wolf:
Yeah, niche.
Allen_Wyma:
Have you guys ever played around with policies in Postgres before?
Sascha_Wolf:
Never.
Michael_Fich:
I never have, no.
Allen_Wyma:
You know what I'm talking about, right?
Sascha_Wolf:
Yeah, like, I mean, it's permissions basically and everything, right? Like that kind
Allen_Wyma:
Yeah,
Sascha_Wolf:
of like
Allen_Wyma:
so
Sascha_Wolf:
also
Allen_Wyma:
supposedly
Sascha_Wolf:
like
Allen_Wyma:
you can
Sascha_Wolf:
where you can do column level permissions also, that stuff or
Allen_Wyma:
For
Sascha_Wolf:
what
Allen_Wyma:
sure
Sascha_Wolf:
exactly
Allen_Wyma:
you can
Sascha_Wolf:
are
Allen_Wyma:
do,
Sascha_Wolf:
you referring
Allen_Wyma:
there's,
Sascha_Wolf:
to?
Allen_Wyma:
yeah, SuperBase using this thing called role level security RLS and it's like a way where you can,
Sascha_Wolf:
Yeah, Robo
Allen_Wyma:
within a
Sascha_Wolf:
is
Allen_Wyma:
database,
Sascha_Wolf:
not calling,
Allen_Wyma:
you can
Sascha_Wolf:
sorry.
Allen_Wyma:
start plugging in like if a user is whatever, like I'm really curious to check it out because that I think may be better than writing my own code, right? If I can get a policy that's actually correct, that'd be really nice to have. I was just curious if anybody ever tried it out yet. It's like, what is it? It's like Postgres has got too many things in it. And we're
Sascha_Wolf:
Hahaha.
Allen_Wyma:
barely even scratching the surface of it. And I feel a little bit sad.
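To make Allen's description a bit more concrete, here is a rough sketch of what enabling row-level security could look like from an Ecto migration; the orders table, the policy name, and the app.current_user_id session setting are all invented for illustration.

```elixir
# Hedged sketch of Postgres row-level security from an Ecto migration.
# Table, policy name, and the session setting are illustrative only.
defmodule MyApp.Repo.Migrations.EnableOrdersRls do
  use Ecto.Migration

  def up do
    execute "ALTER TABLE orders ENABLE ROW LEVEL SECURITY"

    # Rows are only visible when user_id matches a per-connection setting,
    # which the application sets with: SET app.current_user_id = '42'
    execute """
    CREATE POLICY orders_owner ON orders
      USING (user_id = current_setting('app.current_user_id')::bigint)
    """
  end

  def down do
    execute "DROP POLICY orders_owner ON orders"
    execute "ALTER TABLE orders DISABLE ROW LEVEL SECURITY"
  end
end
```

One caveat worth knowing: by default, policies are skipped for the table's owner and for superusers, so an app that connects as the owning role would also need ALTER TABLE orders FORCE ROW LEVEL SECURITY for the policy to bite.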
Sascha_Wolf:
That is why I'm super interested in stories like yours, Michael, right? Where you actually dig into some of these details. And as I said earlier, I would love to get a lightweight event bus implementation on top of Postgres, one that just says: fuck it, it uses Postgres, and if that's not good enough, use something else, right? But it's there, it works, and you probably already have Postgres. Why not use it, right?
Michael_Fich:
Yep, a hundred percent. Yeah. No, I feel the exact same way, Allen. There's so much functionality there, and I almost feel a little ridiculous every now and then when I'm explaining a problem or asking for help, and somebody's like, oh, there's this thing in Postgres, why didn't you just use that? And it's like, you know, I didn't even know about that. The library of stuff you can use with it is just huge.
Allen_Wyma:
And they're just adding on more stuff and taking out things that nobody really ever uses. Or, I don't know if they actually ever take anything out. Like, they still have that JSON column type, and it's only there for legacy.
Sascha_Wolf:
Okay.
Allen_Wyma:
They always say use JSONB, right? So they seem to just be adding stuff but never taking anything out, and for some reason it keeps getting better and better. I think 14 also has a lot of really good improvements compared to 13.
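Since this touches the episode's main topic, here is a small hedged sketch of querying nested JSONB through Ecto fragments; the events table, the metadata column, and the nested keys are assumptions for illustration, not anything from The Score's codebase.

```elixir
# Sketch: filtering and selecting on nested JSONB with Ecto fragments.
# "events", "metadata", and the nested keys are hypothetical.
import Ecto.Query

query =
  from e in "events",
    # -> and ->> traverse JSONB; ->> returns the value as text.
    where: fragment("? -> 'payload' ->> 'status' = ?", e.metadata, "active"),
    # #>> extracts along a path, also returning text.
    select: fragment("? #>> '{payload,user,id}'", e.metadata)

MyApp.Repo.all(query)
```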
Sascha_Wolf:
Yeah, I feel like Postgres... I mean, people usually say there are technologies people complain about and technologies nobody uses, right? I probably botched it, but something like that. And I feel Postgres is doing that miracle thing where it's used a lot and most people actually like it. I don't know how they're doing it, but that is my experience with it.
Michael_Fich:
The only thing that I would add is, I think Postgres is very well liked, or loved even, but I feel like it might be a little criminally undersold because of how underutilized the functionality is.
Sascha_Wolf:
Yeah,
Michael_Fich:
Right.
Sascha_Wolf:
yeah, true. Definitely.
Allen_Wyma:
There are some annoying things within Postgres, but there are ways to get around stuff. Like, for instance, say I want to sort records by a column of strings, and the values are floor numbers. So you've got floor one, two, three, you know, 20, 30, 90, whatever. The problem is that, because of the way strings compare by default, things are not going to sort correctly. If you store "01", it sorts correctly, but if you just store a "1",
Sascha_Wolf:
Uh...
Allen_Wyma:
it's actually going to come after 11 or something. So
Michael_Fich:
Oh yeah.
Allen_Wyma:
there's actually a way to solve that, that I found out about. It's called a collation. If you create a special collation, you can do it. That's something I'm going to have to do for a project, but I just haven't gotten to it. Basically, we know the issue is there. We're going to release soon to some public testers, and we still haven't solved it yet, but we know how to solve it. And again, this is one of those things where, if it were another database, I don't know if there would be a way to solve it, but Postgres has this in collations, which actually helps solve the problem.
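A sketch of the collation trick Allen is referring to, assuming a Postgres built with ICU support (version 10 or later); the collation name is made up. The ICU keyword kn-true switches on numeric ordering, so "2" sorts before "11".

```elixir
# Hedged sketch: creating an ICU collation with natural numeric ordering.
# The name "numeric_sort" is illustrative; requires ICU-enabled Postgres.
defmodule MyApp.Repo.Migrations.AddNaturalSortCollation do
  use Ecto.Migration

  def up do
    execute """
    CREATE COLLATION numeric_sort (provider = icu, locale = 'en-u-kn-true')
    """
  end

  def down do
    execute "DROP COLLATION numeric_sort"
  end
end
```

With that in place, something like ORDER BY floor COLLATE numeric_sort (or the equivalent Ecto fragment) returns 1, 2, 11 instead of 1, 11, 2.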
Michael_Fich:
Yeah, if I'm not mistaken, and I have not actually delved into this, I think there are various plugins you can just add into Postgres if you happen to hit one of those niche use cases, right? I might be completely off base, because as I said, I haven't used it, but I remember looking up something around user emails and trying to ensure uniqueness, and I think there were plugins to, you know, prevent
Allen_Wyma:
You talking about citext? Case insensitive. Yeah.
Michael_Fich:
Yeah, yeah. The other thing I was looking for was something around wildcards in emails, to prevent, you know, maybe the same user from signing up with a bunch of different wildcard variations, stuff like that. I think that might be available as well. But again, I could be very wrong, that's a very outdated memory. I just like that the functionality to add in more exists within that ecosystem as well.
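The extension Allen names, citext, covers exactly the email-uniqueness case Michael mentions; below is a hedged migration sketch, where the users table and email column are assumptions for illustration.

```elixir
# Hedged sketch: case-insensitive unique emails via the citext extension.
# The users table and email column are assumed for illustration.
defmodule MyApp.Repo.Migrations.CitextEmails do
  use Ecto.Migration

  def up do
    execute "CREATE EXTENSION IF NOT EXISTS citext"

    alter table(:users) do
      # citext compares case-insensitively, so the unique index below
      # treats Foo@example.com and foo@example.com as duplicates.
      modify :email, :citext
    end

    create unique_index(:users, [:email])
  end

  def down do
    drop unique_index(:users, [:email])

    alter table(:users) do
      modify :email, :string
    end
  end
end
```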
Allen_Wyma:
And you'll also find that you can extend it with things like PostGIS, so you've got a GIS database, and
Sascha_Wolf:
Yeah, yeah.
Allen_Wyma:
also a time series database with TimescaleDB. It's just crazy what you can do.
Sascha_Wolf:
Yeah, definitely. I've actually used the GIS part of things already, and it worked like a charm, it did the job. I mean, we were not doing anything super crazy, but for what we were trying to do, it was just pleasant to use.
Allen_Wyma:
Even super crazy stuff, I think it could probably handle, to be honest. PostGIS is really amazing.
Sascha_Wolf:
Yeah. Yeah, probably, probably.
Allen_Wyma:
In the one book I was going through, the first chapter was like, okay, this query is going to help you find all the McDonald's along a five-mile stretch of road. I'm like, what?
Sascha_Wolf:
Hahaha!
Allen_Wyma:
And the query was very small. I was like, all right, this is pretty powerful stuff.
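For a flavor of how compact such a query can be, here is a hedged sketch using ST_DWithin through Ecto; the restaurants and roads tables, the geom columns, and the road name are all invented rather than taken from the book Allen mentions.

```elixir
# Hedged sketch of a PostGIS proximity query: restaurants within five
# miles of a road. All table, column, and road names are illustrative.
sql = """
SELECT r.name
FROM restaurants r
JOIN roads rd ON rd.name = 'Route 1'
WHERE ST_DWithin(r.geom::geography, rd.geom::geography, 5 * 1609.34)
"""

# With geography casts, ST_DWithin takes its distance in meters,
# hence 5 miles times 1609.34 meters per mile.
MyApp.Repo.query!(sql)
```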
Michael_Fich:
That's awesome.
Sascha_Wolf:
Okay folks, we are kind of diverging, so...
Michael_Fich:
A little bit.
Sascha_Wolf:
We are, just a bit. Okay, I think we can slowly come to an end. So Michael, if people want to reach out and get in touch with you, how can they best do that?
Michael_Fich:
Yeah, so they can reach me on LinkedIn, I would say that is the best way. I'm also reachable on Twitter at michael underscore fich; the last name is spelled F-I-C-H, so not exactly as it sounds. But yeah, LinkedIn is definitely the best way to reach me.
Sascha_Wolf:
Great. Then I would say, let's move to picks. And since I'm moderating, Allen, why don't you start?
Allen_Wyma:
You should know what it's going to be. So
Sascha_Wolf:
Yeah, of course.
Allen_Wyma:
it's been a while since we had a Rust pick, and I still don't have a Rust pick today. But, um, I don't know if you guys have seen it yet: they finally started posting at least some of the videos for ElixirConf. Actually, there are not that many up yet, which is a little bit surprising for ElixirConf. But I really wanted to pick out Chris McCord's talk. I don't know if anybody picked that one yet, but what's happening with Phoenix is really awesome. And I actually wanted to tag along with this one. So what I did today was clone Phoenix and run the installer locally. I tried the master branch and it didn't work; all the changes I saw, like the Tailwind stuff, are not on the master branch yet. They're on another branch called, I think, cm-tailwind or something. But actually I cloned Michael Crumm's branch, pulled it down, and started taking it apart, taking a look at what they're doing, because I was kind of curious about how they're using Tailwind and how they set everything up. And it's pretty cool. It's given me a lot of cool ideas for existing projects, about things I want to do. So check out his talk, and also check out the work they're doing before it's released. They have instructions on how you can do it, really copy-paste; I just copy-pasted two lines once I got the project, and got a simple project going using their style. I did, what do you call it, the generator for authentication, because now it works with LiveView, and also the resource one, and just started looking around, poking around. There's a lot of cool stuff in there. Um,
Sascha_Wolf:
Nice.
Allen_Wyma:
Yeah.
Sascha_Wolf:
Yeah, it's definitely super cool to see how these things are evolving. I also feel that Phoenix is really getting to a point where it has its own unique identity, right? It started off as something very much inspired by Rails, and by now Phoenix is really something of its own. I like to see that.
Allen_Wyma:
I like the relationship we have with the Tailwind people. I don't know how close they are, but it seems like a lot of the things they do are really done working together with the Phoenix team. I don't know, maybe it's just the way I see it because I'm in the Phoenix community, but they seem to work a lot with React and with Phoenix, I think, for some reason.
Sascha_Wolf:
Kind of.
Michael_Fich:
I think the Tailwind guys have been pretty big on actually reaching out to several different... I don't know if they reach out or have been reached out to, but they've been working closely, it seems, with a lot of the people responsible for maintaining different frameworks. I remember seeing some stuff in, I think, the Laravel community, as well as working with Ember, I believe it was. There was a prominent developer in the Ember community who was creating a plugin for the Tailwind library, so they were working on that, I believe. But yeah, one of the really nice things about that framework, Tailwind in particular, is just how proactive their whole team is with helping you solve problems, or trying to solve them for you before you even think of them. We use it a lot for internal tooling at The Score, and with their Tailwind UI components and stuff like that, it's been super helpful for just bootstrapping the internal tools we need. Oh, Sascha, I think you're muted.
Sascha_Wolf:
Ah, wait, my microphone. Okay, now. Yeah, technology, how does it work? I don't know. So okay, let me go real quick. I have only one pick this week, and it's kind of funny: last week I said I don't have any picks, and then right afterwards, I basically looked at my microphone again and thought, oh wait, there's this one thing I could have picked. So I'm going to pick it this week. I want to pick some headphones, or rather earbuds, I recently got: the Genki Waveform. None of you has probably heard of it. Genki is a company I've been following for a while now. They're doing audio equipment for gamers, and the very first thing they did was a Bluetooth accessory for the Switch. That was before you could pair Bluetooth headphones with a Switch, which you now can, but you could plug it into the Switch and then have a very low latency Bluetooth connection for your headphones, which was amazing for me back then because I was still commuting daily. Hey, my Switch with my Bluetooth headphones, great. But they're doing a whole lot more now. They have these earbuds, which are kind of like, I think, I don't know what they're called, the Apple ones. I'm not in the Apple ecosystem, sue me. But they are very much designed around being capable of handling multiple audio connections at once. So what you can do with them is connect them, for example, to your PlayStation or your Switch, your gaming console, and also to your phone if you, for example, have Discord running on it, and play both at the same time. They also have a smart case, and if you connect your phone to the smart case, you can also run a voice connection through that without losing quality. Which is usually a problem with any kind of headphones, right? As soon as you have a microphone activated and audio goes both ways, it sounds like shit. And these things don't. So it's been a pleasure using them. Gaming is a big part of my life, and that is definitely something. I actually haven't used this dual stream connection yet, but apart from that, they're just a pleasure to use. They're not super cheap, but if you're looking for something lightweight, easy to carry around, and supporting all kinds of gaming habits, then I can just recommend these, and the company in general. So yeah, the Genki Waveform, there's going to be a link in the show notes. And that's my pick for this week. Michael, do you have any picks for us?
Michael_Fich:
Yeah, I'm gonna go in a little bit of a different direction. Since I work for a company in sports, and sports are a big passion of mine, especially local sports, I'm going to give a shout out to Toronto sports. Right now, for example, basketball and hockey are starting soon, the Toronto Raptors and the Toronto Maple Leafs; I'm a huge fan of both, and really looking forward to what I think will be exciting seasons for both teams. Hopefully I'm not being too naive on that one. And also baseball: the company I work for, The Score, is a sponsor for, or I guess a partner of, the Toronto Blue Jays, and they're about to enter a playoff run after a really exciting season. So I'm very much looking forward to that and to getting to a few games soon. Yeah, a bit of a different direction than you guys went in, so I hope that's all
Michael_Fich:
right.
Sascha_Wolf:
No problem.
Michael_Fich:
Hehehe
Sascha_Wolf:
That's what picks are for. You can pick literally anything. If you want to pick a happy cloud in front of your window, you're free to do so.
Michael_Fich:
Next time, next time.
Sascha_Wolf:
Okay Michael, it was a pleasure having you. Thanks for being on the show.
Michael_Fich:
Yeah, my pleasure. Thank you guys very much. Appreciate it.
Sascha_Wolf:
And I hope you enjoyed the show as well, dear listeners. Tune in next time for another episode of Elixir Mix. Bye-bye.