Unlocking the Power of Functional Programming and Elm with Richard Feldman - RUBY 646

Richard Feldman - author of Elm in Action - joins the Rogues to discuss the advantages of functional programming and using Elm. Elm is a functional programming language built for the front end that compiles to JavaScript. Thanks to the assumptions it enforces, it leads to clean code and powerful programming constructs.

Special Guests: Richard Feldman

Show Notes

Richard Feldman - author of Elm in Action - joins the Rogues to discuss the advantages of functional programming and using Elm. Elm is a functional programming language built for the front end that compiles to JavaScript. Thanks to the assumptions it enforces, it leads to clean code and powerful programming constructs.

Links


Picks

Transcript


Hey, everybody. Welcome back to another episode of the Ruby Rogues podcast. This week on our panel, we have Ayush Newatia. Hello. Hello.

I'm Charles Max Wood from Top End Devs. And this week, we have a special guest: Mohammed Hassan. Mohammed, you built Litestack, which is pretty cool. What else should people know about you?

Well, I've been doing Ruby for a long time now, I guess since 2005 or something. I've been in this industry for a long while. I go by the handle oldmoe for a reason; it's literally old Moe, and I've been doing computers since 1990-something.

So, yeah, I have a lot of work done in Ruby land, especially in concurrency. I released a library long ago, if anyone remembers, called NeverBlock, back when fibers first landed. I got fascinated by this idea of high concurrency and doing high performance stuff in Ruby, and afterwards I also fell in love with embedded databases, SQLite to be specific. Mhmm.

So, basically, I'm marrying the two: high performance Ruby and a high performance embedded database, SQLite in particular. I've been doing that in my own work, in my own startups, and I'm looking forward now to actually delivering something to the community that spreads that value to everyone using both Ruby and SQLite. Cool. So I looked at it, and you can correct me if I'm wrong. I didn't have a chance to try it because my life is absolutely nuts.

You'd think when the kids get out of school your life gets less nuts; it doesn't. So, yeah, I'm looking at it, and I'm sitting here thinking, well, because I remember back in the day SQLite was kind of the not-serious database. You know, you'd use it in development, and then you'd use a real database like PostgreSQL. But with Litestack, it looks like you provide the database, which is kind of obvious, but then you're also pulling in stuff like caching and jobs. Do you wanna just talk about all of this?

Because, yes, you pull in just one gem, and it does all this stuff. Yes. I think one of the biggest issues here was the number of dependencies you need when you start a new Rails app. You need a database, naturally.

Yeah. And then you need some cache server, and then things go a little bit crazy and you make a cluster of those cache servers. And then you need a job processor, and you need a queue for that job processor, so you have a process for Sidekiq or something like that. And then you need full text search.

So you use something like Elastic or OpenSearch, and then you need cable capabilities, so you need a Pub/Sub server, and you usually have Redis for that as well, or Postgres. Mhmm. And you end up with so many dependencies and so many moving parts in your application. And I was thinking, yeah, making SQLite your database is nice.

But limiting it to just the database means it's only suited to smallish applications that only require a database. Actually, SQLite, due to its latency characteristics, is capable of a lot more, and I went out to try and see if I could actually prove that. So, yeah: database, cache, pub/sub, queue, job processor, full text search. Not everything, but almost everything you'd need for your application.

And currently in the repo, but not yet released, is LiteKD, which is basically Redis ported to SQLite. So for your key data you won't need another separate process running somewhere in your infrastructure. Very cool. I have to say, I switched over to, what is it? The caching and queuing that they... Yes.

Solid Cache and Solid Queue that DHH came out with. Sorry, folks, I'm a little slow. I am under the weather big time, but this looks so cool, I just wanted to talk through it.

And so I was super excited to get rid of Redis, right? Because I haven't been using Action Cable a ton. So, yeah, I kinda like the idea. I've also been using Meilisearch, and I've been trying to move that into my Postgres database as well, just using Postgres full text search.

So this appeals to me a lot because, you know, my stack gets a lot simpler. Yes, definitely. I think there are two dimensions here. One of them is simplifying the stack, and one of them is actually realizing the huge potential in having an embedded database.

So one way to simplify the stack is to move everything to Postgres. Postgres is very capable, it's very solid, it's been there since forever, and you can rely on it. But my thesis is that there is a new dimension of performance when you go embedded. A completely different thing.

I think at Brighton Ruby I presented what it takes for a query to run if you go through a client-server stack versus an embedded stack. What you get is that for most reads you're basically doing a memory call, and that's a huge advancement in performance that you'll only appreciate if you see it. Once you see that your Rails application is able to deal with potentially thousands of concurrent connections and respond to them very quickly... you'll have to see it to believe it, basically. Very cool.

Ayush, have you played with this? I haven't actually had a chance to play with it. SQLite and Rails has been on my to-do list for a while, but, well, life gets in the way. Right. I think my next side project is almost definitely gonna be SQLite rather than Postgres.

But, yes, I just wanna get very clear in my head about some of the gotchas with SQLite. Well, let's start with what Litestack actually does in certain areas, like with Litejob and, sorry, with Litecable and LiteKD. You said Litecable handles pub/sub; it's like a drop-in replacement, right? And LiteKD replicates Redis.

So are these building on native SQLite features for pub/sub and a key-value store? No. SQLite is a little blander than that, so it doesn't have something like NOTIFY in Postgres. Mhmm. I had to build this functionality on top of SQLite itself, on top of native SQLite.

By the way, the Solid family of products, like Solid Cache and Solid Queue, do have SQLite support. But the problem is they deliver it through Active Record, which is an order of magnitude more abstracted than going native with SQLite directly. Hence, if you go directly to the metal, SQLite performance is much, much faster. The nice thing about what we have in Litestack is that the SQL code is completely decoupled from the Ruby logic, so you can optimize it independently and work on ensuring every call goes through an index.

You're not walking tables or doing any table scans, things of that sort. The functionality itself is not that hard; it's really straightforward to get a cable implementation up and get it to perform. The real core issue is getting it to perform correctly under concurrent operation, which is tricky because you don't know whether the application will run in threads or on fibers, in multiple processes or not. And this, I think, is the core contribution of Litestack: it abstracts all of that and makes sure any component can be brought up very quickly.

I built the cable component in one hour. Oh, wow. Okay. Because all the machinery for pooling connections, making sure threads don't run into race conditions, and processes don't affect each other was already there. So you only get to implement the higher-level logic.

Of course, one hour with bugs, but then I had it working. And LiteKD was not much harder; it was done in two days, and that was mostly the functionality, a lot of logic on top of the connection.

So I guess that's what Litestack is able to do: bring up those features quickly. And I implement as much of them as possible in SQL land rather than in Ruby land, for performance reasons on one side, and also for portability and separation of concerns. Yeah, I think that's a great approach. I've kind of been blissfully unaware of SQL up to a point because Active Record kinda handles it for me.

But for the client I'm working with at the moment, we've had to do some really advanced stuff, and I have been getting my hands dirty with SQL, and the power it has blew me away a little bit. So, yeah, I completely get your approach there. Just out of personal curiosity, could you talk about how you did pub/sub with just SQL and database features, without having something like NOTIFY at your disposal? Yes. So the idea is, when you do a database call, in the traditional sense it goes through the network to the server and back.

But in the SQLite sense, it's just a method call to a local memory address, especially if you're looking at a page that is definitely in the cache because it's in the hot section of your data. So, basically, it's a polling mechanism, with the polling happening from Ruby. Each thread that is subscribed to the pub/sub constantly polls the database and asks it for data.

And in benchmarking, with writers constantly writing and subscribers constantly polling, I'm able to get around 120,000 messages retired per second. Oh, wow. And that's not on very high-end hardware. So it is capable. I was even able to go faster than some cross-process IPC solutions that I tried at first as an optimization, but it ended up that SQLite was faster still.
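
(For readers following along, here is a minimal sketch of the polling approach being described, using the sqlite3 gem directly. The table, channel, and method names are made up for illustration; this is not Litecable's actual schema or code.)

```ruby
# Minimal polling-based pub/sub sketch; hypothetical schema, not Litecable's.
require "sqlite3"

db = SQLite3::Database.new("pubsub.db")
db.execute("PRAGMA journal_mode = WAL")
db.execute(<<~SQL)
  CREATE TABLE IF NOT EXISTS messages (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    channel TEXT NOT NULL,
    payload TEXT NOT NULL
  )
SQL

# Publishing is a plain insert.
def publish(db, channel, payload)
  db.execute("INSERT INTO messages (channel, payload) VALUES (?, ?)", [channel, payload])
end

# Subscribers poll for anything newer than the last id they saw.
def poll(db, channel, last_id)
  rows = db.execute(
    "SELECT id, payload FROM messages WHERE channel = ? AND id > ? ORDER BY id",
    [channel, last_id]
  )
  rows.each { |id, payload| puts "got: #{payload}"; last_id = id }
  last_id
end

publish(db, "chat", "hello")
last_id = poll(db, "chat", 0) # a real subscriber loops on this with a short sleep
```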

Yeah. I think when you eliminate the network from the equation, the performance speedup you get is just hard to comprehend. And you're also removing so much complexity from the call, right? You're not opening a network connection.

You're not sending anything over a wire. Yeah. You're not pumping data from buffers to the network stack in the OS. There are so many things that are eliminated, and your life becomes a lot easier. In some cases, I think I kind of feel this, you know, because I've used some of the cloud databases.

Right? And so, even if I'm hitting a database on Linode and a server on Linode or DigitalOcean or whatever, you know, pick your poison, I feel like I felt that more than when I have two virtual servers that I've set up on the same network on DigitalOcean and such. Because all of that stuff is, I think, all software, and it's fast. Right?

And... It is fast, but it's not free. No, that's fair. It'll never be free. You don't get that for free.

But yeah. I think there are different levels to this, right? I don't know that I'm completely convinced that it's gonna be that noticeable, depending on what your setup is. Yeah.

You should try it. Depending on circumstances, yeah. Yeah. Definitely.

You should actually try to go through all the benchmarks that I have for Litestack on the GitHub repo, by the way. These are run against Postgres on the same machine. Okay. So you put Postgres on the same machine and run it locally? Not even on a virtual server, right?

Okay. All of these are run against Postgres on the very same machine, the same address space, same everything. Well, that changes my argument a little bit. Yeah. So the point is: how much is the overhead compared to the actual work that you're doing?

Mhmm. And there is a spectrum here. Not all queries will be much faster on SQLite. If you're doing heavy IO, if you hit the disk, then they'll both be a lot slower, and in that case the savings you got by not going through the network stack will be barely noticeable.

They will hardly exist, basically, because reading a single block from disk is much, much more expensive, orders of magnitude more expensive. So if you're talking about queries that are fulfilled from the page cache, the difference is really big. If you have complex queries that require disk access, the difference will be less. But in no situation will SQLite be slower than Postgres in read scenarios, given, of course, the query plan provided by both databases. Both have very advanced query planners.

And they should produce, in most cases, similar plans for the queries. And when they produce a similar plan, SQLite will definitely be somewhere on a spectrum between as fast as and much faster than Postgres. That's for read queries. Right. Writes are a completely different story.

Right. So my question then is, because, yeah, I'm looking at these benchmarks and it's pretty impressive: what if you have to reach it over the network? What if you've got enough traffic that you've spread your application servers out over multiple machines, and they all have to hit the same source of truth, the same database?

How does that change the equation? So SQLite is not designed for that scenario. Scaling out and connecting from multiple application servers to the same database node is not what SQLite is designed for. SQLite is designed generally for vertical scaling rather than horizontal scaling. So you get a bigger machine, or a bigger VM. Right.

Basically. And in that case, you can serve more from the same resources. There are many attempts to actually enable that level of scaling, but then the point would be a little bit moot. Like, why not use PostgreSQL?

I'm working myself on a solution that is midway: it only transmits writes, but all reads are local. So you have multiple nodes, each with a complete copy of the database, and writes are distributed. I've written about that on my blog and haven't open sourced it yet; I'm not sure if I'm gonna open source it or maybe just try to build a service around it.

But you get a full copy of the database locally, and you can do all reads locally. So you get those benefits for reads, and you get slight extra latency for writes. Yeah. I mean, I have to admit, most of the time when I'm scaling, I just go into Linode or DigitalOcean or wherever I'm hosting and say, I want more RAM and a little more disk space, and it goes, okay. And then my website's down for a minute.

So what you're talking about makes sense to me. Yeah. Because it simplifies your setup. Once you go beyond that, you have to consider, okay, what about load balancing?

What if a node fails? What happens here? What happens there? Then you have so many moving parts, and they just keep growing exponentially as you add complexity to your system. Right.

Yeah. Honestly, I think in 2024 the amount of traffic you can serve with one single beefy server is just bloody unreal. If you need horizontal scaling, you will have enough money to pay people to get you off SQLite and onto Postgres. Exactly. I think the point I'm trying to spread is that it is unwise to scale more than what you need initially in your small or starting projects.

You can get a lot more value by eliminating overheads, increasing performance, and eliminating moving parts by moving to an embedded solution. And SQLite is one heck of an embedded solution. It's an amazing piece of software. It just works. And, yeah, my advice would be: don't focus on your infrastructure.

Don't have DevOps. Focus on your application, and it will carry you a long way. And once you're in a position where you really need to move out of this, it's exactly what you said, Ayush: you have the money and the funds to move out of it. Either you're making a lot of money, or you have enough external funding to get out of it, given that amount of user activity.

Yeah. DHH actually said, I'm trying to find the tweet, but he said in a tweet that if you rented, I think it was the 32 or 64 gigabyte Hetzner box, you could, at this point, have run the entirety of Basecamp as it was a few years ago. Right? And so, yeah, I'm really kind of...

I'm liking the way this looks. And then look at things now. Basecamp now is a beast. It's a huge application, and... Yeah. So many subscriptions.

And they had to go through a lot of pain to scale that. But if you're starting something like that now, you have got at least five to six years of growth within a single box. Yeah. And they said, when these guys first started way back in the mid-2000s or whatever, for a number of years they were running on one physical server. Not even virtualization back then, because this is 2004.

So, yeah, I'm fully on board with this just-rent-a-VM approach. Yeah. He said right here: you can rent a dedicated 48-core, 256 gig, 2 terabyte AMD EPYC server from Hetzner at $236 a month, or €236 a month. And, yeah, I mean, I've never had an application that got so much traffic that that wasn't enough.

Yeah. And those 48 cores translate to 96 cores in cloud speak. So, yeah, that's a lot more than many, many applications need, and a lot cheaper than the deployments we see on the cloud. Yeah. It's crazy how much money you can save when you cut out complexity.

Let's dig in a little bit on SQLite and its use in a web application. So, putting Litestack and all the amazing stuff it does to one side: if I wanted to use SQLite directly with Rails, I know there are some gotchas, backups being one main thing: how do I back up and not lose my file? And the second thing is, I've been looking through the Campfire code base, which is obviously SQLite, the ONCE product from 37signals. Mhmm.

And the only thing I could find in there that was specifically to do with SQLite in terms of initialization and setup was something called a busy handler. So could you talk about what that is and why it's necessary? Okay. So, originally, SQLite is configured for really, really embedded use cases. It is optimized for usage on a phone.

And probably, between the three of us, we have maybe at least a hundred SQLite instances running at this very moment on our phones and computers. But that use case is completely different from a web application that requires multiple users accessing it at the same time, as fast as possible. So there are a few caveats and a few knobs that should be set. Litestack does that completely by default, since it publishes its own SQLite driver to Active Record. And there are other, similarly minded gems from Stephen Margheim.

He has put a lot of effort into that area, and I think he is helping the vanilla SQLite experience for Rails a lot. By the way, one thing about Litestack: Litestack is not just Rails. You can basically use Litestack with Hanami, use it with Sinatra, whatever framework you'd like. But let's get back to the configuration.

So, basically, what you need is the following. You need to ensure that you can do writes while you do reads; the default configuration can only do either writes or reads. So you change the journaling mode from the default, which is DELETE, to WAL, which is the write-ahead log. In that case you can have a writer and a reader at the same time.

Actually, an unlimited number of readers and a writer at the same time. That's the first configuration. The second thing is a little bit tricky. When connections try to capture the lock on the database, they will either succeed in getting the lock or fail because another process captured it. And if it fails, it will return an SQLite busy error to you.

Hence, we need to have a busy handler. And what does that busy handler do? It captures that error, doesn't propagate it to the application, and tries again later to capture the lock, which is basically what a central server like Postgres would do with its locking. Of course, its locks are a lot more fine-grained than SQLite's, which is a single lock for the whole database. And then you also have something in the transaction type.

SQLite has multiple transaction types. There is the default, which is the deferred transaction, and there are the immediate and the exclusive transactions. What we also do in the driver, to run a web application properly, is make all transactions immediate. A deferred transaction is a read transaction that you can upgrade midway to a write transaction. And the problem is that if you attempt to upgrade the lock and another transaction has already written to the database, the operation will fail with an SQLite busy error.

But if you start with an immediate transaction, then you have a write lock right from the beginning of the transaction, and you will not hit this error anymore. So, basically, for configuring SQLite, I think there are four main things to do: change the journal mode to WAL, set a busy handler, and configure your memory map so you have a shared cache across all processes rather than a cache per connection. And, yeah, that's it.

And set your transactions to be immediate. These are the four things that need to be done. Litestack does that automatically. Stephen's SQLite adapter gem also does that, and I think some of these are being merged upstream into the SQLite adapter for Active Record. Mhmm.

So how do you make these changes? Do you do it at the Ruby level, or do you have to run some SQL? How is this configuration done? In Litestack, it's done by overriding the SQLite driver and applying these at initialization, making sure all connections share these attributes. So this is on the Rails end, or the SQL end or whatever, not on the SQLite end?

It's on the SQLite adapter end, or the SQLite driver end, basically. So you're not asking the database to do anything different; you're telling the application it has to manage its connections this way. More or less, yeah.

But at the end of the day, Rails itself is not aware of that. Right, it doesn't see that this is happening. The changes that are gonna go upstream will bring that knowledge to the Rails driver. Mhmm.
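
(As an aside for readers: here is a hedged sketch of applying those four knobs by hand with the sqlite3 gem. Litestack's driver, and Stephen's enhanced adapter, do the equivalent for you automatically; the specific values below are illustrative, not recommendations.)

```ruby
# Hand-applying the four settings discussed above; values are illustrative.
require "sqlite3"

db = SQLite3::Database.new("production.db")

# 1. Write-ahead logging: an unlimited number of readers plus one writer.
db.execute("PRAGMA journal_mode = WAL")

# 2. Busy handler: retry instead of surfacing SQLITE_BUSY to the application.
db.busy_handler do |retries|
  sleep 0.001
  retries < 1000 # keep retrying for a bounded number of attempts
end

# 3. Memory map: share database pages across connections/processes instead of
#    keeping a separate page cache per connection (128 MB here, arbitrary).
db.execute("PRAGMA mmap_size = #{128 * 1024 * 1024}")

# 4. Immediate transactions: take the write lock up front so a deferred
#    transaction can't fail midway when it tries to upgrade to a write lock.
#    (Assumes an accounts table exists; purely illustrative.)
db.transaction(:immediate) do
  db.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
end
```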

That makes sense. And regarding backups, is Litestream part of Litestack, or is that something... No. Litestream is not part of Litestack. You can use Litestream alongside Litestack. Litestream is a very nice tool.

You can basically back up the database incrementally and put that on S3. I personally use a different method, which I'm trying to spread as much as possible. In most situations, when you have a VPS or a VM on something like Linode or DigitalOcean, my recommendation would be to put the database on a replicated volume rather than on the VM directly. In that case you have the whole database living on the network, basically. And then, if you have a file system that supports copy-on-write semantics, like Btrfs or XFS, you can have a backup of the whole database in under 2 milliseconds by just copying the file.

And you can have as many backups as you want. You can have a cron job doing a backup every second. All you need to do is open a read transaction, take the copy, and it will be consistent. I have also written about that; I have a blog post about backup strategies where I explain some of these, including Litestream, but I don't go deep there.

But I also explain some of the other options if you have such a file system. So if I didn't really care about having every second of data backed up, and I wrote a cron job to back up the entire database file every hour or so and upload it to S3 or some remote storage, that's basically a viable backup solution, right? Because it's just a file.

Yeah. But Litestream would be even better here because it does incremental updates. Okay, so not the whole file each time; it just sends the changes.

Or, as I mentioned, on a replicated volume you have your data replicated across the data center. You don't even need S3. You just make a copy, a cp of the file, and it will be a copy-on-write copy; even if you have gigs of information, it'll take like 2 to 4 milliseconds. Wow.

As long as all the transactions have completed? No, no, no, it's unrelated to transactions, because you open a read transaction and you copy.

You don't care. You get snapshots. Oh, I gotcha. Okay. You get a valid snapshot of the database.
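
(For readers: a hedged sketch of the snapshot approach described here, assuming the database sits on a filesystem with reflink support such as Btrfs or XFS, and with hypothetical paths. Depending on how recently a WAL checkpoint ran, you may also want to checkpoint or copy the -wal file; SQLite's own VACUUM INTO is shown as a filesystem-agnostic alternative.)

```ruby
# Snapshot-by-copy sketch, as described above; paths and schedule are made up.
require "sqlite3"

db = SQLite3::Database.new("/data/app.db")

# Optionally fold the WAL back into the main file first, so the bare file copy
# contains the latest committed data.
db.execute("PRAGMA wal_checkpoint(TRUNCATE)")

db.transaction(:deferred) do
  db.execute("SELECT 1") # establish the read snapshot for the duration of the copy
  stamp = Time.now.strftime("%Y%m%d%H%M%S")
  system("cp", "--reflink=always", "/data/app.db", "/data/backups/app-#{stamp}.db") or
    raise "backup copy failed"
end

# Filesystem-agnostic alternative built into SQLite (3.27+): an online,
# consistent copy into a new file, no copy-on-write filesystem required.
db.execute("VACUUM INTO '/data/backups/app-vacuum.db'")
```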

Okay. I'm planning to add some functionality in Litestack that would do that for you, like open the transaction and issue the right copy command. But it will not work efficiently if you don't have XFS or Btrfs or ZFS. So, are there any other gotchas for someone coming from a Postgres background? If I'm writing a Rails app, or let's not say Rails, any web app with SQLite as the database, is there anything else I need to be aware of? Yeah.

So I'm not shooting myself in the foot. Definitely. And that's the biggest concern when you're dealing with a database that is, I like to call it, a decentralized embedded database, because it doesn't have a server to coordinate actions. That basically means write performance, and specifically things like creating an index on a large table. Because of its nature, SQLite will always take a lock on the whole database whenever it's doing a write operation. So only one writer at a time.

Keeping your writes small and tidy means you get very fast writes. You can do tens of thousands of writes per second on a SQLite database, and it will beat any other database on a comparable configuration, like a single node. But if you have really large write transactions that do a lot of things, especially if you're trying to do things within the transaction like calling out to a server or something, then you're hurting SQLite's performance a lot. And while this will degrade other solutions as well, including Postgres and MySQL, they will be a lot more graceful at giving other transactions a chance to run at the same time.

SQLite will not be able to do that. So if you have really large transactions that do a lot of writing, you should avoid that and break them down, or you will suffer from queued write requests, because there's only one writer at a time. That's one thing. The other thing is that some of these are inescapable, like creating an index. When you create an index, you have to wait until the index is finished, and it has to have a snapshot, a correct state of the database, while being created.

So it cannot allow other writers to write at the same time, even to other tables. In that case, creating an index on a really large table can actually take your database offline briefly. And, I don't know if I'm completely making this up or if I read it somewhere, but are N+1 queries a feature rather than a bug when it comes to SQLite? Yeah. It's not a feature per se.

Basically, here's the thing. The overhead of running a query on SQLite is way, way less than running a call across the network stack, whether locally or remotely. So if you have N+1s, they're not as expensive. Of course, running one query instead of N+1 queries is cheaper. But the difference is not as big as with a client-server database.

It's a lot less; it could be, like, 50% more or something, depending on N. So if doing an N+1 would actually make your code nicer, just go for it, because it's not as penalizing to your system as it is with something like MySQL or Postgres. The authors of SQLite use N+1 intentionally in some cases on their website and in the Fossil SCM system because it makes things a lot easier.

And it helps encapsulation a lot. You're hurting my head. I had that beaten out of me, I swear. Yeah.

Think about it: it's an optimization. Eliminating N+1 is an optimization. Yes. It's not something that would make your code look better.

Right. Actually, it's the other way around. Well, and it still works, right? It's just not as fast.

So, yes, the idea is that in SQLite it's not that big a deal. It's not an issue. Optimize for your code organization and encapsulation instead. Right.
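
(A tiny Active Record illustration of the trade-off being discussed, using hypothetical Post and Author models. With a client-server database the first form is the classic N+1 trap; with embedded SQLite each extra query is roughly a memory call, so the penalty is far smaller and readability can win.)

```ruby
# Hypothetical models, purely to illustrate the point above.

# N+1 style: one query for the posts, then one query per post for its author.
Post.limit(20).each do |post|
  puts "#{post.title} by #{post.author.name}"
end

# Preloaded style: two queries total, the usual optimization.
Post.includes(:author).limit(20).each do |post|
  puts "#{post.title} by #{post.author.name}"
end
```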

Yeah. It's just a different mental model, isn't it? Because suddenly the database is just local. Imagine if you couldn't do partials in views; you'd have to take those and inline all the partials.

It's exactly the same. Partials are more expensive, but you don't go there, because then it's a nightmare code-wise. True. Yeah. So let's pick up on something we were actually briefly discussing before the show started.

So I have done a fair amount of work on search in Postgres, the Postgres native full text search. And for the client I'm working with at the moment, I've also done quite a lot of Elasticsearch. What is the search story on SQLite, and does Litestack have anything building upon that? Yeah. So SQLite has its own native search implementation, which is called FTS.

There are multiple versions of that: 3, 4, and 5. 5 is currently the most maintained version. So SQLite does have an FTS story. The problem with the FTS story in SQLite is that it is slightly rigid. You have to have a table that maps to another table in the database, and the mapping has to be static.

If you change a column and you wanna add it to the index, you have to rebuild the whole index, things of that sort. In Litestack, we have a Litesearch component which builds on top of FTS5 but also brings a lot of dynamism to it. It actually implements a dynamic layer, and the interface is a lot like Meilisearch. If you use it, you define your index in almost the same way, but then all of this goes down into an FTS5 table.

An FTS5 module, I'm sorry. So there is, I guess, a strong search story there, and the performance is amazing. I have a blog post comparing Meilisearch to FTS5, and you have that in the benchmarks on the repo as well, and I kept repeating the benchmarks because I couldn't believe the results.
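
(For readers: a minimal sketch of SQLite's native FTS5 used directly through the sqlite3 gem, not Litesearch's higher-level, Meilisearch-like interface. Table and column names are invented; the porter tokenizer shown is the Snowball-style stemming mentioned a little later in the conversation.)

```ruby
# Plain FTS5 through the sqlite3 gem; hypothetical table and data.
require "sqlite3"

db = SQLite3::Database.new("search.db")

# A standalone FTS5 table with porter stemming.
db.execute(<<~SQL)
  CREATE VIRTUAL TABLE IF NOT EXISTS articles_idx
  USING fts5(title, body, tokenize = 'porter')
SQL

db.execute("INSERT INTO articles_idx (title, body) VALUES (?, ?)",
           ["Scaling with SQLite", "Embedded databases keep most reads in memory"])

# MATCH supports phrases, AND/OR/NOT, NEAR, and prefix terms; bm25() gives
# relevance ranking (lower scores are better matches).
rows = db.execute(<<~SQL, ["embed* AND read*"])
  SELECT title, bm25(articles_idx) AS score
  FROM articles_idx
  WHERE articles_idx MATCH ?
  ORDER BY score
SQL
rows.each { |title, score| puts "#{title} (#{score})" }
```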

Mhmm. Meilisearch has a lot more functionality, I would say. Vanilla FTS5 doesn't have, for example, something like typo tolerance, and you need to add some modules in order to support multiple languages. But aside from multiple languages and typo tolerance, the performance difference has to be seen to be believed.

Oh, okay. And is that just performance as in the time it takes? How's the performance with regards to, yeah, accuracy of search results? That's a different thing.

That's more abstract, isn't it? Yeah. Yeah, that's a lot harder to measure. But at the same time, it's a known quantity.

Working in NLP, the industry has been doing that for a long time, and the tools to produce those results are already there. So while typo tolerance is not there by default, you can actually implement it. There is stemming using the traditional stemmers from Snowball, like Porter and others, and these are mostly the stemmers used by almost all implementations, if we're talking about Latin-based languages and Latin alphabets, basically. Aside from that, it's just typo tolerance.

That's the difference. So you get NEAR search, you get phrase search, you can mix and match AND and OR and things like that. You have prefix search.

All the usual suspects, basically. So from a search quality perspective, I believe the biggest difference is typo tolerance and fuzzy support, which are basically the same thing; that's not there. But aside from that, you get everything you would get with a typical search engine like Elastic or Meilisearch, things of that sort. That's a pretty solid story.

To be honest, I wasn't expecting it to be that solid. It is strange. The SQLite team, all two of them, produce very, very solid software and components, and they build so many things from scratch. It's mind-boggling. And yeah.

So I would love more people to actually use these. So what I'm curious about is, and it's funny because you mentioned that you've been doing this since 2005, and I got into Ruby and Rails in 2006, so I've kinda seen a lot of the same stuff come and go: what changed? Right?

Because back then, SQLite, yeah, it was the default when you ran Rails without any flag for the database, but it was essentially understood that you're probably going to replace this with another database. So what changed to the point where now people are realistically looking at this and going, I could run this in production? Yeah. I guess this impression that SQLite is just something to start with, or even to use just for testing, was the norm, and it was understandable, because many of the features that we're talking about happened over the years. Mhmm.

So write-ahead log mode was not there. I'm not sure if it was there in 2005; I guess it was. But many of the other features around it were not. And now you see how those features came together and made SQLite that rich.

By the way, when you build it, it also comes with the Geopoly module, so you can do distance calculations and stuff like that, plus spatial indexes and many other features. It has grown a lot and is now capable of a lot. And I think people are starting to realize that, yeah, I might not need a lot more than that, especially with the advancement in hardware that is happening in parallel. So you get these two: now you can have a single box that can do a lot, and a piece of software that is capable of managing all your data and your data requirements.

And what I'm trying to do with Litestack is to build that layer between both, between your data layer and your application, that is friction free. No configuration required. You don't need to do anything. Yeah.

That makes sense. And it's kind of exciting just to see that, yeah, somebody could reliably come in and do a lot of this stuff. It's funny too, because I'm working on a different layer than you are, but I've been working on pulling together Rails Composer, and I actually got the domain from Daniel Kehoe. But it's the same idea, right?

It's: I need these areas of functionality in my application. And so I pull in one gem, and then I can just run generators and have it. Right? Yes. Exactly.

And so, you know, with this it's the same deal: I pull in Litestack, and then I immediately have the capabilities I need to run a modern Ruby on Rails app. Yeah. Almost all your data requirements are already fulfilled. Go build a great application and don't bother about the infrastructure. Don't pay for DevOps.
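
(For readers who want to see what that wiring looks like: a sketch based on Litestack's documented Rails integration as I understand it. The adapter and config names below come from the project's README; double-check it, since exact keys can change between versions.)

```ruby
# Gemfile
gem "litestack"

# config/database.yml switches Rails to Litestack's SQLite wrapper:
#   production:
#     adapter: litedb
#     database: db/production.sqlite3

# config/environments/production.rb
Rails.application.configure do
  config.cache_store = :litecache            # Litecache instead of Redis/Memcached
  config.active_job.queue_adapter = :litejob # Litejob instead of Sidekiq + Redis
end

# config/cable.yml points Action Cable at Litecable:
#   production:
#     adapter: litecable
```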

Eliminate all these points of pain, headache, and money spending. So are there people out there currently running their production apps with Litestack? Some that I know of, yes. And some are running Litestack on staging.

I know of a large Ruby app whose staging environment was switched to Litestack overnight without even the developers knowing, and they didn't find out until later, after wondering why everything was much faster. That is a sales pitch and a half. I was gonna say, you're making me feel stupid for not trying this. Yeah. I think the idea is, if you're building something and just starting out, it's really unwise to go down a more complex path.

You should go with the path of least resistance until it starts resisting you. And hopefully it will never resist you; it will never show you that friction. Yeah. You're speaking my language here.

I mean, this is where, and I picked some of this up from reading Ayush's book, The Rails and Hotwire Codex, where essentially he walks you through, I mean, the first what, third, almost half of the book is building the authentication piece. But I got through it, and I was just like, this is not as hard as I thought, and I kind of get all of the niceties that I want out of it because I understand it. Anyway, it appeals to me in that same way, because I'm looking at it and going, okay.

You know, the underlying engine for this thing, which is essentially Rack and the database, that's stuff that I don't want to fiddle with, right? Yeah. And I can just rest everything on top of it. Exactly.

Pushing that complexity away from you and hiding it completely, versus exposing every complexity. If you're dealing with a traditional app, you'll have to think of many things: where should I put my cache? Do I have a single Redis instance for all of these, or do I have to split them? And now that I'm scaling anyway, how do I scale these components as well? Because you can add application servers, but then your database will need to scale.

So... Yeah. The nice thing with vertical scaling is that you scale everything together; you don't scale each piece separately. But with a client-server architecture, you have to scale each one separately. Yeah.

So, again, coming from my Postgres brain, because that's what my background is in, two questions trying to find equivalents for Postgres. Is there something like a JSONB column in SQLite, or an equivalent? It has JSONB.

Yes. But it's a different format. It's SQLite's own format, so you cannot take a JSONB record from here and just put it there. You have to convert it to JSON first. Yeah.

But, yeah, it has a JSONB format, and, yeah, you deserialize it and then re-serialize it into the other database. But almost all the JSON functions you'd expect are there, for both the JSON text format and the JSONB format. Oh, wow. So you can query for stuff that's inside a JSON column? Yes.

And you can create an index on a JSON field, and so on. Nice. Another thing I use quite often, especially to filter stuff in a UI, is LIKE and ILIKE queries, and I use trigram indexes in Postgres to speed those up. So, yeah, we spoke about full text search, but is trigram a thing in SQLite? One of the tokenizers available by default is the trigram tokenizer in FTS5. So you can have trigrams.

Okay. And that'll help with LIKE and ILIKE queries? Nice. Exactly. Why don't I use SQLite again?
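
(A small sketch tying the last two points together, again through the sqlite3 gem: querying and indexing inside a JSON column, and a trigram-tokenized FTS5 table for LIKE-style lookups. The schema is hypothetical, and this assumes a reasonably recent SQLite build, since the JSON functions and the trigram tokenizer aren't in very old versions.)

```ruby
# JSON querying/indexing plus trigram-backed LIKE lookups; hypothetical schema.
require "sqlite3"

db = SQLite3::Database.new("catalog.db")
db.execute("CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, attrs TEXT)")
db.execute("INSERT INTO products (attrs) VALUES (?)", [%({"color":"red","price":42})])

# Query inside the JSON document...
red_ids = db.execute("SELECT id FROM products WHERE json_extract(attrs, '$.color') = 'red'")

# ...and index that expression so the lookup doesn't have to scan the table.
db.execute("CREATE INDEX IF NOT EXISTS idx_products_color ON products (json_extract(attrs, '$.color'))")

# Trigram-tokenized FTS5 table: LIKE/GLOB patterns against it can use the index.
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS product_names USING fts5(name, tokenize = 'trigram')")
db.execute("INSERT INTO product_names (name) VALUES (?)", ["Mechanical keyboard"])
matches = db.execute("SELECT name FROM product_names WHERE name LIKE '%keyb%'")
```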

So, yeah, here's another... oh, go ahead. What was that? I wanted to mention that I also have an extension for SQLite that uses Roaring bitmaps, so you can create facets for your search results.

Say you have a search index and you wanna generate distributions of results. You're searching books, for example, and you wanna see how many of the books in that search are in which categories. This can be done using bitmaps in the blink of an eye, under 10 milliseconds, and you get so many statistics. Stuff like you see on Amazon.

How many of those are red and how many are blue, how many are at this price and how many at that price. All of those things can be done almost automatically in SQLite. Very cool. So I'm wondering: you mentioned that there was a company that switched over to Litestack overnight, and I'm wondering what that process looks like.

I mean, I would imagine that the queuing and things like that are probably pretty easy, right? Because the job's either been run or it's not, so you pull the job from somewhere else. But I'm looking at my data on some of my apps, and some of it's kinda hairy.

And so I'm wondering, okay, what would it take for me to go in and... Yeah. What do you mean by hairy? My models are kinda messy. How are they?

In the end it's SQL in tables; a table is a table. Yeah, that's true. I guess it's the state of my models that I'm not happy with, not necessarily the state of the data. Yeah. That's one level above the problem we're dealing with.

It's completely opaque to the models. So you just, what, export and then import? That's an option, yes. You get a SQL dump somehow and... You can do a SQL dump and just copy that over. Or, what I'm trying to work on is an import/export at the Active Record level, so that you can easily point it at this database and that database, and the data gets migrated with Active Record semantics.

So any potential compatibility issues are just washed away. Oh, wow. That would be cool. Yeah. Actually, it has this machinery already.

We'll just need to glue it together. Right. Or, by the way, you can dump in an Active Record friendly format, basically. Uh-huh. And restore after you change the driver. Yeah.

The only other thing I'm thinking about with my setup is, I've been using Kamal to deploy, and it puts Traefik in front as the load balancer. The way I have it set up, I have a Traefik load balancer in front of two application nodes. So, effectively, I would have to get rid of one of those. I wouldn't need Traefik anymore, right?

But if they are on the same machine, then you can have a shared volume. Right, I can set up a volume for the Docker containers. Yeah. Yeah.

And they can share it. Mhmm. But it's much, much more straightforward to just have one. Yeah. The nice thing about having two is it makes zero-downtime deployments easier.

Right. But it is still doable with one. Mhmm. And that's something, yeah, that's a topic I want to address at some point.

You can address it now if you want. No, no. I mean, there is... Oh, you mean address it with a tool? Yes.

Yeah. Mhmm. Yeah. Like, maybe a Litedeploy or something.

That'd be interesting. Because almost all, not all, but almost all modern application servers support zero-downtime deployments, right, and zero-downtime restarts. So you definitely lose some of the WebSocket connections, but that's it. Even other connections can remain up.

Well, especially if you're using something like Kamal with the Docker base, right? You just wait till the other, I wanted to say machine, the other container is running. Yeah.

And then it's, okay, now everything new goes to this one, and when the other one's done, you just kill it. And I think that's one of the nice things about this setup: many of the issues people thought they were gonna face with similar setups are actually addressed. And, yeah, I would definitely recommend giving that smallish stack a go and seeing if it works for you.

And if you're starting something new, just don't bother. Don't bother with the complexity; go with this instead. Is there a use case where you would say don't start with SQLite or Litestack? Let me think.

Maybe if there is a use case where someone is writing huge records to the database. And... What do you mean by huge records? Like, we're talking gigabytes, maybe. Some people do that. Some people do that.

And if that's the core of what they're doing, then definitely you need something that can spread that data over a larger surface in terms of files, a networked or even a distributed database eventually. SQLite will not be the database for you. And not necessarily gigabytes; if you're treading in that area, like 800 megabyte files, that's still very large. Other than that, I think it could be a case-by-case basis, but if you have an application where many people are writing something relatively big at the same time, or doing complex write transactions, then SQLite is a no-go.

But if you have lots of people doing smallish transactions, SQLite is more than enough. It would be even faster than the typical solutions. So, yeah, I would say you'd need to have a strong case against using SQLite to not use SQLite. So do you see this becoming kind of the thing that people reach for in the future by default? I hope that would be the case.

That's the future I see. Why go through the complex route if I have something simpler? And I think it just makes sense to simplify things. This industry has suffered a lot from breeding complexity. We've been doing really, really complex setups, especially in the front-end and infrastructure domains, just because people wanted to play with different stuff.

And I think it's about time; and it helps that the VC money is drying up. I guess it's about time to realize that we need to build applications, not resumes, and focus on the value we're trying to deliver rather than 'this is my stack, this is my front-end stack' and things like that. That is not what people should be bothered with. And if your engineering team is actually focused on what type of stack they're building... I've consulted with a lot of teams, and I've seen teams that would talk to me about the technologies they're using rather than what value they're delivering or building.

Then you're in the wrong place. Yeah. I've been there. Yeah. Yeah.

It's not fun. You see really complex stacks, and you ask how many users do you have, and they tell you we have 200. Then why bother? Why all this? Why not focus on actually getting the business off the ground and then running with it?

And once you have 10 million users, maybe we can talk. I've hosted millions of users on embedded databases. I had a situation where a friend was asking me about their hosting costs because they had ballooned. They were paying, at the time it was a lot, like $2,500 a month at AWS. Wow.

And they had almost half our user base, or even less. We were both gaming startups, and they were asking how much we pay every month, and I was really, really embarrassed to tell them that we were paying $75. So it wasn't great. So, yeah, people need to realize they can get a lot of value by removing complexity, removing redundancy, and removing abstractions and indirection.

Right. It's pretty mad that computers are, like, an order of magnitude faster than they were 20 years ago, but software just seems slower than it's ever been. Yeah.

We're stuck in a place where, because computers are getting faster, software is getting faster naturally. But it's not trying to do things differently. No one has come up and said, let's make use of the capacity we now have in a single machine. Yeah.

I was gonna say, and this kinda came to me when DHH was doing his keynote at Rails World last year. He points out, you know, we have Solid Queue and Solid Cache because disk speeds have grown in this way, right? And he was basically pointing out, yeah, we're gonna take advantage of this. And it drove home to me that the limiting factor isn't the technology, it's the mindset.

Right? Because I'm still stuck in... you know, when I started programming in 2000, well, I started programming in high school in the nineties, but when I started doing Rails apps in 2006, I'm looking at it and going, oh, I've got to account for all this stuff, and so I have to go wrangle all of these things to make it work. And so the limitation was the way I thought about applications being built, not the hardware or the technology that was available to me.

And I think that's what you're saying, Mohammed. Yes. Exactly. Exactly. People have been conditioned to do things a certain way.

You need a cache, you put up a Redis instance. You need a job queue, you put up a Redis instance and a job runner and stuff like that. Yep. And people think this is the best they can get. And I would challenge anyone.

Anyone. I wanna see faster cache read performance from any adapter for Rails than Litestack's cache. Of course, Solid Cache is fast as well, but it's not as fast, because it relies on Active Record. But once you hit SQLite directly, there is nothing faster than this. You're reading literally from memory.

It's an order of magnitude faster than the Redis adapter. Okay. Hang on. Hang on. Time out.

You're saying that you built Litecache this way, and the queuing this way as well? Yes. So you're not relying on Active Record. No. No.

No. You're just going straight to the database and going, what's in the cache? Exactly. Exactly. Because the nice thing about Rails is that it has these really nice interfaces for Active Job, for the ActiveSupport cache, and for Litecable, for Action Cable.

I'm sorry. And what you do is provide an adapter that does the job; rather than going one level up and relying on Active Record, you just deliver the functionality directly. Right. Because you're using SQLite. Yeah.

It ends up being very fast. Really, really fast. Much faster, of course, than having the Active Record abstraction on top of it. And Active Record buys you a lot of things, like distributed databases, having the same application talk to multiple databases, sharding, and stuff like that. You don't need that for SQLite.

Yeah. It's one database on the same machine, so you don't need all of this. You're making me think. Yeah. It's really hard to kinda change your mental models once they get set.

My background was in mobile development when I started working as a professional software developer. I started off as an iOS and Android developer, so that's 2014, and I only switched to Rails in 2020. And I remember, when you're in a mobile app, going to the disk is considered slow. So when you launch the app, you basically load up whatever you can in memory, and if you're gonna go to the disk, you have to really think about the code you're writing, because you don't wanna lock up the UI.

So I had this drilled in me going to the disk is slow, going to the disk is slow. So when I came to Rails and I was trying to understand Russian doll caching in Rails, I couldn't understand why there was, a benefit because we still had to go to the database to get the updated app value of the record. Mhmm. I couldn't understand that in a web context, rendering the HTML is a slow path. It took me months, and I'm not exaggerating months to change that mental model.

So I completely get that people have been doing this for decades. I have this model set about how you structure an app with the database server and stuff, and now they're just, like, it's really hard to just change that. Yeah. I'm glad I'm glad you brought that up because one one of the really nice things about about switching to Lightstack or not nice, actually, the bad things about switching to Lightstack, that it suddenly highlights how slow your, front end is. Like, the rendering path, like, the the view layer, basically.

It it's really highlighted because you eliminate a lot of the bottlenecks at the database layer. And now your bottleneck your core bottleneck is rendering whether you're rendering via JSON or rendering, like HTML, in your app. So it just highlights it very, very fast, and and you now have to deal with this level if you wanna keep up with the with how fast the back end has become. I think I think the other the other thing here is, like, Rails has a lot of, or Ruby itself had a lot of progress lately in the, JIT compiler, domain. So Ruby code is is gonna run faster, and it's getting faster and faster, which is really nice.

We're looking at, like, 50% plus, speed improvement. Well, depending on what your app is doing. Yes. But but yeah. I'm looking at something like Lobster, for example.

It's almost 50% faster in in the benchmarks that, the Shopify team is is sharing, which is really nice. It's it's a full fledged trades application. Now imagine this is one bottleneck, computation. Now the other bottleneck is IO. Now imagine if you also get even a nicer boost in that in that regard.

So, actually, by the way, moving to Lightstack doesn't just improve performance. Like, think about a situation where you have 2 VMs talking to a single database. You'd probably end up with 1 VM of the same size and get almost the same performance from your application, especially if you don't have a huge bottleneck in the view layer. If you do, then, yeah, you'll need basically as much computational power as you had in the previous setup. But still, you're saving a lot of cycles that you otherwise would have wasted on communication over the network stack.

You save that for other computation needs. Very cool. We're getting toward the end of our time. Is there anything else you wanna add, or maybe just summarize where we're at with SQLite and Lightstack? Yeah.

So I would say there is a big update coming soon, hopefully, like, the next version of Lightstack, which will have the Lightkd component officially released. And I'm hoping also to bring one other major advancement, which is concurrent queries in Ruby. One of the issues with SQLite is that it's a C extension, and C extensions, basically, as you're running C code, take over the Ruby process. They don't allow other things to happen while they're running.

So the result is, whether you're doing a read query or a write query, whatever type of query you're doing, if it's a long one, it will lock the process until it's finished. And the thing that I'm trying to bring forth in the next version is the ability to run multiple queries, even in a single-threaded application if you're using fibers, or in different threads if you're using threads. Mhmm. So they will run concurrently next to each other and allow the application to be a lot more responsive, even if you hit a really slow query. Cool.
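
A minimal sketch of what that would look like from the application side, assuming a hypothetical app.sqlite3 file with an orders table: each thread opens its own read-only connection and runs a query. Whether long queries actually overlap depends on the C extension releasing the GVL while SQLite works, which is exactly the improvement being described here.

    # Minimal sketch; the database file and the "orders" table are made up for
    # illustration. One read-only connection per thread.
    require "sqlite3"

    queries = [
      "SELECT COUNT(*) FROM orders",
      "SELECT SUM(total) FROM orders WHERE created_at >= date('now', '-30 days')"
    ]

    threads = queries.map do |sql|
      Thread.new do
        db = SQLite3::Database.new("app.sqlite3", readonly: true)
        db.execute(sql)
      ensure
        db&.close
      end
    end

    threads.map(&:value).each { |rows| p rows }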

Very cool. Yeah. And I'm hoping to also make the other aspects of the story solid: deployment, backups. And yeah, once we're there, I really think it makes a lot more sense to go with the simpler solution.

Mhmm. Regardless of what you're doing, it makes a lot more sense to go with the simpler solution, and, hopefully, this will be the simpler solution. Nice. Alright. Okay.

One final thing I wanna say: I'm offering, basically, free consultation for anyone who wants to migrate to Lightstack. So just ping me, and I'm happy to help you gain the performance benefits, and it's an opportunity for me to learn about what could happen in the wild in the process. Nice. How do people reach you for that? I'm available on X, so you can DM me at any time.

And, also, you can send me an email. I'm not sure if I can share my email here or not. Yeah. Basically, it's old moe, same handle as on X, at gmail.com. What were you saying, Ayush?

I was asking the exact same question as you. Okay. Yeah. Good deal. Alright.

Well, let's go ahead and do our picks. Ayush, you wanna start us off? I hadn't thought about this. I'd completely forgotten we do these. Do you wanna go first while I'm thinking?

Sure. I'll throw a few out. So it's been a few weeks since we recorded, just because life is insane. Yeah. So I played a game with some of my friends that I hadn't played before.

It's a really, really simple game. It's called 6 Nimmt. It's got a bunch of other names too, so you might have played it. I think it's also called, like, Take 5. Not sure.

Let me look it up on BoardGameGeek. Yeah. It's called Take 5 or Category 5 or, anyway, what you do is you have numbered cards. It's super simple. BoardGameGeek has a weight of 1.19, which means that it's a casual game that is pretty simple.

And they say that ages 8 and older can play it. That's probably fair. It'll play up to 10 players, 2 to 10 players. I think it took us a half hour to play a full round. Anyway, the cards are numbered, starting at 0 or 1.

I can't remember if it starts at 1, but it goes up to 100. No, it goes past 100. But, anyway, what you do is everybody plays a card out face down. You flip the cards over, and then the lowest card gets put on top of the stack that it most closely matches.

Right? So if the top card on any of the 4 stacks you've got in front of you is a 7 and you put out a 10, unless an 8 or a 9 is on top of any of the other stacks, you're gonna put it on the 7. Right? And then the next one goes, and so on. You take the stack if you play the 6th card on the stack.

And then the rest of the game is, let's say the lowest card showing is a 7 and you play a 3, then you get to pick which stack you take, and you're trying to get the fewest points possible. And different numbers are worth different numbers of points. Some of the cards are worth 1, some are worth 2, some are worth 3, and some are worth 5. And so you just play till all the cards are gone, and then you tally up your score. That's the whole game.

It was really fun. Just kind of, you know, an interesting filler game. And so my first pick is gonna be 6 Nimmt. I'll put a BoardGameGeek link and an Amazon affiliate link into the chat or into the comments. A couple of other picks.

My son and I went and saw A Quiet Place: Day One last night, and it was good. I've only seen A Quiet Place. I haven't seen A Quiet Place Part II. But it was good. It wasn't as good as A Quiet Place, but it was definitely, you know, something that was worth seeing in the theater.

And, yeah, I guess the only other thing that I'm just gonna let people know about: if you wanna follow me on Twitter or Instagram or something like that, I'm gonna be posting videos about doing AI. So I've been pulling a lot more AI stuff into Top End Devs. I got really inspired by the episode we did with Obi Fernandez about building AI agents. And so this is not on the level of, hey.

I'm gonna train all this data into a model and then use my model. It's a level above that, where it's, okay, I've got a large language model. You know, what can I make it do for me? Yeah.

I'd like to... I'm sorry. Go ahead. No. Go ahead. Yeah.

I just wanted to have a shout out for Obi. Like, he's been one of my Ruby heroes for... Yeah. For so long. Yeah. Yep.

And I guess he's getting ready to release The Rails 8 Way whenever that comes out, whenever Rails 8 does, which should be in a couple of months. But yeah, I think Rails 8 is gonna be early next year, because I think 7.2 is gonna come out, like, now, or maybe it's just out. Yeah.

I know that they were looking to try and get it out around Rails World, which is gonna be in September, but I don't know what the timelines look like now because I haven't talked to David in a while. But, anyway, yeah. So he got me excited about that. I've kind of been looking at Olympia AI, which is the company that he put together. Mhmm.

But I'm looking at building an AI assistant, or set of assistants, that effectively are focused on podcasters. Right? So, you know, it does a lot of the work of, well, I want so many of these kinds of episodes and so many of those kinds of episodes, or, hey, go invite such and such a person. You know?

Go invite Mohammed to come talk about Lightstack. Right? So it's smart enough to go make some web calls, go scrape the Internet, go find your email address, and then send you an email, right, kind of thing. Or, hey, we didn't record this week.

Put out an episode that we haven't, you know, rereleased within the last year. Right? And so then it can go and say, well, these are the top ones, and, right, it's smart enough to figure all that stuff out. You know, or even to the level of, you know, Ayush said this, and then he kinda thought better of it afterward. Right?

It's like, I shouldn't have said that because my client doesn't want me to reveal these certain things, right? So I can go tell the bot, hey, go take out where Ayush talked about this, this, this, and this. Right? And then it'll go in and actually have the tools to go and extract it.

Right? You know, and my editor does a terrific job, but it'd be interesting just to see how far we can take it with that stuff, right, where it's cleaning up that kind of a thing. Right? Mhmm. Just tell it, hey.

We recorded a new episode, so it logs into StreamYard, downloads the files, and kinda does some preliminary cleanup on them. Right? So, anyway, it's that level of stuff. I think it'd be way fun.

So I'm working on that, and then, because I know a lot of other people want to learn it, I'm looking to do a boot camp where we use a lot of these tools, right down to some of the LLMs, like the Llama 3 LLM or something, right, where it's, okay, I've got a baseline thing that I have to train some. Mhmm. Or maybe I'm just going straight to GPT-4 or Gemini or some other thing.

Right? And just having it do the work for me, because it is capable. And so, just working all that in and, you know, over 3 months, help people go from not knowing anything about AI all the way up to, okay, now I can add these features to my application. So keep an eye out for that, mostly on social media.

And, yeah, I'm also looking at putting together an AI summit in September. So, anyway. Yeah. And then the last pick I have, and I think I'm gonna connect it to a GoFundMe or something like it, is, now that next year is gonna be the last year for RailsConf, I would very much like to put something together like RailsConf. And I know we have Rails World, and they kinda move it around. I guess they're doing North America and then not North America.

And so I would like to alternate opposite them. Right? So that there's a conference in North America every year, and then there's a conference somewhere else out in the world. Right? And just see if we can get more access.

But there is no way I can afford to cover things like an expo hall on my own. And so if you all are interested in something like that and would like to contribute, I mean, obviously, at certain levels of helping us out, we're gonna, you know, give you tickets or shout-outs or any number of other things. You know? And I'm thinking, you know, kind of base level, maybe you get a ticket; a little higher level, you get lifetime tickets.

You know? Another level, you get sponsorship at a certain level. So, anyway, if you're interested, I did buy the domain. I think it was railsexpo.com. So go to railsexpo.com in, like, a week or 2 after we recorded this, and I should have something up that tells you what we're looking at for contributions and things like that.

So, anyway, that's all the stuff. I kinda wanted to get those out there just because those are things that I wanna put together for the community that I think people are interested in. Ayush, what are your picks? So I've got two nontechnical picks this week. I reread a book from my childhood a few weeks ago.

It's called Hover Car Racer by Matthew Reilly. And, yeah, it's just one of those books that kinda shaped my teens a little bit. Like, it has a quote that I absolutely love, and loved at the time as well, which is, to err is human; to make the same mistake twice is stupid. And I pretty much reread the book just to try and remember what the context of that quote was, because I remember the quote, but I couldn't remember any of the context around it. Right.

But, yes, I enjoyed reading it again. It's a book aimed more at, like, probably a teen audience, but even now in my thirties, I still quite enjoyed it, despite it being quite far-fetched at times. But, yeah, so that's one pick. The other pick is a musical pick, which I don't think I've done for a while. It's a band called Solstice, who I've been a little bit obsessed with, unhealthily obsessed with, for the last couple of months.

I've basically had their last two albums on repeat, through my birthday, since about June, like, the start of June, which is, yeah, just a bit mad. And I haven't got bored of them yet. So, yeah, if you're into any kind of rock music at all, I'd highly recommend checking out Solstice. The last two albums are called Light Up and Sia, S-I-A. Yeah.

Those are my 2 picks. Cool. How about you, Mohammed? You got some picks for us? Not really.

I'm not sure if it's interesting enough, but as we were, like, arranging to get into this discussion, I was working on concurrent index creation for SQLite, which will allow indexes to be built in the background. I now have, like, a set of features that are not open source yet, and I'm hoping I'll be able to build a service around that soonish and allow you to host your Ruby SQLite-powered applications for a fraction of the current cost, at a much higher performance than people are accustomed to, without having to deal with, like, Docker images, Kamal, or things like that. Even simpler. Like, more like Heroku and Stripe. Very cool. Alright.

Well, we'll go ahead and wrap it up. Thanks for coming. People can find you on socials as old moe, just to remind people of that. And, yeah.

Till next time. Max out, everybody.