Innovations in Ruby Concurrency: Tips and Tools - RUBY 648

In this episode, they dive deep into the world of Ruby concurrency and explore the nuances of optimizing performance in web applications. Join Chuck and Valentino, together with special guest JP Camara, as they share insights on the tools and techniques that can transform your Ruby projects.

Special Guest: JP Camara

Show Notes

JP kicks things off with a discussion on their new Wave 3 microphone purchase, which has dramatically improved their audio quality for podcasts and meetings. They also share their experiences at the Boston Ruby meetup, where they connected with prominent figures like Jason Sweat and Kevin Newton.
Our special guest, JP Camara, a principal engineer at Wealthbox, brings his extensive knowledge of Ruby concurrency to the table. With over a decade of Ruby development experience, JP has been contributing to the Ruby community through his in-depth blog series and work on the GVL instrumentation API. He'll be shedding light on concepts like job queues, the colorless programming approach in Ruby, and the benefits of tools like Sidekiq and Solid Queue for managing background jobs.
Chuck and Valentino also join the conversation, emphasizing the importance of concurrency and parallelism in modern applications. They discuss practical examples, challenges, and best practices for efficient resource management and the impact of serverless computing. Plus, discover some fascinating board game recommendations and insights into using microcontrollers for concurrency tasks.
Whether you're a seasoned Ruby developer or just getting started, this episode is packed with actionable advice and technical wisdom. Don't miss out on this essential discussion that could take your Ruby skills to the next level!


Transcript

Charles Max Wood [00:00:04]:
Hey, folks. Welcome back to the Ruby Rogues podcast. This week on our panel, we have Valentino Stoll, and I'm Charles Max Wood from Top End Devs. Go check out my latest and greatest at aiforruby.com. We have a special guest this week, and that is JP Camara. JP, you've been writing about concurrency. We were chatting before.

Charles Max Wood [00:00:28]:
You live back east. I don't know if you wanna go into more detail than that. But, yeah, maybe you should give us just a little bit of your background. And we invited you on for your Ruby concurrency series, so if there's a story behind that, I'd love to hear it.

JP Camara [00:00:42]:
Sure. Yeah. Thanks for having me. Like I said, I'm I'm JP Camara. I'm, Yeah. I live in in Rhode Island. It's on the East Coast. I'm a, principal engineer at a company called Wealthbox, and I've been doing Ruby development for about, I've been developing for like 16 17 years, but I've been doing Ruby development for about 12 years, and some other languages mixed in there as well.

JP Camara [00:01:04]:
And so, I write technical blog posts over at jpcamara.com, and for the past year, actually, I started about a year ago writing, a series on Ruby concurrency. Concurrency in general is something I'm super interested in, and I've I've wanted to create, like, a great in-depth resource for the Ruby community. And so as a result of doing that as well, I ended up contributing a bit to, the GVL instrumentation API. I contributed the, macOS support. Yeah. So I I've done a little bit with that. So I've worked a little bit with Jean Boussier and, Ivo Anjo with his GVL tracing gem.

Valentino Stoll [00:01:40]:
Mhmm.

JP Camara [00:01:40]:
And I contributed the, macOS support for Koichi's, M:N thread scheduler for Ruby 3.3. So it's it's been pretty fun. Like, I've it's been a lot of work, and I still have a lot left to do, but it's it's taught me a lot. It's allowed me to, like, contribute to Ruby itself. I've learned you know, I'm a very terrible C programmer now. And so so, yeah, it's it's been an interesting year, kind of digging into this. And so about about 3 months ago, I started releasing the first parts of the series, and those have been pretty well received so far.

Charles Max Wood [00:02:13]:
Nice. Well, thanks for the work.

JP Camara [00:02:16]:
Yeah. It's

Charles Max Wood [00:02:16]:
I I look forward to benefiting from it.

JP Camara [00:02:19]:
I yeah. I hope people can. Like, it's, I mean, the GVL instrumentation API, I'm using it kind of as, like, an education resource. That's kinda how I got into it. I was like, oh, I really wanna be able to show people, like, how threads coordinate, how the GVL plays into that. And Ivo, who I think you've had on the on the show a couple times, his GVL tracing gem creates, like, a UI layer for that. And so I'm using that as a basis for both using his gem and also creating some, like, animations of, like, how threads swap between each other and stuff like that. So that that'll be in my next blog post that hasn't come out yet, which is specifically digging into threads.

JP Camara [00:02:52]:
Yeah.

Charles Max Wood [00:02:54]:
Very cool. Yeah. Well and it's funny because, you know, you you said you've been around the Ruby community for, like, 12 years. I can't remember how long Valentino's been doing this. But, you know, it it's been it's been a while. Right? I've been doing this, what, 16, 17 years. And, you know, I I got into programming getting into Ruby. Right? So, anyway, it's it's interesting because I don't think anyone's given a coherent explanation even back when we were just kinda doing, like, DRB and threads, given a coherent explanation of, hey.

Charles Max Wood [00:03:27]:
This is what this is, and this is how it works. Right? Mhmm. People just complain about the gotchas. Right? I tried using threads

JP Camara [00:03:34]:
and, oh, so bad. Right? Right.

Charles Max Wood [00:03:38]:
So so this is this is very much needed. The other thing is is that, a lot of the naysayers on Ruby cite some of the issues with concurrency. And, you know, it's like it's like, no. It's it's there. Right? If you want to use it, you can use it. You know, that doesn't necessarily still make it the best tool for every single job, but it does a lot more than you give it credit for. So

JP Camara [00:04:06]:
Yeah. Absolutely. Yeah. I mean, in in terms of, yeah, how people perceive it. You know, there there is a just to go back to, like, how people used to think about it. I don't remember how many years ago it was, but, like, I don't know how well. And I like, it's a pretty well known book, but I don't know if you guys are familiar with Jesse Storimer, I think his last name is. He created the 'Working with' series of Ruby books.

JP Camara [00:04:28]:
And so, and so those books were Working with Unix Processes and Working with Ruby Threads. So, like, those were sort of my like, I I had a lot of experience in other languages with concurrency, but once I came to Ruby, like, I didn't do a lot with it at first. You know, I did, like, Rails stuff and everything. And then I ended up reading those books, and they they really gave me a much deeper insight. And I I kinda wanted this to be sort of like a this series is kinda like a successor to that and, like, in addition, like, just more context about it because it does a good job of kind of explaining, like, these are the things you can do with with threads. And and Ruby, between threads and processes in particular, gives you the majority of what you need, to do any to do anything, really. And and that's obviously demonstrated by companies handling, you know, millions, billions of requests, all that sort of stuff. Like, it's very capable, and you just have to know how to use it.

JP Camara [00:05:19]:
But, yeah, the great part with his books is they they're a little outdated, but, like, a lot of the core stuff is still there and great, and they're all free now too. Yeah. So you can get them on his site. But but, yeah, it's definitely the the concurrency the state of concurrency today in Ruby is is very different. Like, there's a lot more education. There's a lot more understanding of how to handle, like, threading issues and things like that. There's a lot more deep, like, embedding of how threading works into a lot of the different tools that we use, like Puma, Sidekiq, Solid Queue, all those things. And there's so many great abstractions.

JP Camara [00:05:50]:
A lot of the time, you don't even really need to use them. You can just use the abstractions. You know? And that's that's sort of what I advocate for while also just trying to educate, also how to use them and how they work and and that sort of thing.

Valentino Stoll [00:06:01]:
Can we take a step back?

JP Camara [00:06:02]:
Sure. Yeah. Absolutely.

Charles Max Wood [00:06:04]:
Like, I always I always get, like, I

Valentino Stoll [00:06:08]:
always get thrown by, like, okay. Like, what is parallelism? What is concurrency? Like, how they're related, how they're not. Sure. You know? And and how they, like, you know how how the handshake works with the system level stuff. Right? Like, each operating system has their own, like, implementation of, like, how to, you know, process things. Like, how is it all related? Like, can we just, like, identify what we're talking about here?

JP Camara [00:06:35]:
Sure. Yeah. And I'm not sure if you're teeing me up for that section of my most recent blog post that goes into what is concurrency. Perhaps you are. Well well played if you are. But, yeah. So, like, there's, it's a great point that people kinda throw around the words concurrency and parallelism, and and they think of them as sort of meeting the same things. They're they're a little bit different.

JP Camara [00:06:56]:
But, like, concurrency is essentially how do I break up a bunch of tasks so that they can kind of work independently? And I can and they can be isolated, and so I can kinda have, like, a good sense of, how to test them in isolation, how to run things in isolation, but coordinate them. And but it doesn't really matter, like, how they actually get run behind the scenes. So concurrency is kind of how you break up a bunch of tasks that may work together, but it may be that they actually just swap back and forth between each other. So for instance, like yeah. Please.

Charles Max Wood [00:07:29]:
I was gonna I was just gonna say, here we're gonna say for instance, and I was gonna say, well, then technically, people should be familiar with the idea of concurrency just with, like, your job queues on your Rails app. Right?

JP Camara [00:07:41]:
Absolutely. Job queues are a a perfect example of concurrency because especially in, you know, the the Ruby version that most of us use, you know, CRuby. Mhmm. Most, you know, Ruby code is constrained by what I don't wanna jump too far into this right now, but it's constrained by something so that only one piece of actual Ruby code can run at the same time.

Charles Max Wood [00:08:02]:
Right.

JP Camara [00:08:02]:
And so because of that, most of the jobs you write, you know, what they're really doing whenever they're running, like, CPU bound Ruby code is they're they're hopping between each other. It's like I've got job a, it runs for a little bit. Job b runs for a little bit. Job c runs for a little bit. And then where the the the real value of having threading in Ruby comes from is where you can actually parallelize things. So if we get to the parallel aspect of it, Ruby can't parallelize actual, like, running Ruby code, CPU bound Ruby code. But anything that, what's called, blocks, like, anything that blocks in your code can be effectively parallelized. So for instance, if job a, b, and c all make a query, you know, job a makes a query, it blocks. Job b makes a query, it blocks. Job c makes a query, it blocks.

JP Camara [00:08:46]:
They're all running those queries in parallel. And so Right. A job system is is absolutely the, like, perfect example of concurrency in Ruby where, like, you have these, independent tasks that all run and are swapping between each other and at certain key points can actually run directly in parallel. And so you can actually have 3 things running, simultaneously. Whereas concurrency as a as a basic principle, sometimes they run at the same time, sometimes they hop back and forth between each other. Concurrency is kind of that abstraction that lets you not care about it. I write these independent tasks. Operating system, Ruby, runtime, handle this for me.

JP Camara [00:09:23]:
Make it parallel sometimes. Otherwise, you know, just hop between them, share resources, that sort of thing.
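JP's point that blocking calls overlap even under the GVL can be sketched with plain stdlib threads. This is an illustrative sketch, not code from the episode; `sleep` stands in for a query or HTTP request:

```ruby
require "benchmark"

# Three "jobs" whose only work is a blocking call. While one thread
# waits, CRuby releases the GVL, so the waits overlap instead of
# running back to back.
def run_jobs_in_threads(job_count: 3, wait: 0.2)
  Benchmark.realtime do
    threads = job_count.times.map do
      Thread.new { sleep wait } # a blocking call releases the GVL
    end
    threads.each(&:join)
  end
end

elapsed = run_jobs_in_threads
# Total wall time is close to one wait (~0.2s), not the sum (~0.6s).
```

If the three sleeps ran one after another, the total would be about 0.6 seconds; because the blocking overlaps, it stays near 0.2.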

Charles Max Wood [00:09:28]:
Right.

JP Camara [00:09:29]:
Does that does that kinda help define it a little bit, Valentino, or or do you want more on that?

Valentino Stoll [00:09:35]:
Yeah. No. That's great. I mean, I I loved how how your latest articles kind of summarize concurrency as being an orchestrator, and, you know, parallelism is more of just, like, the things running at the same exact time. It doesn't really matter, like, you know, there's there's nothing waiting for it.

JP Camara [00:09:56]:
Yeah. So Right. Yeah.

Charles Max Wood [00:09:58]:
I kinda wanna jump in here, though, because, some people are gonna say, okay. Well, but why why do I care, and why do I need to know about it? Right? Because, the the why do I care might be that it's faster or more efficient or what whatever myriad of other reasons. Right? But then why do I even need to know about it, especially if there is an orchestrator behind the scenes that does it all for me?

JP Camara [00:10:22]:
Yeah. That's a good question. I think there's a couple layers to why you might wanna care about it. So the start of the series I wrote so, you know, the the piece we're kinda coming in at for anybody listening to this episode is, you know, we came out at the point of the the blog post Ruby methods are colorless, which talks about how like, what the different primitives of Ruby are for concurrency and parallelism. Because some things in Ruby can actually run in parallel, and we can talk about

Charles Max Wood [00:10:49]:
that. What is what do you

Valentino Stoll [00:10:50]:
mean by colors? Like, I I was curious about this. I I love your explanation in your post, but,

Charles Max Wood [00:10:57]:
you wanna clear that up?

JP Camara [00:10:59]:
Yeah. Absolutely. That's a that's a great point. So colorless programming was actually a concept I had no idea about a year ago either. That's kind of, like, part of what got me going in this whole thing is somebody just literally I don't know if you guys have heard of ThePrimeagen. He's just, like, kind of a JavaScript personality. And he he talked about, like, colorless programming in Go. And I'm like, what? I was like, I'd never heard that term before.

JP Camara [00:11:20]:
What does colorless mean? Same as kind of you're asking right now. And so I looked into it, and what really colorless means is that there's some languages that, and and JavaScript is the example that we use here. But some languages, when you wanna do things, concurrently or in parallel or whatever, you have to explicitly say, like, hey. This piece of code, I want you to do this asynchronous thing now. And the most common way you'll find in languages is you literally say, like, here's an async piece of code. When I go to run it, I have to tell you, I would like you to await this thing to finish. Right. And and so And that's a JavaScript construct.

Charles Max Wood [00:11:56]:
Yeah. Yeah. My word's not working. But, yeah, it's it they they have keywords async await.

JP Camara [00:12:03]:
Exactly. Yeah. JavaScript has async await. Rust also actually has async await. Python, a bunch of other languages. There's, like, kind of a fork in some languages have chosen to go async await, and some languages have have chosen to go a different way. And so when you when you have async await, it effectively, like, I describe it as kind of, like, infecting your code in a sense. It's like every piece of code that wants to use this async await syntax has to buy into it.

JP Camara [00:12:27]:
You're always, like, it's it's these layers of, like, I'm calling async await. Okay. The thing calling that has to do async await. The thing calling that. And the moment you don't do that, you kinda get into this, like, clunky syntax, promises, and and all that stuff. And so the cool thing about colorless languages and and Ruby and and some other languages as well, is that the runtime takes care of of that asynchronous behavior for you. You don't have to worry about saying, like, I'm about to do this. I wanna do this asynchronous thing so I have to know to call await, async, all those types of things.

JP Camara [00:12:58]:
In Ruby, basically, like, you say, like, I wanna call this thing. I wanna let's say, an HTTP request. And so I make my HTTP request, and Ruby goes like, oh, hey. This is a blocking thing. I'm just gonna tell your thread or your fiber or whatever, go to sleep for now. I'll take care of this for you behind the scenes. And if there's any other threads or fibers running, you know, like, if our job example, if there's another job that needs to do something, hey. You can do something now.

JP Camara [00:13:20]:
And so I don't have as a programmer, I don't have to worry about that. I don't have to have that, infect my code. And the reason they call it colorless, is from an article from 2015 where he referred to, like, you have red and blue functions. So, hopefully, hopefully, I don't get it wrong. But red, I think, is the asynchronous function and blue is your sync your synchronous functions. And so, like, you you end up having this, like, a coloring of your code. Every async function is this red one. Every every non async is this blue one.

JP Camara [00:13:45]:
And the moment you wanna do an async thing, you're you have to make your function red. And if that function calling it wants to do an async thing, you make that function red. And so you you have this color to your program and this, this distinct syntax. And in Ruby, you don't have to have that color and and some other languages as well, like, Java actually, in fact, is also a colorless language. So so, yeah, that's that's the colorless concept.
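The colorless idea JP describes can be sketched in a few lines of Ruby. This is an illustrative example (not from his posts); `sleep` stands in for blocking I/O:

```ruby
# A "colorless" method: nothing in its signature says whether it will
# run synchronously or concurrently. No async/await keywords anywhere.
def slow_double(n)
  sleep 0.05 # stands in for an HTTP request or database query
  n * 2
end

# Called directly, like any method:
direct = slow_double(21)

# Called concurrently -- the same method, completely unchanged:
threaded = [1, 2, 3].map { |n| Thread.new { slow_double(n) } }.map(&:value)
```

In an async/await language, the concurrent version would force `slow_double` (and every caller up the chain) to change color; in Ruby, the runtime handles the blocking either way.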

Charles Max Wood [00:14:10]:
Gotcha. Yeah. So so let's rope this back into okay. So I don't have to I don't have to color my methods. So why do I care?

JP Camara [00:14:19]:
Yeah. So so the series kind of goes into a couple aspects of it. The the first part of it is really, I it's 2 parts, and it's called your Ruby programs are multithreaded. And the thing that I want, the point of me writing those articles was part of the series, but also just part of my own interaction with different gems and things over the years that have had threading bugs in them. Because I think it's easy to forget that in almost every programming language you use, behind the scenes, there there is concurrency happening whether you want it or not. Right? If you're using Puma, you've got threads running. If you're using Sidekiq or Solid Queue, you've got threads running. Even if you're using, like, a process based server, there's there's concurrency elements that are running there.

JP Camara [00:14:59]:
And so being aware of them helps you write safer code. So just as a baseline, just if you want to, like, create code that is safe to run, there's certain things that you need to identify. And and to identify those, like so for instance, like, I go into, you know, like, global variables. Right? It seems so obvious, but there are still there's legitimately still times where I interact with gems, and I I've submitted some some fixes and stuff for certain gems where there's just global variables being used. And they work fine as long as your threads don't end up swapping between each other. And so you and especially in CRuby, like Mhmm. A lot of times, they won't swap between each other, and so things seem to be okay. And then you just have these random errors, or you have these random data corruptions, or you have, like, users' data get mixed together.

JP Camara [00:15:41]:
And so part of the reason I think people should care and should have an understanding about how concurrency works is just for their own to just to keep your code, like, safe and understand the principles to keep your code safe. It's actually not super hard to do, but there's certain key things that you should understand to keep your code, consistent. And on top of that, I know I've kinda gone long winded on that a little bit. But the other reason to care about concurrency is it helps you scale your applications. Right?
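The shared-state hazard JP describes can be made concrete with a counter. A minimal sketch (not from his articles) showing the stdlib `Mutex` fix for a read-modify-write race:

```ruby
# Ten threads each bump a shared counter 1,000 times. `counter += 1`
# is a read-modify-write, so without the mutex, updates can be lost
# when threads swap between the read and the write -- and the GVL
# often hides the bug by making those swaps rare.
counter = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)
# With the mutex, counter is always exactly 10_000.
```

Delete the `lock.synchronize` wrapper and the total can intermittently come up short, which is exactly the "works fine until it doesn't" failure mode described above.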

Valentino Stoll [00:16:05]:
Right.

JP Camara [00:16:05]:
At at a smaller scale with applications, it probably doesn't matter too much if you really understand it. But as you, you know, as you get into, like, tens of thousands, hundreds of thousands, millions of jobs and requests and things, how you tune and how you organize your code and and what you call and what order you call it in suddenly starts being really valuable. And understanding, like, oh, if I know how a thread works or a fiber works and I run this code this way Mhmm. I can scale my application better. Right. And so so, yeah, that's that's kind of and, you know, the third part for me is just I just find it very interesting. So if you're somebody who just likes reading about interesting things and you like Ruby a lot, you know, I find it interesting in that regard. So so yeah.

Charles Max Wood [00:16:43]:
I I love the curiosity, by the way. I I feel the same way. I've been, I I started getting into after we talked to Obi Fernandez, on the AI stuff, and I just Mhmm. I can't I can't put it down. I just I I can't make myself put it away. So Sure.

JP Camara [00:16:59]:
And that's one of the the the most enjoyable things. Yes. Right. Obsessed with a a piece of programming or a particular technique or whatever. It's just yeah. It's the most fun.

Charles Max Wood [00:17:09]:
Right. But going back to your other point, you know, whether it's Puma or Falcon or something else, you know, at your web server or, you know, depending on how your job queuing works, you know, whether it's using threads or fibers or processes or something else. And and they tend to use a blend of them, actually,

JP Camara [00:17:28]:
a lot of them. Right.

Charles Max Wood [00:17:30]:
So so understanding how they orchestrate some of that, yeah, you you can get more horsepower out of your machine.

JP Camara [00:17:36]:
Absolutely. Yeah. And and sometimes, you know, like, there's there's great abstractions, like, Falcon or Puma or or Sidekiq. But even your own code sometimes, you know, you might be like, oh, like, I've got this synchronous piece of code, but I need to, like, call a few different APIs. Right? And if they're not dependent on each other and you can call them in parallel, it's great to know how to do that. Like, what's the best tool to do that? What are the gotchas? And, you know, I'll I'll get into that, like, later in the series as well. But, like, you know, largely, I think for, like, your own code that you're writing, I I tend to say, like, you know, don't thread if you don't have to because you can, you you can shoot yourself in the foot if you do. But if you're going to do something that you wanna have some amount of, like, parallelism for IO or blocking operations, you know, I I tend to recommend, like, async because I I just think it's a lot more deterministic.

JP Camara [00:18:20]:
So that's like fiber scheduler and that sort of stuff. Right. It's more deterministic. I think it has better tooling around it. Yeah. And it just there's less of a chance of of screwing something up because it operates in a single thread and and all that kind of stuff. But, yeah. I feel like every time I talk about something, I just sort of trail off at the end.

JP Camara [00:18:41]:
But here we are.
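The determinism JP is pointing at can be seen with plain stdlib fibers, which the async gem's scheduler builds on. A toy sketch (illustrative, not from the episode): a fiber only gives up control at an explicit point, whereas a thread can be preempted anywhere.

```ruby
# A fiber yields control only where it says so, making the
# interleaving below fixed -- no race is possible.
log = []
worker = Fiber.new do
  log << :fiber_step_1
  Fiber.yield          # the one and only swap point
  log << :fiber_step_2
end

worker.resume   # runs up to Fiber.yield
log << :main
worker.resume   # resumes after the yield
# log is always [:fiber_step_1, :main, :fiber_step_2]
```

With two threads appending to `log` instead, the order could differ from run to run; with fibers, the swap points are explicit and the outcome is the same every time.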

Charles Max Wood [00:18:44]:
And these are my No.

JP Camara [00:18:45]:
I I'm Yeah.

Valentino Stoll [00:18:46]:
I'm with you. I've been trying to take on more and more of, like, doing more things at once, and I'm I'm finding I I shoot myself in the foot, often. Yeah. Because you don't think about, like, okay. Well, you're performing all these things at once, and you're blocking Now you're waiting for each of

Charles Max Wood [00:19:04]:
those Mhmm.

Valentino Stoll [00:19:06]:
Instead of just you're waiting for 1, and it's more you know, it's less deterministic the more things that you start to do in parallel.

JP Camara [00:19:12]:
Absolutely.

Valentino Stoll [00:19:13]:
Because more things can go wrong. So I'm I'm curious, like, you know, how do you go about deciding? Like, before you even start, like, diving in, like, how are you, like, okay. Well, this makes sense, like, to run, concurrently, like, using threads or fibers? Like, what are your, like, decision points for these kinds of, of tasks?

JP Camara [00:19:33]:
Yeah. That's that's a great question. I mean, I think as a base decision, you know and and I had a little bit of a, like, Twitter interaction about, like, a a recent tweet about somebody suggesting something and and realizing it was actually not thread safe. And and going back and forth a little bit with, like, Samuel Williams and and a couple other people about, you know, it's actually best to start off by saying, like, maybe just don't. Like, you know, the baseline you should always start with is let's use these, like, battle hardened great tools that are available in the community. You know, like a a Sidekiq, Solid Queue, GoodJob sort of thing for my background jobs. Like a Falcon or a Puma for my server. And start off by tuning those.

JP Camara [00:20:15]:
Right? That's the first place that I would ever start off tuning something is saying, okay. Like, I I wanna scale up more. What kind of traffic do I get? Do I have do should I have more threads? Should I have more processes? That sort of thing. And so, like, the first layer of concurrency for me is always just taking advantage of the tools themselves. And then deeper than that, I think, is figuring out, well, okay, within these tools, how can I better split up my work? And so, like, what what I'll kinda get to is really, like, the lowest level of concurrency for me is the point I get to where I'm like, okay. Now I'm actually gonna use a thread or async. Because I'll I'll use that as my last resort. Even though, like, I have a lot of familiarity with it and I and I do use them, it's it's kind of like, I'll use that as the last piece.

JP Camara [00:20:58]:
Because the best thing you can do, especially if you wanna split stuff into concurrent pieces, is to start, like, utilizing your job system, really. Kinda like like Charles said, earlier on. Or you you go by Chuck. Right? Sorry.

Charles Max Wood [00:21:10]:
Yeah. No. It's fine. Charles Bend. Yeah.

JP Camara [00:21:12]:
I feel like I've heard Chuck for years, and for some reason, I just looked at my name and I'm like, mister mister Charles Maxwell, I'll refer to you as. But, basically, yeah, you say, like, okay. Well, now I wanna I wanna increase my concurrency. So, I might have, like, particular operations. I'll have more threads like a Puma or or a a Falcon. But maybe I'd now then go, like, okay. Well, now I wanna I wanna increase my concurrency for my jobs. And so you start to, like, think about, okay.

JP Camara [00:21:37]:
How do I coordinate these jobs? How do I split them into smaller pieces of work? And so so that's that's kind of the next layer for me. The first one is I just kinda tweak things about servers, make sure, like, my throughput is good, and then I might just start to say, like, okay, this particular job is kinda slow. How can I break that up? And so to use Sidekiq as an example, like, Sidekiq actually just embedded, I think, iteration support into, their gem. So, like, that's one aspect where you kind of can, like, split up your work that way. But if you actually wanted to run it concurrently, you know, Sidekiq has batches, for instance. GoodJob has batches. I'm actually trying to add batch support to Solid Queue as well because I think it's a really valuable feature. So I I have a PR submitted for that.

JP Camara [00:22:16]:
Yeah. I I hope it gets through soon. So, initially, Rosa, she was really kind about it. She was like, this is awesome, but we're not gonna use it right now. So, like, if other people wanna try it and use it, like, we'll we'll kind of evaluate it from there. So, that I think

Charles Max Wood [00:22:29]:
I do, and I want it.

JP Camara [00:22:31]:
Yeah. Like, batches are awesome because, you know, I I've used them in Sidekiq, but it is a paid feature. And so Solid Queue having batches, I think, would be really fabulous for people because then you can say, oh, now I wanna break up this piece of work, but I still wanna do it in this, like, in these lanes, these nice constraints of this framework. So if I break it up and I put it in a batch and I say, hey, at the end of this batch, do this thing for me, then I can safely coordinate my work, and I can split it up into small pieces. You know? And so so that's, like, the next layer for me. And then the the layer after that, when you've kind of exhausted your tools and you're, like, I really need to do this thing in line. It might be in a job. It might be in a web request, is to start saying, okay.
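The batch idea being described here (Sidekiq Pro batches, GoodJob batches, the Solid Queue PR) boils down to: fan work out into independent pieces, and fire one completion callback only after every piece finishes. A toy stdlib sketch of that shape; real job systems persist this bookkeeping in Redis or the database, and all names here are illustrative:

```ruby
# Run each item's work in its own thread, collect the results, and
# invoke the completion callback exactly once, after all pieces finish.
def run_batch(items, on_complete:)
  results = Queue.new
  items.each do |item|
    Thread.new { results << yield(item) }
  end
  # Popping once per item blocks until every piece has reported in.
  collected = items.size.times.map { results.pop }
  on_complete.call(collected)
  collected
end

done = []
# Results arrive in completion order, so the callback sorts them here.
run_batch([1, 2, 3], on_complete: ->(rs) { done << rs.sort }) { |n| n * 10 }
```

The "at the end of this batch, do this thing" hook from the conversation corresponds to the `on_complete` callback in this sketch.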

JP Camara [00:23:05]:
I wanna use a thread or a fiber or something like that. And that's when Mhmm. Generally, I would bring in, like I said, like, async for myself.

Charles Max Wood [00:23:12]:
Right.

JP Camara [00:23:12]:
Because I would bring in async and say, like, I wanna make these database connections. I wanna do these network requests. I wanna do file IO or even, like, encryption or something. You know, if you, like, kinda shell out, you can you can parallelize that as well. And so I I use async to coordinate that because, for one thing, async operates in a single thread. So I don't have to worry about, like, these weird semantics of threads and, like, caching at the OS level and all this weird stuff. And then, and it it just has better abstractions. Like, it's a it's a full featured library versus, like, threads are a very primitive thing.

JP Camara [00:23:45]:
If I was gonna use threads, I would use, like, concurrent-ruby or something like that. But I would kind of I would say, like, probably go with async. The point you would maybe need to go to threads, we can talk about later. But, for the most part, for your own code, you can usually use async. But that's kind of my my mental model for it. It's, like, use the tools at the highest level, start splitting up within those tools, and then if I really need to and and sometimes you do, and I've I've done plenty of this, is split your your code using a library like async where you just have more control. And and, what what I when I say determinism, what I just mean is, like, there's very clear, like, clear cut points in your async code where things will swap out, and you can you can know what those are very clearly. Whereas in threads, it's kinda like it's a free for all.

JP Camara [00:24:25]:
Like, any piece of your code in a thread could technically swap out at any time, and there's all sorts of of gotchas that can come along with that. Yep. Yeah. So so that's how I break things down myself.

Valentino Stoll [00:24:36]:
Do you have any, like, rules of thumb for, like, how many things you do within your async calls? Like, do you try and, like, keep, like, the responsibility low? Like, how how does how do you think through that?

JP Camara [00:24:48]:
Yeah, it's another good question. I think it partially just depends, to your point about deciding how much you wanna do, part of it is complexity, right? And also it's just the constraints of what you're gonna interact with. Like, if I'm gonna split things apart to make, say, Redis calls, there might be an upper limit on how many connections you can even make at once. Or API calls, there may be an upper limit on how many I can even make at once. But a lot of times for those types of things, you can scale them up pretty high, especially when it comes to async, because fibers are so lightweight and they use such an efficient model behind the scenes. I've written code that's opening up connections to thousands of things before, and it's been fine.

JP Camara [00:25:36]:
But I think it also probably depends on how heterogeneous those tasks are. Right? If they're all different tasks doing different weird things, you probably wanna start coordinating them, splitting them maybe into multiple jobs that feed into each other or something like that. But once you're in it and doing the async code, you can scale it up pretty far. And so the constraints start to become less about your own code and more about: what can my database server handle? What can the API I'm calling handle? What are their throttles? Those types of things. So that's when you start coordinating. And async does come with some tools where you can say, hey, I'm gonna hand off these tasks, and I'll create all the tasks I need, but they're gonna be bound by what's called a barrier. Right? You can say there's a barrier of 5.

JP Camara [00:26:22]:
And so at once, only 5 will run, and then the next 5 will run, and the next 5, that sort of thing. So within those, the limit is usually imposed for me more by the external resources. And then, to follow up on that, you also don't wanna overload your memory, right? You're not gonna make a thousand requests, get a thousand responses back, load them all into memory, and now your server crashes. You have to be sensible about that. And actually, I have a chapter, or a blog post later on, I think I call it "Concurrent Streaming Ruby", where I wanna go into approaches for doing things in a more streaming type of way, which I think is valuable, especially when you try to break down tasks and keep your memory low and all that stuff.
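(Editor's aside: the "only 5 run at once" behavior JP describes maps to Async::Semaphore in the async gem; Async::Barrier is its companion for waiting on a whole group of tasks. The concept can be sketched in plain stdlib threads with a token bucket, no gems required.)

```ruby
# Editor's sketch: cap concurrency at 5 using a SizedQueue of tokens.
# In the async gem, Async::Semaphore.new(5) plays this role for fiber tasks.
LIMIT = 5
tokens = SizedQueue.new(LIMIT)
LIMIT.times { tokens << :slot }

peak = 0
active = 0
lock = Mutex.new
done = Queue.new

threads = 20.times.map do |i|
  Thread.new do
    tokens.pop                      # block until one of the 5 slots frees up
    lock.synchronize { active += 1; peak = [peak, active].max }
    sleep 0.005                     # stand-in for an API or Redis call
    lock.synchronize { active -= 1 }
    tokens << :slot                 # release the slot for the next task
    done << i
  end
end
threads.each(&:join)

puts "ran #{done.size} tasks, never more than #{peak} at once"
```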

JP Camara [00:27:07]:
So that's your question.

Valentino Stoll [00:27:11]:
Yeah. I think that you make a lot of great points there.

Charles Max Wood [00:27:14]:
It makes me think, smiling inside, at somebody DDoSing their own Postgres, maybe.

JP Camara [00:27:20]:
Oh, yeah. You can definitely do it. You wanna have some kind of proxy or something in place. But no matter what, you can definitely DDoS yourself if you're not careful. So, yeah.

Valentino Stoll [00:27:32]:
That's fine. Yeah. I mean, I think that's a great point in general with concurrency: it's very easy to eat up resources if you start to grow things too big. But it does bring up

Charles Max Wood [00:27:43]:
a good point of, like,

Valentino Stoll [00:27:46]:
or I guess my next topic of discussion, which is data sharing. Right? And I like that you mentioned using Sidekiq before you get into this space, because I think it does create that separation of data layering that's important when working with all of this concurrency stuff, in that you give it a task, right, with the data it needs to operate, and then it tries to work within it. But then there's still this weird blurred line, I feel like, where

Charles Max Wood [00:28:19]:
okay.

Valentino Stoll [00:28:19]:
Well, let's say you drop an ID to a Sidekiq task, then you use a shared database connection pool to make calls. How does that work in the async space, and how do you try and reason about it to make things not too crazy? Right? Like

JP Camara [00:28:38]:
Yeah, that's it. It's another great point. And I will use the Sidekiq example specifically. It's kinda like, the lower you get, at every level you have some amount of shared resources, which is the point you're making. Right? When I'm at the lowest level, I've got a bunch of threads, and they're all sharing this piece of memory. So you're like, oh, it's so non-deterministic, and I can override other people and stuff. That's no good. But to your point, it's, in a way, exactly the same one level up.

JP Camara [00:29:10]:
I've got a Sidekiq job that gets an ID from the database. What is a database? A database is a shared resource. Yep. And actually, I think in the second part of the series, I use Redis as the example there, where I talk about a few race conditions. And that's kinda what you're alluding to right there: when you're doing something in a thread, you could actually corrupt memory. If you had actual threads operating at the same time, it's a little harder to do in CRuby, but especially if you use something like JRuby or TruffleRuby, where they're purely parallel, you can really corrupt things. But you can definitely do that in CRuby too.

JP Camara [00:29:51]:
Mhmm. And so there's that aspect of it. But at a database level, actually corrupting it is kinda difficult. But you can still have things like race conditions, where I go, hey, does this thing exist in the database yet? No, it doesn't. So I'm gonna add it.

JP Camara [00:30:06]:
And then another thread is like, hey, does this thing exist in the database yet? No. I'm gonna add it. And so you both just slam into each other. And if you don't have uniqueness constraints for all these things, you're creating duplicate values, you're overriding values. And so there are these race conditions called check-then-act, which is: I check if it exists, and then I act. And if you don't coordinate those, you can override each other's data.

JP Camara [00:30:25]:
Or read-modify-write, where I read a piece of data, I modify it, I write it back. And if a bunch of us are doing that at once, we're all gonna be writing junk to the database. So it's a great point that even in Sidekiq, if you really wanna be sure that I'm operating on this particular ID and it's safe, you probably want to utilize something like a lock, like a database lock. Right? It's sort of like the concept of turtles all the way down. It's shared resources all the way down, where you have to figure out ways to safely share them. And a lot of times, this probably comes up more at scale, I think. And scale is relative. Right? It could be 10,000 jobs.
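(Editor's aside: the read-modify-write hazard and its fix, in miniature. A Mutex here plays the role that a row lock, SELECT ... FOR UPDATE, or record.with_lock in Rails, plays at the database level: hold the lock across all three steps so no one can interleave.)

```ruby
# Editor's sketch: read-modify-write made safe by holding a lock across
# all three steps. Without the Mutex, two threads could both read the same
# value and one increment would be lost -- the race JP describes.
balance = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    100.times do
      lock.synchronize do
        current = balance    # read
        current += 1         # modify
        balance = current    # write -- still inside the lock
      end
    end
  end
end
threads.each(&:join)

puts balance  # => 1000, every update kept
```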

JP Camara [00:31:06]:
It doesn't have to be a million jobs. But a lot of times, when you're at a smaller level, locking all the time is not totally necessary. But if you have an ID coming through and you have a really critical piece of data that you wanna update, you probably should acquire a lock on that row. And then the first thing you do after acquiring that lock is go, hey, has this been updated? Is this condition already met? And if it is, you just drop out of the job. Right? Because the first thing a lock does is it acquires and then reads the row. So you know I have the freshest data, and nobody else can access it. And that's one of the simpler ways of coordinating your jobs, to say, while I'm doing this thing, let me be certain.

JP Camara [00:31:42]:
I'm gonna lock this row, I'm gonna do my operation, and then I'm gonna move on from there. So, yeah, I feel like I got a little bit lost answering the question. But that's kinda how we handle it at the Sidekiq level. Is there a follow-up to that? Sorry.

Charles Max Wood [00:31:58]:
So many.

JP Camara [00:31:59]:
I

Charles Max Wood [00:31:59]:
mean, I feel like this is this

Valentino Stoll [00:32:01]:
is definitely one of, I think, the trickiest parts of, okay, you're foraying into the concurrency realm, and then you realize after the fact that it wasn't worth it.

JP Camara [00:32:14]:
Sure. Yeah.

Valentino Stoll [00:32:15]:
In a lot of ways. Right?

Charles Max Wood [00:32:17]:
I guess it comes back

Valentino Stoll [00:32:18]:
to, like, okay: understand the concepts and know what you're doing before you get into it. But it's hard to also get into it at the same time. And I think understanding what is shared and what isn't, and how the messages pass to each other through that orchestration layer, is important. So how do you go about reasoning about that? Do you have a very rigid template for what kinds of data get shared, or don't?

JP Camara [00:32:57]:
If we were to utilize the database-and-Sidekiq kind of example again, maybe.

Valentino Stoll [00:33:02]:
Sure. Yeah. Or even the file system, right? That's a common one.

JP Camara [00:33:05]:
Yeah. Sure. That's a good point. I'll take a step back even, actually. As I'm thinking about it more, I think it's something we take for granted: when it's in a job, there is a different set of constraints, but it's actually a similar set of constraints even in a web process. Right? A web process, even if it isn't threaded, even if it's just processes, like you're on Unicorn or Pitchfork or something like that, purely processes, no threads. Mhmm. You can have five different users submit the same update, or submit different updates to the same resource, at the same time.

JP Camara [00:33:41]:
And how do you coordinate that? So there's a "the juice isn't worth the squeeze" kind of thing, where you get really deep into concurrency, you're always kind of thinking about it a little bit, and how much does it matter to you. And I think at a base level, a lot of times, most things are probably gonna sort themselves out. It's like, last one wins, whatever. How much can you actually guarantee? And it's really about the importance of that guarantee on the data. For a lot of data, hey, five people submitted updates. Last one won.

JP Camara [00:34:14]:
I don't know. You got five updates. If you have a version system, you can look at the versions that came through. If somebody writes in, like, hey, why did this happen? It's, oh, well, I can see my logs, or I can see versions, and say, you had five people submit it, and the last one won. That's just how it works. And so then...

Charles Max Wood [00:34:28]:
Well, and how often does it happen?

JP Camara [00:34:30]:
Exactly. Not very often. Yeah. And the same can be said about your jobs too, I think. A lot of times, really, how often is it that 15 different updates in jobs happen on the same resource, or the same job gets enqueued a bunch of times? And there are ways you can mitigate that too. There are uniqueness gems and stuff like that for your jobs. And so the way I look at how to approach concurrency on your servers and your job servers is: the simplest approach wins. Unless you're gonna corrupt something, the last one wins, and there it is. But then when you have a particular resource, it really just comes down to how important is it that I get this resource exactly right.

JP Camara [00:35:06]:
And that's when I'll start to apply a lock, or a uniqueness gem, or a batch process, or those sorts of things. So I always try to come at it from the angle of the simplest solution first, and then let the requirements, how dire it is that my data is always exactly right and properly coordinated, dictate. That's sort of the framework for me: the simple solution wins, once we evaluate the requirements. And if there's a requirement that you have to be rock solid, that five people can update this and it complains to four of them, then you do that. You know, and

Charles Max Wood [00:35:42]:
you can add that lock there or that sort

JP Camara [00:35:44]:
of thing. Because, honestly, I think most systems probably have these little oddball inconsistent pieces of data that come up, and you just don't realize it, and it just doesn't matter most of the time. You know? But occasionally it does. Or occasionally it's really important, depending on what industry you're in or what piece of data it is. So, yeah, that's sort of my loose framework, I guess. Simplicity first, and then based on the requirements and the importance of the data, you start to apply locks and other tools.

Valentino Stoll [00:36:16]:
Are there some tasks where you think right away, like, I'm just gonna async this because it'll just rip through all these

Charles Max Wood [00:36:22]:
I was gonna ask that.

JP Camara [00:36:24]:
Yeah. A lot of that, really, I try to evaluate the upfront scale of a task, and that will sort of help dictate how much I wanna do. So, for instance, let's say you're allowing users to upload these huge files or something, and you need to do some processing on them. I'm trying to think of a good example. Let's say people wanna bulk import data into your system, so they can upload a bunch of CSVs or Excel files or whatever. And depending on what constraints you put on it, that's a pretty parallelizable task.

JP Camara [00:37:04]:
And so my first thought for something like that would probably go to, okay, what's my strategy for uploading these on the front end? And then once it gets to the back end, I know that unless I constrain it, this user could take down my server. They send me a 500 megabyte file, I load the whole thing into memory, I try to operate through it, and, crap, that process just crashed. Right? Well, okay.

JP Camara [00:37:25]:
So in that case, what I probably wanna do is offload it to a job, and I probably wanna chunk up that file upfront. The first step of my job, using a batch, might say, okay, let me split this into 5 pieces and then run those in parallel. And that's a really nicely parallelizable task. So there are definitely things like that where the size of the data indicates to me I need to do some async kind of processing and split it up right away. But if you've got a CRUD form and you're submitting 5 fields, whatever, I don't need to do anything special there. But even within that, you can get challenges too.
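(Editor's aside: that first chunking step can be sketched in a few lines. Split the parsed rows into fixed-size batches and fan each one out as its own background job. ImportChunkJob is a hypothetical worker name, not anything from the episode.)

```ruby
# Editor's sketch: chunk a big import upfront, then enqueue one job per chunk.
# ImportChunkJob is illustrative -- any Sidekiq / Active Job class would do.
rows = (1..4_800).to_a                 # stand-in for parsed CSV rows
chunks = rows.each_slice(1_000).to_a   # 1,000 rows per background job

puts "#{chunks.size} jobs to run in parallel"  # => "5 jobs to run in parallel"
# chunks.each { |chunk| ImportChunkJob.perform_async(chunk) }  # hypothetical enqueue
```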

JP Camara [00:38:00]:
Right? Maybe you have this really complicated form. And in that case, you might not do async, but you might try to do an insert_all in Rails or something like that. It's like, oh yeah, I could just insert each record, but I've got 50 different records, and now it's gonna take 3 seconds to do individual inserts, so I would do an insert_all. But, yeah, there are definitely tasks that get presented to you, and the size of your application dictates it to you, and how many users you have. Or you're like, yeah, if I had 5 users, this would scale pretty well if I just did it all here.

JP Camara [00:38:28]:
But now I've got 5,000 users. And so if a bunch of them do this at once, I really need to parallelize this, and I can do it pretty easily using async or Sidekiq or whatever. So there are definitely things where, up front, as much as I say simplicity, I'm like, I need to do some streaming, I need to split this into multiple tasks, I wanna use threads and processes and all those things to my benefit, so that instead of taking 15 minutes to process this 500 megabyte file, I do it in 5 or 10 seconds. Right? And you can do that kind of stuff by splitting things out. And it's such a satisfying thing too.

JP Camara [00:39:01]:
Right? I don't know about you guys, but I think performance is one of the most satisfying things to improve, and parallelization can often get you that, at a cost of some complexity. But it's a good feeling to take something from 5 hours down to 2 minutes or something. Right? Yep. Yeah. So that's how I

Charles Max Wood [00:39:21]:
I was gonna say, when I shave off a couple milliseconds, I don't always care. But, yeah, the 5 hours to 2 minutes, it's like, I'm a freaking badass.

JP Camara [00:39:30]:
I'm just Oh, yeah.

Valentino Stoll [00:39:31]:
That's the

JP Camara [00:39:32]:
best feeling. Oh my gosh. Yeah. But you're right. Milliseconds. There have been points in my life where I've cared about milliseconds, but mostly just as a pride thing at that point. You're like, why is this 50 milliseconds? It should be 10 or something.

JP Camara [00:39:46]:
It's like, it does not matter day to day, those types of things, unless you're, you know, a Shopify or something. Yeah, shaving off 40 milliseconds for Shopify probably makes a difference for some things they're doing. But for most of us, it doesn't.

Charles Max Wood [00:39:59]:
But even then, it's not the 40 milliseconds. It's, hey, we saved ourselves $60,000 in compute across all of

JP Camara [00:40:07]:
our requests.

Charles Max Wood [00:40:08]:
Right? That's the big number that gives you the badass feeling.

JP Camara [00:40:11]:
You know what I mean? That's a good point. Yeah. You deploy a new version of Ruby with YJIT, and you get, like, 20% faster response times. If you're someone like Shopify or GitHub, it's like, well, I just literally saved us millions of dollars.

Charles Max Wood [00:40:22]:
Like,

JP Camara [00:40:23]:
that's incredible. Right. That's a good point. But, yeah, those

Charles Max Wood [00:40:25]:
those things.

Valentino Stoll [00:40:27]:
I've wondered, with the push for serverless, you would think that your compute time would be the most important thing at that point, because you're paying for whatever time it takes for the thing to run. Right? Right.

Charles Max Wood [00:40:43]:
But I think it depends. We're off on a tangent, but I think it depends on what the critical feature is. Right? And so if the critical feature is something other than compute time, right, it's accuracy, or some other aspect of what you're doing that affects the user, and the user will spend more with you, maybe... anyway. It's not always one metric, and it's not always one metric across the same problem set. So

JP Camara [00:41:13]:
Right. And, I mean, when you spread things out, even if the added compute time may seem higher, a lot of times you do end up getting things done faster than the sequential version. But I personally haven't done a ton of full serverless stuff, so it's not something I think about all the time. And the cold starts scare me so much. So, yeah.

Charles Max Wood [00:41:33]:
I mean, that's another version of parallelization, though, the serverless thing. Right? You're just pushing it out to the cloud.

Valentino Stoll [00:41:45]:
That's true.

JP Camara [00:41:47]:
Yeah, it's kind of like, again, the layers of concurrency go beyond what a language offers you. You've got the lowest level, then you've got processes, then you've got how you coordinate your servers. And, yeah, serverless, technically, is, quote, unquote, infinitely scalable. Right? Everything's just popping up anytime you need it. And so there are just more and more layers of what that concurrency means. But I think for Ruby specifically, you benefit a lot more from tuning at the Ruby level before you get up to higher levels. Because if you don't do that, then you might use a lot more memory, for instance. If you don't take advantage of processes and copy-on-write and stuff like that, then every process is gonna take the same amount of memory.

JP Camara [00:42:29]:
You can't have any benefit from forking. I think you talked to Jean Boussier about Pitchfork on one podcast, maybe. And Pitchfork has reforking, and you get really optimized memory there. And if you try to do things from a purely serverless perspective, everything's pretty much taking the same amount

Charles Max Wood [00:42:48]:
of memory.

JP Camara [00:42:49]:
You can't do anything about it. Whereas if you isolate a little bit more, you can get really efficient memory use at a more isolated level. And then it's a similar thing to "I shaved off 30 milliseconds". Well, I shaved off 200 megabytes of memory, right, over the spectrum of all my processes. Now I get an extra process, or two or three, or whatever. And so now I have more capacity. And processes are one of the best forms of capacity, because they have a lot lower latency, which is kinda one of the benefits of a Unicorn or a Pitchfork. Latency on processes is really good.

JP Camara [00:43:22]:
Yeah. There's just I don't know.

Valentino Stoll [00:43:24]:
So, building on top of the idea of using what the language offers you. You've been getting more involved in the language, right? Yeah. And kind of where that is. So where is Ruby maybe missing pieces of this puzzle? What is being worked on that you can maybe shed some light on that

Charles Max Wood [00:43:45]:
I have a follow-up.

Valentino Stoll [00:43:46]:
Could use some improvement?

Charles Max Wood [00:43:48]:
After you answer this.

JP Camara [00:43:50]:
Sure. That sounds very secretive, or dire, or something. I don't know. I just don't wanna confuse the question. Answer this question correctly, and you'll live. Blue. No. It's green.

JP Camara [00:44:01]:
Yeah. Exactly. I feel like I'm in Monty Python, I just got ejected into the air kind of thing. So, yeah, I don't wanna speak too much for the language itself. Right? I'm not a core maintainer or anything like that. I've contributed some stuff to the M:N thread scheduler, and there's actually a local guy, Seung Hun Jung, who's a student researcher at Brown, and he's done some really interesting stuff with Ractors, so I've been going back and forth with him a bit. But to me, in terms of the language itself and expanding on that, there's a bunch of pieces in place already that I think just need more contributor coordination, or community involvement, or something like that.

JP Camara [00:44:47]:
And in particular, you know, Samuel Williams is insane in terms of how much contribution he's done for the fiber stuff. Right? He goes into every project, he's like, do this, do this, do this, and fibers will work great, and then that benefits it. So fibers are in pretty good hands with him, and those are going in a good direction. Ractors, I believe you've talked to people about Ractors before. And Ractors are almost like this unfulfilled promise. I was super excited about Ractors for Ruby 3, and they're just always not quite there.

JP Camara [00:45:20]:
And so I would like to see more community involvement in Ractors, so that people can take advantage of more of that actual parallelism. And I think if Ractors got more stable than they are now, people would probably start to integrate them more into gems. And then you have a truly parallel thing that doesn't take a whole new process of memory. Right? Because you can do parallel Ruby otherwise, but it's processes that use a lot more memory, all that stuff. Whereas with Ractors, it's pure parallelism. And so I think there are key inconsistent features in those that, if they got fixed, could bring more community involvement, and we could expand the area of Ractors. The other piece that I think is really valuable, the piece that I contributed the macOS support for, is the M:N thread scheduler. And I think the M:N thread scheduler is probably one of the most potentially beneficial things to the ecosystem.

JP Camara [00:46:12]:
And the nice thing about it is it can be potentially beneficial without anybody really having to do much of anything. The M:N thread scheduler basically operates a lot more like goroutines in Go. In Go, you have goroutines, and you don't think about threads. You don't think about anything. You're just like

Charles Max Wood [00:46:28]:
Right.

JP Camara [00:46:28]:
I just create five million goroutines and whatever, and they just do my work. And so the M:N thread scheduler is trying to fill a somewhat similar role, in that it backs itself with some actual OS threads, but a lot of the rest of it uses the same kind of principle as fibers. They use this thing called a reactor, and it uses some efficient OS processing to handle it. And so I think if M:N thread scheduling support got really strong, more servers could enable it, and you would actually be able to support the same level of concurrency on existing servers with fewer threads, and so less memory overhead, less CPU overhead, less thread scheduler coordination. And so from an outsider's perspective, who's contributed a bit to CRuby itself, that's probably my hierarchy. Samuel Williams is doing a great job promoting and doing great stuff with fibers. That's in pretty good hands.

JP Camara [00:47:18]:
Ractors and the M:N thread scheduler are pretty much in Koichi's camp, and I would like to see that get expanded more, and the M:N thread scheduler get more full support, so we start seeing it used in servers and being better able to scale servers. And then kind of the cherry on top is, if we got better Ractor support, people can reach for that and say, I've got this super parallel thing. Like, I wanna do some image processing or something, and I don't want to spin up another process. I hand it off to this Ractor, it operates in parallel while I'm doing this other thing. Perfect. I've got my parallel stuff in Ruby.
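(Editor's aside: that hand-off looks roughly like this today. Ractors are still experimental in CRuby, so the interpreter prints a warning when you create the first one, and the squaring body here is just a stand-in for real work like image processing.)

```ruby
# Editor's sketch: hand CPU-bound work to Ractors so it can run in parallel,
# without forking a whole new process. Experimental in CRuby 3.x.
inputs = [1, 2, 3, 4]
ractors = inputs.map do |n|
  Ractor.new(n) do |value|
    value * value          # stand-in for image processing, etc.
  end
end

results = ractors.map(&:take)  # block until each Ractor finishes
puts results.sum               # => 30
```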

JP Camara [00:47:51]:
And so those are kind of the 3 areas for me. Threading itself is a pretty solid, long-term thing. The M:N thread scheduler, if it succeeds, I think would kinda take over for that and become the new default, and it would hopefully increase the performance of a lot of servers, or at the very least decrease their overhead. Right. So, yeah, those are the perspectives from seeing the internals of Ruby. I've been trying to read more Ractor source code too, to see if I can help with some of that stuff, because I really want Ractors to be more stable. And I think, like, Seung Hun's work, he created a server called Moro, so we could probably reference that or something. It's an actual Ractor-based server.

JP Camara [00:48:37]:
And it's sort of a bit of a toy project, but the principles could be kinda powerful for certain things. But because of bugs that are in Ractors, it just hit a wall, and it hasn't gone any further since then. But yeah. So: the M:N thread scheduler, fibers, and let's get Ractors going too.

Charles Max Wood [00:48:55]:
That's my

JP Camara [00:48:56]:
that's my answer there.

Charles Max Wood [00:48:57]:
My follow-up question was along some of these same lines. I mean, you're talking about features that we either have and could use, or, you know, maybe could become more usable in the future. Yeah. But I was also thinking along the lines of things like YJIT, for example.

JP Camara [00:49:18]:
Mhmm.

Charles Max Wood [00:49:19]:
Not in the sense of concurrency, but YJIT was, you know, you'd compile with a flag, or you'd turn it on. And now you can kinda turn it on without using the flag when you compile it, or you can have it on by default. Are there features in Ruby that you have to turn on that way? Or are there tweaks you can make, or flags you can give the VM, that also allow you to do more concurrency or better concurrency?

JP Camara [00:49:52]:
Yeah, that's a really interesting question. I'm trying to think through if there's anything that sticks out to me. I think, again, the M:N thread scheduler is still kind of experimental, but I think it would become sort of that kind of thing. Like, I see it serving a similar role to YJIT, where once YJIT is just a thing you can say, okay, I wanna turn this on, and things just get better. I think the M:N thread scheduler could play that same kind of role for a server, or for Sidekiq, or something, because the overhead of those becomes lower, and now you can technically scale your server higher based on that. So that's probably the only one that comes to mind, but it's not really one that you can actually utilize in production today.

JP Camara [00:50:37]:
So it's not a good answer in that regard.
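(Editor's aside: for reference, the experimental M:N ("MN") thread scheduler discussed here became opt-in via an environment variable in CRuby 3.3, per the Ruby 3.3 release notes; it applies to threads on the main Ractor.)

```shell
# Opt in to the experimental M:N thread scheduler (CRuby 3.3+):
# N native threads carry M Ruby threads, lowering per-thread overhead.
RUBY_MN_THREADS=1 ruby app.rb
```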

Valentino Stoll [00:50:41]:
Otherwise

JP Camara [00:50:44]:
I guess, actually, probably the one that comes to mind that actually exists, that you could tweak or turn on to potentially get some benefit, is reforking. That's the only one I can really think of. Both Puma and Pitchfork have a capability for reforking. And reforking is something I haven't really dug into too much myself, but it comes to mind as, hey, if you turn this on, if you take advantage of this, you can potentially have processes that take close to, you know, near as much memory as a thread would, but can actually run in parallel. So you kind of get that Ractor flavor without having to change how you code everything. So, yeah, reforking is probably the only thing I can think of that's semi built-in right now that you could take advantage of for certain things. And I'm curious to see if more tools end up picking up on the reforking concept.

JP Camara [00:51:36]:
It's still technically experimental in Puma, I think. It's obviously the flagship feature of Pitchfork, and that's what Shopify is using. And I think that saved them it was, like, 11 or 15% of memory and latency or something like that. There's an article that Jean Boussier posted about it. Because I was asking at one point, like, how is this doing? And then a little bit later, they posted an article saying, like, Pitchfork's in prod, here are the benefits that we got out of it. So that's probably the only thing I can think of today that you could take advantage of that you're not. But I bet there are things out there, and I hope there are more things I find as I dig into the series where I'm like, oh, wow.

JP Camara [00:52:16]:
Like, yeah. I can tweak this thing. I can tweak this stack size or something like that. It's like, you know, when you're trying to scale and tune your Ruby instance, you can take advantage of that. Yeah.

Valentino Stoll [00:52:27]:
Yeah. It's funny. I remember seeing the fallout from all of the Pitchfork stuff, and it kind of was a reflection of all the work they had done previously, like, with copy-on-write and, like, object shapes.

JP Camara [00:52:38]:
Like, it

Valentino Stoll [00:52:39]:
was like a culmination of events that just, like, oh, hey. Look. We, like, shaved off this massive chunk of time. Right? Like,

JP Camara [00:52:46]:
That's a great point. It really does. It really has

Charles Max Wood [00:52:49]:
it all come together, and now all four lines disappeared

JP Camara [00:52:52]:
at once. Exactly. Yeah. That's the best way to describe it, because you're right. Like, earlier versions, I seem to remember, I think it was Ruby 2 or something like that when it came out, it was like, oh, we've got copy-on-write. But when they did a garbage collection, it marked every object.

JP Camara [00:53:07]:
It was like, no, all the copy-on-write goes away. They've done all this work to improve that over time. It's easy to forget about that stuff because it's come so far. But I do see an incredible amount of work, looking at the issue tracker and all that stuff that these people are doing. You watch Ruby talks and stuff where someone's like, was it Jeremy Evans? I think he did a talk on, like, oh, this particular thing, I made it a no-op, because you don't actually use this particular object unless you do this other thing. And so it saves, like, 5 megabytes of memory. It's like all these small things just culminate together.
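
The copy-on-write behavior being described is easy to sketch: anything allocated before a fork is shared with the child until one side writes to it, which is exactly why a GC that flipped mark bits inside every object undid the savings. A minimal illustration using only core Ruby (POSIX `fork`, so not Windows):

```ruby
# Data allocated in the parent before fork is shared with the child
# via copy-on-write -- the child reads it without duplicating the pages.
records = Array.new(100_000) { |i| "record-#{i}" }

pid = fork do
  # Reading does not copy; only writes would trigger page duplication.
  exit!(records.size == 100_000 ? 0 : 1)
end

Process.wait(pid)
puts $?.exitstatus
```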

JP Camara [00:53:39]:
Yeah. And we all just benefit from it, which is awesome. Yep. Not stuff you would think of as a higher level programmer. Like, I think we all are, which is, like, oh, I'm thinking about how do I make these HTTP requests better? And they're like, how can I do a million iterations instead of 500,000? Right? They're like, oh, cool. That's great. I'm glad you're thinking about that.

Charles Max Wood [00:54:01]:
Yeah.

JP Camara [00:54:02]:
Yeah.

Valentino Stoll [00:54:04]:
I know. It's it's the race to see, okay, well, are are resources cheap enough that we could just have, like, more resources? Or, like, does it, like

Charles Max Wood [00:54:11]:
you know, does it really matter that we're making things more performant in the end? Right?

Valentino Stoll [00:54:14]:
Like, I don't really know, but I'm glad people are working on it.

Charles Max Wood [00:54:17]:
Well, it's interesting you bring that up, because at scale, all the tiny things add up. But if you back it all the way off to, hey, Chuck's running his stuff out of a garage on Linode. Right? Mhmm. You know? And that's effectively what I'm doing. Right? I'm in my house, and I run things from here and, you know, host on Linode. So if I can get more resources out of that server, or maybe 2 servers or however I have it set up, before I have to scale or figure out some of this other stuff. I mean, it's not just down to, hey.

Charles Max Wood [00:55:01]:
The server's more efficient, but it's also, hey. I can get much further down the road before I have to add complexity and figure out how to do the ops to make this stuff come together. So it benefits people on both ends.

JP Camara [00:55:14]:
Right. It's almost like it benefits people on both ends the most. And then in between, it's kinda like varying levels. But, yeah, when you're at that lower end where you're like, I just wanna spend, like, $8 a month on this thing. Like, yeah, if you shave off 15 megabytes of memory, it's like, oh, that actually did benefit me. Now I can do this extra request or whatever. So that's a really good point.

Charles Max Wood [00:55:34]:
Yeah. And the other thing is, my scaling strategy is I'm gonna buy bigger servers.

JP Camara [00:55:39]:
Right? Sure. Yeah. Yeah. You know? And if you can live

Charles Max Wood [00:55:42]:
in that land and it's easy and it's cheap anyway,

Valentino Stoll [00:55:47]:
I always like thinking,

Charles Max Wood [00:55:48]:
you know, can I run it on a Pi? You know? And, like, that's, like,

Valentino Stoll [00:55:52]:
the crux of, like, you know, they're so cheap and, like, you know, can you just install it and make it work there? You

Charles Max Wood [00:55:59]:
know? Yeah.

Valentino Stoll [00:55:59]:
And, like, that's honestly a significant barrier because, like, it is somewhat resource constrained. Like

JP Camara [00:56:07]:
Mhmm. Yeah.

Valentino Stoll [00:56:08]:
And but it does have, like, a very specific value that's very small. So, like, you know, I like that to be my, you know, point of, can I get this to work here? And it makes me wonder, like, you know, if I had a Commodore 64 lying around, could I run anything on that? Like, I don't know.

JP Camara [00:56:27]:
Sure.

Valentino Stoll [00:56:28]:
Probably probably not much. You know? Like Right.

JP Camara [00:56:31]:
Yep. Probably, yeah, probably not most, like, higher level languages. I mean, I'm not even sure. Maybe, like, mruby or something, because I know mruby, like Yeah. Targets, like, microcontroller type devices.

Charles Max Wood [00:56:43]:
Yeah. Embedded.

JP Camara [00:56:43]:
So Yeah. Embedded devices. Yeah. So maybe there. But, yeah. There was an interesting talk, I think maybe at RubyKaigi or something. It was during COVID, and they were using mruby to do, like, martial arts remotely or whatever. I'm not sure if you guys

Charles Max Wood [00:56:59]:
Oh, cool.

JP Camara [00:56:59]:
But I think they used, like, mruby to control the smaller units, and then they used, like, JRuby to do an Android app to control how the thing responded to it. Right. Yeah. So but that's true. Like, a Commodore 64, yeah, would anything run there? I guess C probably would. C can probably run anywhere depending on how you write it. But yeah.

JP Camara [00:57:22]:
I sort of promised myself, most of my programming career, I'm like, I'm never gonna learn C. That seems like, why would I do that? And now I'm doing it, because it is still just like if you wanna get into low levels of languages or anything, it's still the one to use. And it's not terribly hard to learn, but it is terribly hard to not destroy everything by accident. But

Valentino Stoll [00:57:44]:
yeah. So I'm curious. Like, I know we're almost out of time here, but I always wonder, like, okay. If I have this thing running on, you know, a Pi or something like this, you think, okay, well, I just buy another Pi, and I can replicate the process and, like, spread it over the machines. Like, you know, how easy is that from, like, a concurrency perspective? Like, within Ruby, is that, like, a forking or, like, not even a forking, but, like, is is

Charles Max Wood [00:58:11]:
there even, like, a shared, like,

Valentino Stoll [00:58:12]:
you know, system library or built-in way to do that kind of thing, or am I kind of SOL, dependent on IBM or something to have a library that I have to use?

JP Camara [00:58:27]:
Yeah. I'm not sure I have the best answer there. I've always liked the idea of, like, Pis and, you know, Arduinos, stuff like that. And I'll buy them, and then I'll not end up really doing very much with them and just feel sad about it. But, yeah, that's interesting. I think it probably would be like an mruby type thing, I would imagine, depending. I mean, some Pis can actually have pretty okay hardware on them, right, to my understanding. Like, they're not super beefy, but they're certainly not like an Arduino. Is that correct to say?

Charles Max Wood [00:59:01]:
Yeah. I think so. Yeah. I think you can run Rails on it.

Valentino Stoll [00:59:05]:
On a

JP Camara [00:59:05]:
So I think you probably could take advantage of all of the regular concurrency Like,

Charles Max Wood [00:59:10]:
what I would what I would love

Valentino Stoll [00:59:12]:
is just to have, like, you know, k8s or, like, Kubernetes, but, like, in a physical form, you know, where I just add another node. Right? Yeah. Just plop it in.

JP Camara [00:59:20]:
That's a super cool idea. Yeah. And honestly, like, what comes to mind first for that sort of thing is probably something like an async-based thing, you know, because it's fibers. They're a lot lower overhead. They take less memory. You know, on a Pi, you probably don't have a lot of cores. Well, I mean, obviously, you have some cores, but you're probably not much beyond, like, a couple cores or something.

JP Camara [00:59:43]:
Yeah. Exactly. And so, like, having a couple processes and then being able to use fibers to communicate between things. And, yeah, that's kind of a cool idea of, like, you add another Pi. It's just like another node available on

Charles Max Wood [00:59:55]:
your Right. Yeah.
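
The appeal of fibers on small hardware is concrete: they're cheap enough to create by the thousand, which is what fiber-based tools like the async gem build on. A core-Ruby sketch, no gems assumed:

```ruby
# Fibers carry far less overhead than threads, so thousands can coexist
# comfortably even on a couple of cores.
fibers = Array.new(10_000) { |i| Fiber.new { i * 2 } }
results = fibers.map(&:resume)  # run each fiber to completion
puts results.last               # prints 19998
```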

JP Camara [00:59:57]:
Yeah. And so, like could use

Charles Max Wood [00:59:58]:
DRb under the covers? So You

JP Camara [01:00:00]:
probably could. Yeah. If you wanted to really just communicate through, like, a more RPC type interface. Yeah. Yeah. Communicating DRb through those would probably make the most sense rather than trying to do some other layer or whatever. So, yeah, communicating DRb. Now I really wanna try DRb and fibers together, actually, because I'm sure it works.
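
DRb ships in Ruby's standard library, so the "add another Pi as a node" idea needs no extra layer. A minimal sketch with server and client in one process for brevity; across machines only the `druby://` URI would change (the `Adder` service here is a made-up example object):

```ruby
require 'drb/drb'

# A plain Ruby object exposed over the wire.
class Adder
  def add(a, b)
    a + b
  end
end

# Port 0 lets the OS pick a free port; DRb.uri reports the actual address.
DRb.start_service('druby://localhost:0', Adder.new)
remote = DRbObject.new_with_uri(DRb.uri)
sum = remote.add(2, 3)  # a real method call, marshalled over a socket
puts sum                # prints 5
DRb.stop_service
```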

Charles Max Wood [01:00:19]:
I bet I bet that's and and if it's not a workable solution, I can't imagine it would take very much.

JP Camara [01:00:29]:
Yeah. And probably just because we've spoken about it into the air, Samuel Williams has probably heard it somewhere. Like, wait. I sense a I sense a fiber solution that needs to be done. So if it doesn't work, then it probably soon will.

Charles Max Wood [01:00:41]:
In the force.

JP Camara [01:00:42]:
Yeah. Like, I can hardly ever go to a random Ruby project and not be like, oh, Samuel Williams is part of the thing here. It's like the dude's everywhere. He's like, what is it? Like, Andrew Kane or something? The Instacart guy who made, like, 5,000,000 gems. They're very similar type people where I'm like, I don't know how you are involved in all these things and have, like, a family and all this stuff.

Charles Max Wood [01:01:03]:
Right.

JP Camara [01:01:03]:
That's great. But, yeah, that does sound like a pretty reasonable solution, sort of, because DRb uses, like, more of a binary protocol type thing. Right? Yeah. Yeah. Communicate between each other. Yeah.

Charles Max Wood [01:01:15]:
Yeah. But I don't see why you couldn't sneak that up inside of a fiber and then Yeah. Definitely. Asynchronously communicate.

JP Camara [01:01:22]:
I thought it would just work. Yeah. I think it would. I would be surprised if it didn't, because the mechanisms for how they communicate are probably all the low level sockets and stuff like that that fibers support perfectly. So I'm looking forward to your blog post or your open source project. We'll call it P8s or something like that.

JP Camara [01:01:43]:
It's like, Pybernetes or something.

Charles Max Wood [01:01:46]:
That's right.

JP Camara [01:01:47]:
Yeah. I'm excited about that.

Charles Max Wood [01:01:48]:
Gonna have Skynet running in his house.

JP Camara [01:01:51]:
Right. It's just like you walk into a room, and he's just covered in, like, Raspberry Pis. You're like, what happened? And, like, they

Charles Max Wood [01:01:57]:
I'll tell you what, with

Valentino Stoll [01:01:58]:
all this AI stuff, you know, I'm gonna make an AI clone of myself and then eventually just, like, have it work on all the side projects. You know? Right.

JP Camara [01:02:05]:
Right. You're like, oh, I always wanted to work on this. Like, hey. Right. Yeah. Fibernetes is good to

Charles Max Wood [01:02:09]:
Hey, team. Yeah. Go go go for it. Right.

JP Camara [01:02:11]:
Exactly. That sounds super fun, actually. I'm I'm curious how that would work. But yeah. I mean, not the taking over our lives thing, but the the Pipernetes.

Valentino Stoll [01:02:21]:
Yeah. The Pipernetes.

JP Camara [01:02:23]:
Let's start it. I like that.

Valentino Stoll [01:02:24]:
Small baby steps. You know?

Charles Max Wood [01:02:25]:
Baby steps. He'll be the Pi guy.

JP Camara [01:02:28]:
The Pi guy. Yeah.

Charles Max Wood [01:02:31]:
Alright. Well, we're devolving. Let's jump into pics and wrap

JP Camara [01:02:35]:
this up. Once you yeah.

Valentino Stoll [01:02:37]:
It's fun, but

JP Camara [01:02:40]:
the moment you get to Fibernetes, that's when you know you got That's right. Yeah. Yeah. We're taking Valentino to a bakery after this for sure.

Valentino Stoll [01:02:47]:
But Oh, yeah. Oh, don't let me.

Charles Max Wood [01:02:53]:
Alright. Valentino, what are your picks? Sure. Yeah. I've been working on a

Valentino Stoll [01:02:58]:
a lot of fun AI stuff lately. I started this thing where I'm trying to fine tune an LLM specifically for Ruby code generation. I started it at rubylang.ai, and I'm just about wrapping up a library to go through the whole fine tuning process, just kind of for fun. All of this is just for fun. I hope something actually works from it. But, yeah, you know, everything's Python generated, so we should have a Ruby tuned version of that. And I got in touch with the Hugging Face, you know, leaderboards, and they have very specific evaluators. Somebody made a Ruby one, so I'm gonna try and use the data for that.

Valentino Stoll [01:03:45]:
So yeah, so rubylang.ai is something I'm working on. Lot of fun. And another I tried so we have, like, a Ruby AI Builders Discord if you're not familiar. Check

Charles Max Wood [01:03:55]:
that out. Good.

Valentino Stoll [01:03:56]:
It's so much fun.

Charles Max Wood [01:03:57]:
I just There's

Valentino Stoll [01:03:57]:
a bunch of Ruby people, like, playing with AI stuff and sharing what we're learning. And I submitted this thing called Podcast Buddy. I just open sourced it, and it basically just listens to your mic, uses Whisper locally to transcribe stuff, and then it continually summarizes, and you can interrupt it and ask it questions, and it'll answer back with text to speech. It's a lot of fun. I'm trying to get other people to help me start building on it, because it's kinda just stupid and fun and funny. Yeah.

Charles Max Wood [01:04:29]:
And, so come hack on it: Podcast Buddy. And I'm gonna try and bring it

Valentino Stoll [01:04:37]:
on this podcast, and have some fun with it too, but we'll see. Awesome.

Charles Max Wood [01:04:43]:
Nice. Yeah. It's interesting you talk about that. I'm gonna veer into my picks, because what I wanna do is essentially build a podcast assistant, but it's more on the management side, all the stuff that I'm doing to run the podcast. So, you know, let's say that JP couldn't come on at our regular time. I could just tell the assistant, hey, find out when he's free and when the other hosts are free, and get it scheduled. Right? So then it does all the emails and coordination and, you know, stuff like that, and finds the time.

Charles Max Wood [01:05:22]:
Another one is, if I get an email from JP and he's like, you know, when I was talking about that one thing, I didn't say it quite right, and I I'd rather just drop it. I I typically don't do that because it's a giant hassle. If it's if it's before we publish it, it's a lot less hassle, and I'm more inclined to do it. But afterward, I usually just tell people no. Right? But it'd be interesting to be able to tell the assistant, hey. Go find where JP was talking about this thing and take it out, and then have it, you know, work through the transcript, find the time stamp, you know, cut it out, all that stuff. And so, yeah, the this is where my mind has been lately is, hey. I I do all this managerial stuff that I don't love.

Charles Max Wood [01:06:07]:
And, you know, Mikaela does a lot of it, but I keep thinking that I could have her doing other things that are more not auto auto, automatable, if that's a word. Right? Things that I can't have the AI do or things that I can't easily have the AI do. And so, you know, when we were talking to Obi Fernandez, and he was talking about having these assistants that do these different things, some of those were pieces of things that I'm looking at. You know? Hey. We have a new sponsor. You know? Here's when it's gonna run. That kind of a thing. Though, I'm I'm also considering just not doing sponsors anymore, but that's that's a different conversation.

Charles Max Wood [01:06:52]:
You know, or at least just sponsor with my own products. But yeah. So, anyway, interesting stuff. I usually do a board game pick as my first pick, so I'm gonna veer into that really quickly. My wife and I went to the game store. Incidentally, our neighbors own the game store. So we're paying for their kids' Christmas when we go there. But, anyway, we picked up an Unlock!, and I'm not sure if people are familiar with Unlock!.

Charles Max Wood [01:07:27]:
And I'm not sure that BoardGameGeek actually has ratings for just Unlock! in general. It looks like they've got a couple of different versions. Anyway, the Unlock! Escape Adventures from 2017, that one has a board game weight of 2.10. I'm guessing they're all probably pretty similar. Anyway, what it is is it's basically an escape room in a card game, and you have an app on your phone. Some of the things that you get from the cards, you put the number from the card in, then you interact with the machine or things like that. So we bought one. The one that we bought has 3 adventures in it, and they're all based on different board games.

Charles Max Wood [01:08:22]:
So one's based on Ticket to Ride, one's based on Mysterium, and the other was based on something else that I can't remember at the moment. But it's another board game that we play fairly frequently. And so that was fun. We did the Mysterium one. We got stuck a couple of times, so it took us 90 minutes to do a 60 minute escape room. But, honestly, about 20 minutes of it was we figured out, so in Mysterium, you have a place, you have a person, and you have a murder weapon. And so in the Unlock!, you were doing the same thing. You were finding the place, the murder weapon, and the culprit.

Charles Max Wood [01:09:01]:
And so we found the location, and we couldn't figure out what to do next. And what we needed to do was go report it back to the policeman. So, you know, otherwise, it would have been 70 minutes for the 60 minute escape room. But, anyway, it was a lot of fun. We really enjoy them. What I tend to find is that so she and I just did it together. They tend to be a little bit more fun with a few more people, so 3 or 4 people. We've done it with 6 or 7 people, and that tends to get a little bit chaotic.

Charles Max Wood [01:09:33]:
And if you want to be involved in solving the puzzles, it just gets a little bit hard sometimes. It's like, you know, you have 4 people leaning over the same card, and it just anyway. But I really enjoy them. They're a lot of fun. We also have a Star Wars set of Unlocks. And so what you do is you open up the Unlock! app, and you just tell it which one you have, and then you play through it. Anyway, they're really, really fun. So I'm gonna go ahead and put that in as a pick.

Charles Max Wood [01:10:04]:
I had somebody complain on JavaScript Jabber that I got long winded on my picks. So I'm gonna just keep it short. I've been watching the Olympic soccer. That's about the only part of the Olympics I'm interested in. My wife is watching like everything else except the Olympic soccer, but that's been fun. And so I'm gonna pick that. And then, yeah, this week or next week, I'm gonna have Ruby Geniuses and AI for Ruby dot com. Both of those domains will be up, and you can, join in the fun where we're doing weekly calls, get a newsletter, that kind of a thing so that, you know, we can kind of have these conversations in in different ways.

Charles Max Wood [01:10:47]:
And then, yeah, definitely go join the AI group on Discord because they're awesome. Alright. JP, what are your picks?

JP Camara [01:10:55]:
Yeah. I mean, I do wanna say the Unlock! sounds super fun, and I've also always marveled at you always having a board game pick. You must have, like, a wing of your house that's just literally exclusively board games. I just can't place it in my brain. But, yeah.

Charles Max Wood [01:11:09]:
You have no idea.

JP Camara [01:11:11]:
I can't even imagine. Yeah. But you could pretty much, like, you know, 52 ish weeks a year have a board game pick. It's just

Charles Max Wood [01:11:17]:
Sometimes I duplicate it, but yeah.

JP Camara [01:11:20]:
Okay. Yeah. Yeah. I love board games, but I hardly get around to playing them. But it's enjoyable. So, yeah, I've got a couple of picks. I was on Jason Swett's Code with Jason recently, and we were talking a lot about, like, smarts and mindset, how you can learn and things like that. It reminded me that a very influential book for me is Mindset by Carol Dweck.

JP Camara [01:11:44]:
Good book. Yeah. Super good. I always loved it. Pre kids, very influential for me with my kids as well. Just trying to impart that growth mindset versus a fixed mindset. So just a great book that I recommend to everybody. The other one came from being about to go on a podcast previously.

JP Camara [01:12:08]:
I just started noticing how everybody sounds on a podcast. It made me feel like I can't just use my AirPods. So hopefully this sounds pretty good. But I went out to Best Buy and got, like, a Wave:3. So not super expensive. It was, like, $140 or something. So not crazy, you know, not, like, a $500 one or something.

JP Camara [01:12:25]:
But I just wanted to have better mic quality, and I use it for all my meetings and stuff now too. So I Yeah. Use it for other stuff as well. But I've been pretty happy with it, and I think it sounds a lot better than just, like, some AirPods or something. Right? And then the third one is just literally the concept of going to a meetup. So Boston's fairly nearby to me. It's about an hour drive.

JP Camara [01:12:46]:
And their meetup started again, and it's been great. It's the Boston Ruby Group meetup. That's actually where I met Jason Swett and Kevin Newton when they were doing some presentations, and just cool people in the Ruby community. So I've been really happy that I've kind of been reconnecting with that and getting more involved. So yeah. And then I guess I could pick my own blog series, but that seems a little self serving. But you can go to jpcamara.com and read about concurrency.

JP Camara [01:13:14]:
So, yeah, those are those are my picks. Two picks and a concept.

Charles Max Wood [01:13:19]:
Awesome. Well, thanks for coming.

JP Camara [01:13:24]:
Yeah. Thanks for having me. This was super fun.

Charles Max Wood [01:13:27]:
Alright. We'll wrap it up here. Till next time, folks. Max out.