Charles Max_Wood:
Hey there and welcome back to another episode of the Ruby Rogues Podcast. This week
Charles Max_Wood:
on our panel, we have Valentino Stoll.
Valentino_Stoll:
Hey now.
Charles Max_Wood:
I'm Charles Max Wood from Top End Devs, and this week we have a special guest. It's Ivo, is it Anjo?
Ivo_Anjo:
It's Andrew.
Charles Max_Wood:
Anjo,
Ivo_Anjo:
Yes.
Charles Max_Wood:
awesome. You wanna introduce yourself, let people know who you are, why you're awesome.
Ivo_Anjo:
Yes, definitely. So thanks for the invite. I'm Ivo Anjo, and I really enjoy messing with Ruby, the Java Virtual Machine, desktop Linux, and open source stuff in general. Of the Ruby things in particular, I really like working on performance and making Ruby better. So that's kind of how I started on a bit of a quest to build better tools to help people understand, because in a way I feel like there's often this perception that Ruby is slow. And definitely there are alternatives when you want to build some very, very high performance things, but my experience is that often I've seen applications where it's not Ruby that's slow: the application is doing something that is accidentally very slow. That thing is just much more expensive than you thought, and so you kind of look at it and think, hmm, Ruby is slow. But no, it's just that you're doing something and you hadn't realized how costly that thing was.
Charles Max_Wood:
So how do you... I mean, cause I just do it and then I complain. I mean, that's my solution: complain loudly and often.

Ivo_Anjo:
Yeah.

Charles Max_Wood:
But yeah, so how do you start to instrument your code so that you can know, hey, this was accidentally slow? And then the other question is, how do you figure out what's accidentally fast?
Ivo_Anjo:
Yes. So, I think that's part of the challenge in Ruby, and why I'm working on a bunch of stuff to try to make that better. Part of the issue that people have with Ruby is that they go from "okay, my app is slow" to "Ruby is slow", but the fact is that, historically, Ruby hasn't had a really great story there. There's not a lot of good tools for looking into what's making your application slow. Or at least, compared to what you have for Java or .NET or even Go, whatever we have in Ruby is still miles behind that. So that's why I've built a few tools, and part of the work I've been doing this year has been building tools to figure out more things and get more information about what your Ruby app is doing, so that you can begin on the path to actually making it better, so that Ruby goes from slow to fast.

One of the things I did a few months back was I created a new gem called the gvl-tracing gem, which works based on a new API that Ruby got recently. It's going to be released with Ruby 3.2, so right now, if you want to use this gem, you need to build Ruby from master or get a preview release of Ruby 3.2. And what it does is it allows you to see which of your threads in your application is actually using the Ruby global VM lock. So it allows you to answer part of the question, which is: sometimes, if you have multiple threads in your Ruby application, because maybe you're using a web server that uses multiple threads, you don't know that this request got slowed down not because of something that the request itself did, but because another thread was in the background trying to do something else, and that thread was actually the one that held on to the Ruby global VM lock. Because in Ruby only one thread can hold the global VM lock and execute, it means that this request got slowed down not because your code is wrong or you did something that made it slow. And this kind of visibility is something that's really, really hard to get in Ruby. So I was really, really excited when this API was added to Ruby, and I immediately wanted to build a visualization for you to be able to see what your threads were doing at each point in time.
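For readers following along, usage looks roughly like this, a minimal sketch based on the gvl-tracing gem's README (details may have evolved since):

```ruby
# Minimal sketch based on the gvl-tracing gem's README; requires a Ruby
# with the new GVL instrumentation API (a 3.2 preview / master build
# at the time of this episode).
require "gvl-tracing"

GvlTracing.start("gvl_trace.json") # trace file, viewable in ui.perfetto.dev

# Two CPU-bound threads competing for the GVL, so the trace shows
# alternating "running" / "waiting for GVL" slices for each thread.
t1 = Thread.new { 5_000_000.times { |i| i * i } }
t2 = Thread.new { 5_000_000.times { |i| i * i } }
[t1, t2].each(&:join)

GvlTracing.stop
```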
Valentino_Stoll:
Yeah, you know what? This is really great. I love the work that you've done with the new instrumentation API.
Ivo_Anjo:
Yeah.
Charles Max_Wood:
Hmm.
Valentino_Stoll:
Well, before we do, what you were just talking about makes me think of frontends. We use JavaScript for all of our frontends and they all just blast requests.
Ivo_Anjo:
Okay.
Valentino_Stoll:
And it's meant to be asynchronous, if you want to call it that even. I don't know if it's truly asynchronous, but they want it to be. And we have all these UIs. everything to be asynchronous and it's not really right because we have the GDL as an example and that we're making this progress but it's definitely hard to track all of that across the whole request right from it doesn't even have to be rails request could just be rack and it I love seeing this basically is what I'm saying.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
I really hope that we can accurately shed light on what's happening in the Ruby itself, so that we can track all of the across all the threads. I mean, especially with all of the great work that is getting done with like async gems and suite of gems.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
What's something about the instrumentation API that has you excited and kind of hopeful that you can make sense of it all?
Ivo_Anjo:
Yeah, definitely. And actually, this API was added by an engineer working at Shopify, and one of the things they built is a gem that gives you metrics around this. One of the things that I'm really excited to quantify is: if your request is being handled by a given thread, how long does that thread need to wait to acquire the global VM lock? Because in a way, that gives you a number, which is how much time you basically lost because your Ruby was doing something else. And that something else can be running something in the background, it might be servicing a different request, it might be that Ruby was garbage collecting. You can already usually see how much time a request spent waiting for, say, a database request, which is quite important and can be an impact on performance too. But there's a big bit which you usually would not get visibility on, which is: here's how much time your request was penalized just because your Ruby was doing something else. And that's the thing that I'm really interested in getting more information about.
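The Shopify gem mentioned here is, as far as I know, gvltools; a rough sketch of its per-thread timer, with API names taken from its README at the time (treat them as an assumption):

```ruby
# Rough sketch assuming the gvltools gem's API (GVLTools::LocalTimer);
# requires Ruby 3.2+ for the underlying GVL instrumentation API.
require "gvltools"

GVLTools::LocalTimer.enable # per-thread timer of GVL wait time

def handle_request
  # Hypothetical request work: pure-Ruby computation competing for the GVL.
  200_000.times { |i| i * i }
end

before = GVLTools::LocalTimer.monotonic_time
handle_request
waited_ns = GVLTools::LocalTimer.monotonic_time - before

# Time this thread spent waiting to (re)acquire the GVL while "handling"
# the request: time effectively lost to other threads or to GC.
puts "GVL wait: #{waited_ns / 1_000_000.0} ms"
```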
Ivo_Anjo:
And one interesting case is, for instance, if you're using a web server with a lot of threads and you're doing a lot of computation. Traditionally, the general consensus in Ruby is that the most expensive part of a web application is database requests or any kind of IO, so the Ruby parts aren't that expensive. But if you process a lot of information and you spend a lot of time encoding JSON or decoding JSON or doing that kind of work, that actually is the kind of work where, if you do a lot of it, it will impact not just this request, but all requests that are running at the same time, if you are using multiple threads. And it's kind of interesting: I had previously dug into the Ruby VM source code and saw that, for instance, when you're doing a lot of work on the same thread and you don't do any database calls or anything, Ruby will allow your thread to run for at most 100 milliseconds and then it will switch to the next one. I knew intuitively about this, but it was really interesting to see the effect of it. For instance, if you launch 10 threads that are all doing something really heavy, like parsing JSON or encoding JSON for one second (one minute would be even worse), a thread can wait for an entire second. Because, well, if you have 10 threads and every thread gets 100 milliseconds, then you need to go round robin through every other thread before it gets back to yours. And you might not have noticed this before, because there were not a lot of great ways to notice it, but it's really interesting to see it show up in a visualization where you can really tell: oh, I can see my thread just waiting and waiting and waiting before it could do something, before it could make progress. Which is really interesting.
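A small experiment (my sketch, not from the episode) that makes this round-robin effect visible:

```ruby
# 10 CPU-bound threads on CRuby: only one runs at a time, and the VM
# switches threads roughly every 100ms, so each thread finishes far later
# than its own share of the work alone would suggest.

def busy_work
  2_000_000.times { |i| Math.sqrt(i) } # pure-CPU work; tune per machine
end

start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

threads = 10.times.map do
  Thread.new do
    busy_work
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  end
end

# Every thread tends to finish near the very end of the whole run,
# because they all round-robin on the GVL in ~100ms slices.
threads.each_with_index do |t, i|
  puts "thread #{i} finished at #{t.value.round(2)}s"
end
```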
Valentino_Stoll:
So how does the waiting work? Like, when you say a thread is waiting on something to be done, how is that working in the Ruby VM? And what are some ways that you've used the instrumentation to improve that?
Ivo_Anjo:
Okay, so there's kind of two ways, generally, for a thread to be stopped by Ruby so it can switch to another thread. Either the thread is running a bunch of Ruby code and Ruby at some point reaches this hundred-millisecond threshold and says: okay, time's up, I'm going to switch to another thread so that thread can work. In that situation, because your thread was interrupted halfway through doing something, it immediately goes on the queue and is immediately waiting to continue working. Ruby just interrupted it to give the other threads an opportunity, but if it could, your thread would have gone on without being interrupted. The other approach is when the thread, for some reason, releases the GVL and then goes off to do something. That usually happens when you're doing something like a blocking web request, or you're reading from a file, or something like that which needs some waiting, usually waiting at the OS level. The way that database libraries, and APIs in Ruby itself, handle this is that they release the GVL so that Ruby can continue doing something else in the meanwhile, and they have a mechanism to signal: okay, I need it again. So in that situation, you don't start waiting until there's something to be done. Your thread does a request to the database, and then it waits for the database to come back, and only when the database comes back does it say: I need the GVL now, because I'm ready to actually do some work and return back to Ruby land. So that's the difference: a thread might not have the GVL but not need it, because it has nothing to do, or it might be waiting for it because it actually has something to do and needs to execute.
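A quick way to see these two kinds of waiting side by side (a sketch; sleep stands in here for any GVL-releasing wait, like a database call or file read):

```ruby
require "benchmark"

# CPU-bound threads contend for the GVL, so their work serializes.
cpu = Benchmark.realtime do
  4.times.map { Thread.new { 3_000_000.times { |i| i * i } } }.each(&:join)
end

# Blocking waits release the GVL (sleep here, like IO in real code),
# so the threads all wait concurrently.
io = Benchmark.realtime do
  4.times.map { Thread.new { sleep 1 } }.each(&:join)
end

puts "4 CPU-bound threads: #{cpu.round(2)}s (roughly the sum of all the work)"
puts "4 sleeping threads:  #{io.round(2)}s (about 1s: the waits overlap)"
```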
Valentino_Stoll:
Does the VM provide a way to indicate that it needs the GVL?
Ivo_Anjo:
Yes.
Valentino_Stoll:
Yeah.
Ivo_Anjo:
So, basically, there's a few APIs, usually at the C level, and they allow C code or some kind of native code to tell Ruby which state we're in. Like, am I ready?
Charles Max_Wood:
Mm-hmm.
Ivo_Anjo:
Am I going to do something else, and don't worry about me? Or am I ready to return to Ruby code, and I need the global VM lock? So that's the way it works currently.
Charles Max_Wood:
So I'm just trying to get my head around how the global VM lock works. Cause my understanding was that, at least back in the day, we were always talking about the global interpreter lock, which I think is kind of the same thing. But anyway, you can correct me if I'm wrong there. But yeah, essentially what it does is it says: I'm doing something, and so, you know, don't switch threads on me yet. Right? So if it... I don't know.
Ivo_Anjo:
Somewhat. So, about the global interpreter lock versus global VM lock: I'm not sure Ruby's was ever called the global interpreter lock. Maybe it was in the past, but a lot of people call it the global interpreter lock because that's usually what the Python community calls their own version
Charles Max_Wood:
Okay.
Ivo_Anjo:
of the global VM lock, and it kind of stuck. So in a lot of places, people still say the global interpreter lock. Inside Ruby, at some point it was called the global VM lock. In fact, in
Charles Max_Wood:
Okay.
Ivo_Anjo:
modern Ruby versions, since the introduction of Ractors, they actually call it something else now, because you have the equivalent of one global VM lock per Ractor. But when you're not using Ractors, and you're using Ruby 3.1 or the upcoming Ruby 3.2, you still effectively have the global VM lock; it's just not called that in the source code. So that's just naming.
Charles Max_Wood:
Yeah.
Ivo_Anjo:
So, call it whatever
Charles Max_Wood:
OK.
Ivo_Anjo:
is easier. So I think that's not a big difference. The other part is that the way it works is, effectively, Ruby has this one lock that needs to be held whenever you're running any Ruby code or changing any Ruby state.
Charles Max_Wood:
Right.
Ivo_Anjo:
So: you're reading instance variables, you're writing to them, you're creating objects, all of that. Even if you are working in some C code, even if you build a Ruby C extension, if that C extension is calling Ruby methods and accessing Ruby objects, it still needs the global VM lock. So,
Charles Max_Wood:
right.
Ivo_Anjo:
it guarantees correctness, and it massively simplifies the VM design in a way. It's there, but
Charles Max_Wood:
Right.
Ivo_Anjo:
it still allows some degree of performance, especially, as we were talking before, with the async gem. When you're just going to the database and you want to await on a bunch of things, having these kinds of models in Ruby allows Ruby to still take advantage of that, even though the VM itself is not built to have everything executing in parallel all the time.
Charles Max_Wood:
Right. That makes sense. Yeah. You don't want stuff changing out from under you because something else is changing it at the same time.

Ivo_Anjo:
Exactly. And it can still kind of happen at the level of Ruby code. So one common misconception is that the global VM lock protects your Ruby code, when it actually only protects the VM code. So the VM code is always correct, but your code might not be protected, because you don't know if Ruby switches you out halfway through something. You might be doing something with two things, but maybe Ruby switches you away partway, so by the time you get back, the second thing might not have been read yet, and when you read it, you get an outdated version or something. So it makes the VM implementation correct, but you still need to be careful in your own code to make sure that it's actually correct as well.
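A classic way to see this for yourself (my sketch, not from the episode): two threads can both pass a check before either applies its update, even with the GVL in place.

```ruby
# The GVL keeps the VM's internals consistent, but it does NOT make your
# Ruby code atomic: a thread can be switched out between reading and
# writing a shared value.

balance = 100

withdraw = lambda do |amount|
  if balance >= amount   # read (check)
    sleep 0.01           # simulate work; invites a thread switch here
    balance -= amount    # write: another thread may have run in between
  end
end

threads = 2.times.map { Thread.new { withdraw.call(100) } }
threads.each(&:join)

puts balance # => likely -100: both threads passed the check first

# The fix is an explicit lock around the whole read-modify-write:
#   mutex = Mutex.new
#   mutex.synchronize { withdraw.call(100) }
```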
Charles Max_Wood:
Right, so at this point, we started out talking about how stuff gets slow.
Ivo_Anjo:
Yeah.
Charles Max_Wood:
Right? And now we're talking about the GVL.
Ivo_Anjo:
Mm-hmm.
Charles Max_Wood:
So are those some dots we can connect?
Ivo_Anjo:
Yes. So, in a way, there's a reason why a lot of big Ruby users, such as Shopify, still use web servers like Unicorn, web servers that are based on forking and not threads: if you're not using threads, then you don't get hit by this problem. The more threads you have, the more probability of getting hit by this problem, especially without visibility. Now that there's a new API to give you visibility, you can actually measure, or you will be able to measure with Ruby 3.2: am I getting penalized by this issue of multiple threads fighting for the global VM lock, or is my application really light on actually running Ruby code, so that in general it's just going to the database? In that case, I'm fine, and I can keep working on my current multi-threaded setup, because it's working fine and this is actually not the reason why my application is slow. There can be many reasons why my application may be slow, and this one isn't it.
Valentino_Stoll:
So have you experimented, like in a Rack setting, with tracing the GVL to see how HTTP request cycles work within Ruby? Because I know most HTTP serving in Ruby is more or less like an event loop. Have you experimented with any of those to see if the GVL was in fact an inhibitor in a lot of these request cases?
Ivo_Anjo:
I haven't yet. So part of the challenge with this information, and especially the visualizations, is that you get a lot of fine-grained data. Even looking at just one request over multiple seconds, or even 30 seconds if it's a really, really slow request, you get a lot of data. So it's still a bit of a challenge: how do you look at so much data, and how do you read it? I think this is still early times for this thing, and it's a big challenge how to make this visualization work for very complex applications.

Valentino_Stoll:
I'm kind of really excited about this, in that it kind of forces you, if you want to get insight into your Ruby code, to make things really small and compartmentalized, right? So you can have a very small thing that you need to test and be like, okay, can I get the ultimate performance out of many of these things running, right?
Ivo_Anjo:
Yes and no. So actually, this is part of the other things I've been working on this year. Part of the challenge that you sometimes have in performance is looking at your code in a realistic setting. Often, especially when you're looking at some very specific part of your application, it's possible to extract that bit, look at it with benchmark-ips, do a bunch of analysis on your own machine, and figure out: okay, how can I make this better, et cetera. But there's a lot of performance issues, or weird interactions, that only really happen in real applications, because you often don't know exactly what traffic pattern leads to the real issues. One example I remember from when I was building backend services with Ruby is finding some concurrency problems in a web services framework, which was not Rails; I don't remember which one it was, but it wasn't Rails. These issues showed up as failed requests in production, and they usually happened right after we deployed, when we were under heavy traffic. And for some of these issues, it was really, really hard to figure out what was happening, because it really only happened when the application was getting hit with a lot of requests. So part of this challenge of making your app faster is understanding: are the performance issues coming from some specific part of the code that you can isolate, build a tiny benchmark for, and then improve? Or are they coming from something else? And so I think part of the challenge, and the next steps in all of this work, is making something that you can actually turn on in a production service, that obviously does not impact it, so that you can look at a more realistic setting, or, if it's in the production service, a fully realistic setting of what's really happening: why did this request, or this point in time, have performance issues?
Valentino_Stoll:
So I guess that leads me to my ultimate question. When is it useful to use?
Charles Max_Wood:
I was going to ask that. So you get cool data, but yeah, my boss cares about, yeah, resource stuff and costs and yeah.
Ivo_Anjo:
That's a good point. So one of the challenges when you're building performance tools is what people care about in the moment, and how you can meet those requirements. Often, if the application is maybe not as performant as we would wish, it's because bad performance, in a way, is kind of another variant of tech debt. Tech debt doesn't accumulate because we really love having tech debt and running old versions of Rails and Ruby, et cetera. It happens because, well, there's many conflicting priorities in the day-to-day of building a Ruby web service: you need to balance how much time you spend on fixing bugs, how much time on performance, how much time on building new features, et cetera. So one way this work of improving the performance of your application can have an impact is that you're able to figure out sources of inefficiency. And often, when you look at an application with an analysis tool like a profiler, you will find some amount of, let's say, low-hanging fruit, because you never looked at it before, so there's usually obvious things. Oh, actually, I never noticed that I forgot to turn off this Rack middleware that I don't actually use, and it's more expensive than I thought it would be. These kinds of things reduce the resources that you need to run a request. And if you're running on the cloud and you have some kind of auto scaling, then, well, you can scale down and you can save some money. Saving money is really high on a bunch of people's to-do lists nowadays, with the whole economy and whatnot. So improving performance can be a way of reducing costs: you need fewer resources, less CPU, less memory, and so you need fewer boxes to run your application on. But that's just one angle; that's the angle that maybe your manager and your reporting organization would care a lot about.
Valentino_Stoll:
Yeah. And you know, sometimes those memory issues turn into throughput issues, right? As all of a sudden your Ruby application is in the middle of trying to auto scale and it's getting throttled from running out of memory.
Ivo_Anjo:
Exactly. And you can run into that situation where they show up at the worst of times. And so you start running on bigger machines, just in case. And then maybe you forget about the bigger machines, or maybe at some point you're like, why did we upgrade to these big machines? And then you figure it out. These kinds of things tend to accumulate, and they add up. Often you don't address them right away because you may have something better to do, and the cost at that moment is such that you don't care; you can just throw $500 more per month at it. But as your application grows and as your business grows, suddenly throwing $5,000 or $50,000 more per month starts adding up. So maybe in that situation it's worth putting one or two engineers on it for a few days or a couple of weeks, trying to see where the low-hanging fruit is that we can improve on: can we hunt down this issue that is making our application cost us that much more? And it's actually quite interesting, because I work a lot in profiling, and so I talk from the point of view of things you should care about. But if something is not costing you that much, then maybe even if it's stupidly inefficient, you shouldn't care about it. If it doesn't impact your user experience, and if it's not costly, then your time is better spent elsewhere. So it's interesting to think of a profiler tool as also a tool to tell you about things you should not care about, because they're actually not that important.
Valentino_Stoll:
Yeah, you know, it's funny. It reminds me of reporting, where you just need to aggregate a ton of data. And, you know, when you're building these things, they always take a lot of time, and you don't really want to spend the time optimizing, because usually people need it now, right? Especially for longer-term things: people don't need to see a year's worth of data at one time, constantly, right?
Ivo_Anjo:
Yeah.
Valentino_Stoll:
Typically. Maybe some people's use cases are different, but there are so many things like that where, for the most part, whatever you get out the door that makes it useful for somebody is going to work. And you often don't have to... you're right, it's about seeing where the business actually gets the best benefit from performance or not, right?

So I'm curious, because for those that don't know, you work on the Datadog profiler, right,
Ivo_Anjo:
Yeah.
Valentino_Stoll:
for Ruby. And that's one thing I've kind of missed from a lot of dashboards from APM providers: they give you all the data, but there's kind of no starting point. Like, I'm trying to save money on this, or I care about looking at latency today on this request, or this chunk of things.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
I know I've had issues trying to find where to start profiling. So I'm curious, from the professional: I know it depends heavily on what you're trying to do, but let's say you have business goals in mind. Where do you start trying to whittle down where you're going to focus?
Ivo_Anjo:
That's a really good question. So usually the answer is that it depends somewhat on your objective. But, for instance, let's consider that you're trying to optimize for cost. Then usually you want to look at a big time range, such as looking at your application for one day or for an entire week, and you see, over the course of this big period of time, what was the thing I spent the most CPU time on. Because things start adding up, and you can kind of tell: okay, maybe over an entire week I spent 40 hours of my service's CPU time doing this one thing. So for this kind of optimization for cost, you want to see: this thing is the one that's spending a lot of my CPU, or this thing is the one that's using up a lot of memory, because you're looking at it in the aggregate and seeing what the biggest consumers are for my application.

Note that when you're looking at this, you're getting the biggest consumers, but the biggest consumers are not always correlated to the user experience. If you're actually looking at your users' experience, then you might want to be more specific about particular endpoints or particular things. Because let's imagine that you have 10 endpoints on your application that get hit directly by customers and are really important for the customer experience, but then you have an internal admin panel where it's not a problem if you click a button to do some servicing thing and it takes five minutes for the request to actually terminate. So when you actually want to improve user experience, you usually want to filter this data to see: okay, what are the routes, endpoints, et cetera, that are really impactful for my customers? And then, again, what's the resource: CPU, or memory, or even time? It's kind of interesting sometimes to look at how much CPU is being used versus how much time is being used. Because if you want to optimize something for the user experience, you want to optimize the whole end-to-end experience of the customer doing the request with the browser or their app and then receiving the response. It really doesn't matter if you optimize the CPU or the Ruby part of your code a lot when most of the time is being spent on the database. So you actually need to figure out: okay, where is the time being spent? And inside that time: do I need to make my database faster? Do I need to reduce the amount of database requests I'm doing? Do I need to optimize my CPU usage? Or is it none of the above, because what's actually happening is a background thread stealing the GVL from me, and I need to look into that? So there are different ways of looking at it depending on your objective and what you're trying to improve.

And one other way you can look at the data, which is also interesting, is when you're debugging some issue, because the tool shows what your application is doing right now. If your application is breaking right now for some reason, it gives you some visibility of: okay, what's going on, what changed? The application used to spend less than one second doing this request, and now I'm seeing that the request is taking five seconds. What are those five seconds being spent on, and how can I solve an incident on my application, or at least work around it?
Valentino_Stoll:
Yeah, I would say the more difficult traces that I've had to do are worker-related. Right? Where you just have, like, Sidekiq consuming a ton of workers, and you have to decipher: was it the workers conflicting with each other? Was it one worker that's problematic?
Charles Max_Wood:
Mm-hmm.
Valentino_Stoll:
And I'm hopeful that with the new Sidekiq 7 we get some more insight into the metrics there. Ruby 3.2 will provide a lot more insight too. Is there something that you look at for worker-related events that can help make sense of what's happening? Or, outside of just an APM giving metrics,
Ivo_Anjo:
Mm-hmm.
Valentino_Stoll:
and broadcasting them, is there something more specific where you can see what's running, kind of asynchronously? Not necessarily truly asynchronously, right, because of the GVL, if you're not on Ruby 3.2.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
But are there some things that you can do to kind of hijack and sample, like do a live trace on what's running, to get a better sense for where the sources are?
Ivo_Anjo:
In a way, that's what profiling does, because the way the profiler works is that it sits in the background of your application and looks at the backtraces of your application, at what's going on. Every X amount of time, maybe every 10 milliseconds, it's like an annoying person asking: what are you up to, what are you up to? And then it records that information. So yes, exactly. Where before you might have had not a lot of visibility, you might have some metrics telling you: okay, these requests are taking one minute, five minutes, whatnot. And you might even have a distributed tracing solution, a bit like Datadog provides, which tells you: okay, it's here, it's definitely on your background Sidekiq worker. But the profiler goes further, because it can tell you, even sometimes down to the line of code: this is the one line of code that it spent a lot of time on. And that gets you a lot of context to begin fixing the issue. For instance, I recently was investigating an internal issue at Datadog, where we use our own self-deployed version of GitLab, and we were having a performance issue in that Rails application. I was looking at it with the profiler, at a request that timed out after 60 seconds, and trying to figure out: okay, what did it spend 60 seconds on? And because I saw the exact line that was causing it, I went to the code, and I saw the code was parsing tags, trying to create a really big regular expression, and then finding out which ones were prefixes of the others. It was effectively trying to match: okay, if you usually name all of your branches with your username and then a slash and something, it was trying to find out what the branches for your user were. And this was a repository that, for weird reasons, had hundreds of thousands of tags and branches. So it immediately connected.
Charles Max_Wood:
Oh wow.
Ivo_Anjo:
This is what the app is doing, and this is the repository, because I could see: oh, we're actually asking about this repository. So clearly it was a performance issue caused by the repository being hugely crazy on the number of branches and tags it had.
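To make the "annoying person asking what are you up to" idea concrete, here is a toy sampler sketch. This is emphatically not Datadog's implementation; it just shows the shape of the technique:

```ruby
# Toy sampling profiler: a background thread wakes every ~10ms, grabs the
# main thread's backtrace, and counts which source line is on top. Hot
# lines accumulate samples. (On CRuby, the GVL means the sampler may get
# scheduled less often than requested; fine for a toy.)

samples = Hash.new(0)

profiler = Thread.new do
  main = Thread.main
  loop do
    if (frame = main.backtrace&.first)
      samples[frame] += 1 # attribute this sample to the top stack frame
    end
    sleep 0.01 # target ~10ms sampling interval
  end
end

# Hypothetical workload to be profiled:
5_000_000.times { |i| Math.sqrt(i) }

profiler.kill
samples.sort_by { |_, count| -count }.first(5).each do |frame, count|
  puts "#{count} samples at #{frame}"
end
```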
Valentino_Stoll:
So how do you take that a step further and start to aggregate these, right?
Ivo_Anjo:
So this is the kind of thing where, if you don't have this kind of visibility, then you just see: oh, this one request, or this one background job in Sidekiq, was slow. And maybe Ruby is slow.
Valentino_Stoll:
Let's say one of these Sidekiq workers is slow, and it ends up throttling the queue or any other workers on that thread. What do you start to use to step back and see, at a higher level, what is ultimately slowing down the ones that are waiting on that other one to get released?
Ivo_Anjo:
Yeah, so there are a few strategies. I've seen some guides online on good ways of doing this, and I believe it was Nate Berkopec, who does a lot of work in Rails performance, who was recently showing one approach to this, where he would have queues in Sidekiq named for the expected times that things should take: up to 20 seconds, or up to one minute, or up to five minutes. I would say that's a really good solution, because you can name your queues in that approach and then use metrics to validate: are things going well or not? And ideally, if it's a one-off, maybe you don't care that much; again, cost versus benefit and everything. But if it's starting to happen consistently, then you can use this tool to identify what kinds of jobs are getting slowed down, and why they are slowed down. And then you can decide: okay, maybe I'll move them to a separate side queue, so they don't impact the smaller jobs that just queue up behind that slow one. Otherwise, after the slow job executes, the small ones run really fast, but they've potentially already impacted a lot of the user experience. So that's the kind of approach you can use: use the tool to identify the bad cases, and then either fix them, move them aside, delete them (maybe they're not that important), or simplify them. A bit of a challenge with these kinds of tools is that, in a way, they tell you this is what's wrong, but then you as a developer actually need to pick up from there. I'm still hoping for the AI where the next step is automatic. So maybe it tells you: this is where it's wrong, and here's a pull request that fixes it. I don't think we're there yet, but we're getting close on some things.
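A sketch of that queue-naming idea (hypothetical job classes and queue names, not Nate Berkopec's exact setup; Sidekiq::Job is the modern include, Sidekiq::Worker on older versions):

```ruby
require "sidekiq"

# Queues named for how long a job may reasonably wait. Run them with
# dedicated processes so slow jobs can't starve fast ones, e.g.:
#   sidekiq -q within_30_seconds
#   sidekiq -q within_5_minutes

class ThumbnailJob
  include Sidekiq::Job
  sidekiq_options queue: :within_30_seconds # fast, user-facing work

  def perform(image_id)
    # resize the image...
  end
end

class NightlyReportJob
  include Sidekiq::Job
  sidekiq_options queue: :within_5_minutes # slow batch work

  def perform
    # crunch a big report...
  end
end
```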
Charles Max_Wood:
So the question I have then is, I mean, I like the idea of banishing a whole queue of stuff to outer darkness and it can just get done when it gets done. But yeah, how do I use the tool, right? How do I plug this in and go, okay, you know, I'm picking up what I need to pick up so that I can know what to move or banish or rewrite or whatever.
Ivo_Anjo:
So as we were saying earlier, I work on Datadog's profiler. And specifically we call
Charles Max_Wood:
Mm-hmm.
Ivo_Anjo:
it the Datadog Continuous Profiler. And one of the key things about it is that it's built to be deployed in production and be always on. So the whole idea is solving this problem of: how do you gather this information, and how do you then collect it, compare it, and go over it? The way it works right now is that we have a gem, the ddtrace gem. This gem is actually open source, which is kind of interesting: you can see every pull request and issue and line of code that I wrote ever since I joined Datadog around two years ago. And you can comment on it; please be nice. That gem will run inside your Rails app or your Sidekiq app, collect information, and send it to Datadog. And then you can go on the Datadog UX and zoom in: oh, I just want to see my Sidekiq workers, I want to see my Rails app, I want to see them all together. You can zoom in and see which parts you're interested in, and then you use that to identify which bits you should care about and which bits you don't need to care about.
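For concreteness, wiring that up looks roughly like this, a sketch based on Datadog's public setup docs for the ddtrace gem around this time (names and flags may have changed; treat the specifics as assumptions):

```ruby
# Gemfile: gem "ddtrace"
#
# Launch with the profiler loaded, per Datadog's docs, e.g.:
#   DD_PROFILING_ENABLED=true bundle exec ddtracerb exec rails server
#
# Or enable it programmatically in an initializer:
Datadog.configure do |c|
  c.service = "my-rails-app"  # hypothetical service name shown in the UI
  c.profiling.enabled = true  # turn on the continuous profiler
end
```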
Charles Max_Wood:
Cool.
Valentino_Stoll:
Yeah, we've been talking a lot about kind of throughput and CPU targeting, profiling.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
And that's only one side of the puzzle, right? And costs are often, not always, but often associated with memory-related performance issues. I mean, I remember the days in Ruby 1.8.7 where you'd be running a Rails server and all of a sudden the memory would start spiking, and you'd have to restart the Ruby process, and then suddenly everything would be working again. It was just a common thing; there were countless times where you'd just have a cron job, you know, to restart Rails every so often.
Ivo_Anjo:
Yeah, and you had the whole... for instance, I used Heroku for a long time, and you had the whole "dynos get auto-restarted after one day". So the jury is still out on how many bugs that covered, or, in a way, how many applications it saved, because if your memory problem was slow enough, then by restarting your application every day you never noticed it. So yes, memory actually can often come into play and can be a big thing. Obviously, Ruby's garbage collection and whatnot is not particularly advanced, but it's pretty good; it's much better than it was in the 1.8 or 1.9 times. And there has been incremental work: first the concurrent work, then generations, then the compaction work that Aaron Patterson is doing. Oh, and nowadays a bunch of excellent engineers at Shopify are working on something they call variable width allocation, which is about making memory in Ruby more compact as well, in some ways. So it all keeps getting better. But Ruby applications are not well known for running with, like, 60 gigabytes of RAM or something, because Ruby doesn't really scale that far. As you start using more and more memory, your application starts getting slower and slower. That can be because your application actually needs that much memory, although most Ruby apps don't. Or, usually, what happens is that you have a bug, a memory leak, and it grows and grows, and at some point you hit the limit on your container or Linux box, and then your application gets killed. So I recently presented at RubyKaigi on this exact subject: me and another engineer from Zendesk called KJ built a gem that provides a heap profiler, which is a tool to understand what memory is being used in Ruby. Where is your memory going? What's spending memory? And historically, I would say this part in particular is one of the things Ruby has missed a lot compared to other ecosystems such as Java: Ruby hasn't had a lot of tooling like this to look at what your application is spending memory on.
Valentino_Stoll:
How was RubyKaigi?
Ivo_Anjo:
It was really good, actually. I was really, really looking forward to it, and to meeting some of the core team: saying hi to Matz, and meeting Koichi and Mame and a bunch of other contributors, and also core team members such as Samuel Williams, who's building async, and having really nice conversations. So, to be honest, I'm already a big Ruby fan and I'm already quite excited by the goings-on in the Ruby community nowadays,
Valentino_Stoll:
Yeah, that's awesome.
Ivo_Anjo:
and I got out of the conference really fired up and wanting to do more and really excited about the future of Ruby.
Valentino_Stoll:
I've only been to RubyKaigi once, 2015, and that's where they introduced the Ruby 3x3.
Ivo_Anjo:
Mm-hmm.
Valentino_Stoll:
It's just an incredible way to get in touch with the core of Ruby, right?
Ivo_Anjo:
Yeah.
Valentino_Stoll:
It really is wild. Everything comes out of that, right? And then you start to see more and more of the effects of that, right? So I really enjoyed your talk.
It was, for those that don't know, hunting production memory leaks with heap sampling. So what is heap sampling?
Ivo_Anjo:
So yes, that's a good point. The big challenge is, usually when you have a memory leak in your Ruby application, or when you suspect you have one, the question becomes: what do you do? How do you find it, and how do you fix it? There are a few gems that already do this and help you out. The problem is that those gems do things that are quite expensive in terms of CPU and memory, like tracking every object allocation and things like that. Those things work really well if you can make the issue happen on your own developer laptop or some staging environment; that's great, and you don't need any of the things I'm about to say. It's a bit like performance: if you can reproduce it in a small environment, if you can see the same thing on your laptop that you see in production, then don't bother with any of the fancy tools. Just use whatever is most helpful for you and whatever comes easier. So the problem becomes: okay, you either get this impact on performance, or, for instance, there's an API in Ruby where you can get the entire contents of your heap memory as a file. But the problem is, again, if you're dumping the entire contents of your memory, you need to be careful about doing it in production. One, it pauses the Ruby process while it's doing that work, which might not be great. And two, what's in that memory? There can be secrets, like your database password; there can be user data if your application is serving requests, like privacy-sensitive data. You need to be really careful about dealing with these things. And so a different approach, the one we proposed at RubyKaigi, and that, for instance, Go provides something like, is the approach of sampling. Instead of trying to track every object, or giving you a file with the entire contents of your Ruby memory, what if you track one in every thousand objects? You don't need to track them all, you just track them from time to time. This doesn't give you a hundred percent accurate view, but if your application is actually leaking objects, then even if you only look at every hundredth object, if you're getting more and more and more objects, it will be obvious. After one hour of gathering this information, or maybe after five minutes if you gather it on ten or twenty machines, you will get enough data to be able to tell: oh, actually, I have a lot of these objects living in my memory. And this is how you start looking at it. The context is that this work actually started because KJ, who works at Zendesk, was wanting to investigate some issues that, again, he saw in production and could not figure out how to reproduce on his own laptop. He missed this kind of tool, so he started working on it, and we started collaborating on it, since we're both really interested in this area: trying to build this tool that just looks at your objects every N allocations. So it's a bit similar to the approach I was talking about earlier, where for CPU or wall-clock information we look at the application from time to time and ask it: what's up? The idea here is that you look at objects from time to time and figure out which ones are being kept alive, and you can get some information out of them. For instance, you can get where the objects came from, and then you can build a visualization that tells you: oh, actually, you have 5,000 objects, or one gigabyte of objects, coming from this one method, and they never seem to get cleaned up. So this is how you build a tool that can help you figure out memory leaks without the whole "I can only get this information if I can figure out how to reproduce it locally"; you can get it out of a production box.
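For contrast, these are the standard-library building blocks behind the "expensive" approaches described here. They're fine on a laptop or staging, but heavy (and potentially sensitive) in production, which is exactly what the sampling approach avoids:

```ruby
require "objspace"

# 1) Track every allocation with file/line attribution (high overhead):
ObjectSpace.trace_object_allocations_start
suspects = 10_000.times.map { "some string" } # hypothetical suspect code
ObjectSpace.trace_object_allocations_stop

puts ObjectSpace.allocation_sourcefile(suspects.first) # => this file
puts ObjectSpace.allocation_sourceline(suspects.first)

# 2) Dump the entire heap as JSON lines for offline analysis. Note this
# pauses the VM, and the dump can contain secrets or user data.
File.open("/tmp/heap.json", "w") do |f|
  ObjectSpace.dump_all(output: f)
end
```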
Valentino_Stoll:
Does the gem provide a way to broadcast, or hook into OpenTracing or something like that, for dumping the profiling data? How does that work?
Ivo_Anjo:
Okay, so, not yet. Actually, there's been some movement on, not OpenTracing per se, but OpenTelemetry; I believe a lot of people from the OpenTracing community ended up moving to OpenTelemetry. And the OpenTelemetry community is actually working on supporting something like profiling, so that you have somewhere to send this information and then look at it. But the answer is: not yet. Right now this sampling gem for heap profiling just gives you the files, and you need to figure it out locally. And, when I put my Datadog hat on, this is a bit of the challenge we're trying to solve at Datadog: gathering the data is just one part of the problem; the other part is sending it somewhere and then presenting it, et cetera. I started collaborating with KJ on this because we're also interested in building this at Datadog, and we hopefully want to have it at some point in the future, but not yet. KJ kind of got ahead of me, but we were both looking at it and exchanging a lot of code and ideas and bugs that we ran into. So it was a really interesting collaboration that we still plan to keep going.
Valentino_Stoll:
I mean, that would be so great to have available.
Charles Max_Wood:
Cool.
Ivo_Anjo:
Yes. And definitely, this is one thing where, in the past, I've built Ruby applications using JRuby, which runs Ruby on the Java Virtual Machine. And I remember we ran into some performance issues and some memory leaks, and it was amazing to solve them with the Java tooling. So it kind of always stuck in my mind. And then at some point I had the opportunity to build it, to get paid to build it, and I was like: yes, I'm so excited, I want to get paid to build this thing, because I've been dreaming about it for a bunch of years, ever since I did it with JRuby and felt like, oh, I want this on regular Ruby as well.
Valentino_Stoll:
Yeah, I mean, I remember seeing, what was it? Bloomberg has an open source Python memory profiler
Ivo_Anjo:
Eh?
Valentino_Stoll:
where you can just kind of, you know, live trace a running process for memory profiling purposes.
Ivo_Anjo:
Yeah.
Valentino_Stoll:
And it would be, I mean, it would be so great to have that in Ruby. And it is an area where it's definitely falling a little bit behind, right?
Ivo_Anjo:
Yeah.
Valentino_Stoll:
But I mean, like you said, it's being worked on thanks to you.
Ivo_Anjo:
Yes. And it's actually a very interesting thing to do, because in a way you need to access a lot of the VM internals, or a lot of information from the VM, especially if you want to make it fast. So part of the talk we gave at RubyKaigi was: we started with this approach of heap sampling, which is the basic idea of, don't look at all the objects or the entire heap, just look at some. But even just looking at some was too slow. So part of the talk was: okay, what did we do then? We found this approach, and at some point we were kind of copy-pasting Ruby VM code into our own gem and then modifying it and doing really weird things. And it actually works. Part of what I'm also looking forward to is trying to get some of the changes, and some of the interesting things we needed, upstream, so that Ruby becomes a better platform for providing this kind of information, because Ruby knows a lot more about things than it gives you. And the other big thing we were interested in about talking at RubyKaigi was putting this in front of the Ruby core team and asking them: what do you think about this? Does this look good? Could we maybe get some of this inside the VM? Again, going back to the example of Java and Go, both of those VMs actually include built-in support for profiling, which allows them to be really cheap about doing it, because the VM knows a lot more than you could get from the outside. So one potential avenue for this is getting some things inside of Ruby. But even if we can't get those things inside of Ruby, or for older releases of Ruby, if for some reason you're on older versions of Rails and Ruby, we still have some aces up our sleeves and some ideas on how to make it faster. And the objective is that this heap profiling is something you'll be able to just throw inside your application and not care about, and once you run into an issue, you can look at the data. Or you can even use it as kind of an early alarm: this tool could automatically detect a memory leak, so you could maybe get an alarm saying, oh, you just deployed a new version of your application, and there's a new memory leak that wasn't there before. And that's really the dream for such a tool.
Charles Max_Wood:
Nice.
Valentino_Stoll:
Yeah, I'm waiting for the day where there's a visual dashboard where I can just see everything in a running Ruby process and dig in deeper, just clicking through: what is consuming objects, sort everything, filter it, and get a little deeper look at what's happening. There are tools to do it, but it's a little bit hodgepodgey; nothing unified, right?
Ivo_Anjo:
Exactly. And that's one of the... Actually, before working at Datadog, I was working on a profiler from a competitor. And that profiler didn't support correlating information across different things. One of the things I really like about the Datadog profiler is that you can look at a profile just for a specific request, say one that originated in a mobile app: I can look at the part that was slow, and then look at the profile for just that one part. Again, it's this idea of having a single place where I can find this information and slice and dice it. Although I still very much believe that the most interesting step will be the step after that, because right now we have a lot of data, and we show the data to you, and you as the app owner need to figure out what to do with it. You need to think: okay, I'll investigate something, and then you investigate, and maybe you use that to improve your application. But I still think the really cool next step would be when you start getting kind of a push model, where the profiler itself will tell you: your application just started leaking memory at this version, and maybe here's where it's coming from, and maybe here's how to fix it. Or: you just made a change and added this thing, and actually this thing is impacting your performance, so here's how to properly configure it, something like that. And I think there's a path from what tools currently have to something like this in the future. For instance, there are already tools that detect N+1 queries to the database. So once you start getting this kind
Charles Max_Wood:
Mm-hmm.
Ivo_Anjo:
of data together in one place, it will be really cool, because you will be told when your application is doing something wrong, rather than you needing to notice that your application is doing something wrong and then figure out what it is.

Valentino_Stoll:
Yeah, that would be super cool. I mean, it also makes me think: you know, I would love to see how churn affects the performance of the application, right? Like, if you change a specific file too many times: hey, this is starting to become sensitive, maybe this thing needs to be refactored, right? You could get alerting like that over time. It'd be interesting.
Ivo_Anjo:
Yeah, and actually Datadog is also building some of that. We have, unfortunately not yet for Ruby (sad smile, and hints to my colleagues if anyone hears this), we're working on integrating with VS Code, so that you can be looking at your file and have something telling you: actually, this thing is pretty expensive. So,
Valentino_Stoll:
That's awesome.
Ivo_Anjo:
it's, as you're editing, you're kind of being told right there, rather
Charles Max_Wood:
That'd be so nice.
Ivo_Anjo:
than needing to look somewhere else to find out that this is the expensive part. So not yet for Ruby, but I will keep annoying everyone until we get it for Ruby as well.
Valentino_Stoll:
I will be haunting your Twitter feed forever until I see that.
Ivo_Anjo:
Yes, and I want it for myself as well. That's part of the fun of building developer tooling: obviously there are many kinds of customers, but it's much easier to put myself in the shoes of our customers, versus when you're building things that maybe you haven't used or don't get a lot of use out of. So definitely, with this thing it's like: no, I want it too.
Charles Max_Wood:
Yep. All right, well, we're kind of getting toward the end of our time. If people want to connect with you, Ivo, how do they find you on the internet?
Ivo_Anjo:
At the risk of sounding really passé and outdated, you can find me on Twitter: I'm at knux, so that's K-N-U-X. And I also have a blog at ivoanjo.me, that's I-V-O-A-N-J-O dot me. I sometimes also send my blog posts as a newsletter, if you prefer that. So definitely reach out, and let's talk about performance and Ruby things.
Charles Max_Wood:
Cool. All right. Well, let's do our picks. Valentino, do you have some picks?
Valentino_Stoll:
Sure, so I've been participating in the First Ruby Friend program. Andy Croll graciously started this lovely program for new Ruby people coming into the community to get paired up with somebody, and it's been really great so far. I've met somebody that's new to the community, new to programming, trying to switch industries, and I'm trying to just help them navigate and meet new people in the community. It's been a great experience for me, and I hope that they also are having a good experience. If you're interested, I recommend checking out First Ruby Friend and either helping or joining. And the last thing I have here: I found this thing called the Unicorn Board, and it's a little programmable board that comes with a battery attachment and a speaker, and you can just plug it in and program it to do whatever you want. So I just ordered that and I'm looking forward to playing with it.
Charles Max_Wood:
That sounds cool. I'm gonna go next. So I usually do a board game pick. I'm trying to remember what I picked last week... I think it was Dice Miner. So I'm gonna pick a different game this week. It is Tenpenny Parks. And what it is, is... Ivo's nodding.
Ivo_Anjo:
No,
Charles Max_Wood:
Have you played
Ivo_Anjo:
but
Charles Max_Wood:
it?
Ivo_Anjo:
it sounds interesting.
Charles Max_Wood:
Okay, so like I said, it's a board game. I pick board games whenever I have a new one that I wanna tell people about, and I've got a handful to go through for the next few weeks. So Tenpenny Parks: effectively, everybody's building their own carnival, their own park. It's a worker placement game, so you place your worker and then you do the thing you placed it on. So you can buy attractions or rides, you can put in concessions, you can clear trees off of your park area, because the different attractions are different shapes and you have to be able to fit them on your board. And what you're trying to do is get the most, they call them visiting persons. If you're a board game person like I am, you'll pick up on that. And so, yeah, you're trying to get the most visiting persons, or visiting people, to come through your park. And there's an emotion track that gives you bonuses at the end of every round. You play five rounds. I think every time I've played it with more than two people, it took like an hour, maybe. Board Game Geek has it weighted at 2.17, which is, you know, complicated enough to be fun, and it has enough ways to win to be fun, but it's not so overcomplicated that you're going to spend forever figuring out how to play it. It's not so involved that it's hard for a casual gamer to pick it up. So anyway, I really enjoyed it, so I'm going to pick that. I think we're kind of to that place... We're starting our book club in December. We're gonna be reading Uncle Bob Martin's book, Clean Architecture, and there's a non-zero chance that he will show up to some of the book club calls. And so yeah, we're gonna do the calls; I scheduled them for Wednesday afternoons. But yeah, you just show up on Zoom, and we're just gonna have kind of a table discussion type of thing, right? So you'll raise your hand, we'll unmute you, and you can go for it. I want it to be more of a discussion. I don't want it to be... Some of it will probably be a Q&A if you have a question for Bob, but yeah. And then I'm looking at books to do after Clean Architecture. We're probably going to take eight weeks to do Clean Architecture, so that'll be December, January. I'm kind of tempted to do Seven Languages in Seven Weeks after that, but yeah, we'll see. So anyway, I just wanted to let people know about that. There was something else I was going to pick and I don't remember. Um, I've been listening to this book series, I was going to pick this too. And again, I'm on a bunch of shows, so I don't remember if I picked it last week or not. If I did, I'm sorry. The first book is called, um, Keepers of the Lost Cities, or Keeper of the Lost Cities. The main protagonist is a 12-year-old girl who finds out that she's... different. Well, she knows she's different, but she finds out why she's different. It's kind of a fantasy where they come into our world sometimes, the human world, but most of it takes place in this kind of alternative world, area, whatever, that was created by the author, kind of like Harry Potter. And it's been fun. I'm on the fourth book, and yeah, I've been enjoying it. It's not the kind of thing you're looking for if you just want to fall into a deep read; it's a pretty light series, but it's fun, and it's geared toward teens and pre-teens. So my kids all love them. And I've been enjoying them enough to listen through them and not give up on them because I'm bored or anything like that.
So I'm going to pick those, because I think they're a good read, and if you have kids, they're definitely something you can read with your kids. I think that's all the picks I have for now. I guess one other thing I just want to put out there: here in the US, we just barely went through an election day. And you know, everybody has feelings, right? You wish these people had won, these people had lost. Maybe you expected a different outcome than what's out there. But no matter where you come down on this stuff, just keep in mind that everybody kind of comes from a different place, and we're all trying to do our best. None of it is worth demonizing people or fighting over. I mean, I feel like some of these issues are worth discussing, and some of these things are definitely worth fighting for, but not at the expense of dehumanizing another person, right? They may just not get it, but that's different from them not being human enough to deserve some level of respect and dignity. So I'm just gonna put that out there. Most of the people I talk to, like 90% of the people I talk to, kind of intuitively know this, but then you get the other 10% that are the loud a-holes that want to go and beat people down in any way they can. At least on the internet it seems that way. Just take a minute, see if you can come to understand each other, even if you don't agree with each other. Anyway, that's what I got there. Ivo, what are your picks?
Ivo_Anjo:
Yeah, plus one on what you were just saying. It's kind of hard to follow that, so I'll veer away to a lighter subject, but plus one on that. I have a few picks. So recently, Linus Torvalds, the creator of Linux, did this interview, and when asked about how Git got popular, he had an interesting quote where he said, oh, at some point the Ruby people, strange people, they picked it up and ran with it, or something like that. This quote has been going around on Twitter this week and I found it really funny. Because, yes, I like this characterization: the Ruby people, strange people. I think it fits, and it's why I like being in this community. Another pick I have is actually the other RubyKaigi talks. This year's RubyKaigi talks have been uploaded to YouTube over the past two weeks, and there are really interesting ones. For instance, there's this talk on real-world applications of the Ruby Fiber Scheduler, where the speaker talks about how he basically started building what is now Falcon and the async gem and all these amazing things for Ruby, because he wanted to build a DNS server in Ruby. And he kind of just kept building and building, and this is how it ended up. So, hopefully more people need to build DNS servers in Ruby. Oh, and some of the RubyKaigi talks on YouTube are in Japanese, but they've also uploaded closed captions, so you can still watch them. Another talk which was quite fun is the TRICK 2022 one, which is in Japanese. TRICK is a contest of building tiny Ruby programs that do really, really surprising things. So you will see the most mind-bending Ruby code that you've ever seen. For instance, a bit
Charles Max_Wood:
Ha ha ha
Ivo_Anjo:
of source code that is shaped like an aquarium, and when you run it, it actually shows you the aquarium with the fish moving. And at any point in time, you can Ctrl-C and stop it, copy-paste how the aquarium looks right now, and if you paste that and run it again, the aquarium picks up from there. So the aquarium is the code itself, and it always shows the code for the frame that you're watching. I can't even begin to think about how one builds something like that. And one final talk that I also enjoyed was a talk called Mega Ruby, where the presenter was running Ruby on a Sega Mega Drive, or Sega Genesis in the US. And he actually did the presentation on an actual Sega Mega Drive that he had up there on the podium. So he built a tiny presentation app on it using Ruby and then gave the presentation with it. And that was amazing. This is why I really enjoy RubyKaigi: different Ruby conferences have different vibes, and RubyKaigi always has this very playful, let's-do-something-weird vibe, and I love it. And my final pick is a book called The Culture Map by Erin Meyer, which I'm reading right now. It's actually quite interesting, because the author has done a lot of research and came up with a bunch of categories that try to characterize how different cultures in the world operate, on average, when doing work. So the categorization is something like: cultures that prefer low context versus high context. When you're speaking about some subject, do you try to explain all of the context, or do you kind of just assume the other person has a lot of context and jump right into it? Or things such as direct versus indirect negative feedback: do people prefer getting direct negative feedback, or do they prefer hints? Or principles first versus applications first, where you either talk about why we're doing this and the general approach and then do the specific thing, or you do the reverse. I'm kind of simplifying and losing some nuance, but the book explores a lot about how this plays out in the business world. With the experiences I've had in the past working at very global companies, in a way this book helps you think it through. You were just speaking about how different people come from different places, and that's exactly it: when working at companies and when talking to customers, everyone comes from a different place, and culture influences a lot. And so this book is not trying
to predict what any one specific person will do, but to give you structure for thinking about the different ways that people collaborate and work together. So I am really enjoying this book.
Charles Max_Wood:
Very cool. That sounds really interesting. I love diving into that and just, you know, yeah. Why do people do the things they do? Why do they think the way they think? And, you know, how do we, how do we turn that into something
Ivo_Anjo:
Exactly.
Charles Max_Wood:
positive?
Ivo_Anjo:
And everyone has a different
Charles Max_Wood:
So.
Ivo_Anjo:
normal, right? There is no normal. Everyone thinks that
Charles Max_Wood:
Yep.
Ivo_Anjo:
their normal is everyone's normal, and then you travel to somewhere else in the world. You travel to Japan, and their normal is not our normal. So yeah, this book is about that.
Charles Max_Wood:
Awesome. All right, well, thanks for coming, Ivo. This was really cool. I think my brain's still melting, so we'll go ahead and wrap
up here, and until next time, folks, Max out.