CHARLES: Hey, welcome back to another episode of the Ruby Rogues Podcast. This week on our panel, we have Valentino Stoll.
VALENTINO: How you doing?
CHARLES: Charles Max Wood from Top End Devs. We've got a special guest and that is Samuel Williams. Samuel, do you want to introduce yourself? Let people know who you are, why we all love you.
SAMUEL: Thank you. Yeah. My name's Samuel. I live in New Zealand and my cat's right here with me. It's like a morning routine. And I'm having a coffee, which is awesome.
CHARLES: What time is it in New Zealand?
SAMUEL: Yes, it's 8 a.m. A little past 8 a.m. actually. Close enough.
CHARLES: I was imagining it was like 4 a.m. or something. So I don't feel guilty anymore.
SAMUEL: I have had meetings at like 4 a.m. when someone's like, let me schedule a meeting. And I'm like, okay.
CHARLES: Yeah. I'm on mountain time and it's, it's just past noon here. So I was like, okay. That's not bad.
SAMUEL: Yeah. So my background is in software engineering. I did a bit of computer vision growing up, in university. I did an MSc in outdoor augmented reality, and I've always been passionate about solving problems. I've never really been satisfied with just the status quo or "this is good enough." And maybe that's informed the way that I've looked at the software that we have today and thought about what we could do better.
CHARLES: Right.
Hey folks, this is Charles Max Wood. I've been talking to a whole bunch of people that want to update their resume and find a better job. And I figure, well, why not just share my resume? So if you go to topendevs.com slash resume, enter your name and email address, then you'll get a copy of the resume that I use, that I've used through freelancing, through most of my career, as I've kind of refined it and tweaked it to get me the jobs that I want. Like I said, topendevs.com slash resume will get you that. And you can just kind of use the formatting. It comes in Word and Pages formats, and you can just fill it in from there.
CHARLES: Cool. Yeah. And I interviewed you for the Ruby Dev Summit. People can go check that out, rubydevsummit.com. But yeah, we talked about Async and Falcon, and I thought, you know what, there's some great stuff here that we should just talk about on RubyRogues, and then, you know, we kind of got into future of Ruby and some of the things involved there as part of the Summit. So yeah, do you want to just kind of give us the 10,000-foot view on Async, and then maybe we can get into what Falcon does in a minute?
SAMUEL: So Async is an interface which provides event-driven concurrency to Ruby, and it's always been designed to give engineers the best possible experience with concurrency. And so that mental model informed a lot of the design choices. Async provides task-based concurrency. It doesn't require any special keywords for asynchronous behavior, like, say, in JavaScript where you require async/await. And it tries to be as efficient as possible, but it also sometimes chooses developer happiness over efficiency, to make things easy. Because asynchronous programming can be pretty tricky, and giving people good logging, good feedback when things go wrong, is really important. I don't think we've solved all of those problems, but that's the mental model of Async. And so Async provides this event-driven concurrency, and on top of that you can build other things. And the first major proof of concept that I was really fascinated by was building a web server. Because why not? And that's where Falcon came from: try it out and see what we can do with Async and what the performance is like. Then we can actually measure things and do some interesting real-world comparisons.
CHARLES: So I have a question here that I'm kind of thinking about. You said event-driven, and that just made my mind think about, well, one, arguments that I've had with people over Node.js. Because they're like, yeah, we got events. Yeah, we got Ruby.
CHARLES: Anyway, but the other question I have is, way back in the day, we had Event Machine. And a couple of weeks ago, I interviewed Marc-André Cournoyer, who built Thin. And a lot of that was kind of built on some of the stuff with Event Machine. So what's different between, say, Async and Event Machine, and then maybe Thin and Falcon?
SAMUEL: Yeah, that's actually a great question. As an aside, I actually maintain Thin now. Thin was sort of languishing a bit, and I got involved in maintaining it, and I learned quite a lot about it as well. So I really like exploring the ecosystem and seeing the different kinds of implementations. So to your point, Event Machine versus Async: the modern version of Async actually has a gem behind it called io-event, and that's the thing that provides the actual event loop. But surprisingly, or maybe not surprisingly, Ruby actually has quite a rich history of event-driven architectures. It's not just Event Machine and Async. There's also Celluloid, EM-Synchrony, NeverBlock, which is one that many people haven't heard of but was an interesting experiment.
CHARLES: I haven't heard some of those names for so long.
SAMUEL: Yeah. And there's a handful of other ones. If I go digging in my notes, I'll be able to pull them out. But I suppose my point is, there have been a lot of keen minds in the Ruby community experimenting with this, and it's interesting to compare the different approaches. I wanted to call out that there is actually a rich history of people experimenting and trying to build things. So specifically with, say, Event Machine and Async: I think one of the challenges with Event Machine is that it was written from the ground up without really thinking about compatibility with existing Ruby interfaces. That would be the biggest criticism I could make of Event Machine. When you build an application that uses Event Machine, you can't use normal Ruby IO. You have to use Event Machine's custom wrapper classes to deal with event-driven IO. I think for TCP, you get a class instance which has methods that get called back when data is available, and for UDP it's similar but a little different. And things like sleep or timers, all of these are Event Machine-specific APIs. To compare that with Async, well, with Async 1.x we were kind of in the same boat: we made wrappers for everything. So there's async-io, which wraps Ruby's native IO. Async itself provides wrappers for things like sleeping. But there are other things too, like DNS resolution, and async-process for waiting on child processes. There are all these different things that can cause an application to block that we want to lift into the event loop. So when I built Async 1.x, that was the prototype, and I went to Matz and said: look, Matz, this works. People have done this before.
To take it to the next level, what we needed was actually hooks inside the Ruby virtual machine, so that when Ruby itself tries to do a blocking operation, we can hoist that into the event loop. And it has to happen at the core of Ruby. It can't be a wrapper that sits on the outside, because the problem with a wrapper is that as soon as you start using existing code that uses native Ruby IO, that code doesn't use the wrappers, and so it will block. It's a compatibility thing, and Event Machine suffers from that. While Event Machine itself is generally quite okay, I would like to make an assertion: I think most event loops are roughly the same performance as every other event loop. Unless you massively mess up the implementation, event loops are largely on the same level of performance. There are very few things you can do to make an event loop ten times faster. It's all going through the same operating system interfaces, roughly speaking, and there are a few choices you get to make in how you organize the user-facing parts. But there was one other issue that I had with Event Machine when I actually tried to use it: it would crash a lot, and it didn't support IPv6, or IPv6 would cause it to crash, or something in that fallback path. And I think one of the problems with a lot of the existing projects, like NeverBlock and Celluloid and Event Machine, was that the maintainers tried to bite off too much of the problem in one go. That made it hard to maintain, or even really specify, the problem these libraries were trying to solve; the problem got too big. Event Machine was trying to reinvent every Unix interface under the sun, because it was trying to make them all non-blocking, and it's just really hard for one person to do that.
SAMUEL: So another point of comparison: with Async, especially Async 1.0, I was really, really specific about what I wanted to achieve and what I wanted the interface to look like. And I got from the Async prototype to the Async 1.0 release in like three months. One of the most important things for users is to have a solid foundation where you can say: hey, this is it, this is what it's going to look like for the rest of eternity. You can build on top of this without coming back in six months and finding everything changed, or not working, or so complicated that you don't know how it's supposed to work. There are a lot of criticisms I could dish out, I suppose. It's not so much that the software engineers working on these problems were any better or worse; it's just that the problem can become extremely complex if you don't compartmentalize. So the second part of your question was: how do Thin and Falcon differ? When you look at a server like Thin that's event-driven, I guess in some ways you could say Thin was pioneering. So in the Ruby world, when you have a web server, we have an intermediate layer called Rack. It's a gem which provides the bridge between a web server and a web application, and it provides a common interface based on the CGI specification. Some sort of hybridized version of CGI matched up with a bunch of modern stuff.
CHARLES: Um, yeah, Marc-André and I talked a bunch about that in the interview we did with him.
SAMUEL: Oh, fantastic.
SAMUEL: Um, and so Thin kind of pioneered what the interface for concurrency might look like to a web application. And this feeds into the whole Rack 3 upgrade that we did over the past three or four years. The CGI model that we used for a long time was just request-response: here's a request that comes in, here are some headers, a payload, whatever, and then we're going to make a response and send that back out over the wire. And that doesn't really work for things like WebSockets, or if you have 300 gigabytes of CSV you want to stream down the wire, because you can't really reasonably buffer 300 gigabytes of CSV. Rack 2 did not provide any real mechanism for the kind of concurrency that Thin was trying to implement, and I think Thin pioneered what some of those things should look like. I don't want to say we took them wholesale, but we took the general concepts, plus my own feelings on streaming, and we pushed them into Rack 3. Rack 3 has a proper, standardized model for streaming requests and responses. That means when a request comes in, you can incrementally read what the client is supplying, and you can incrementally write the response back out to the client. And this works for things like WebSockets. Puma on Rack 3 supports WebSockets out of the box on a request thread. This happened at the same time that I built Falcon, and Falcon informed what Rack's interfaces should look like. So Thin pioneered the nature of some of those interfaces, and then when I was building Falcon, I was asking: how do we actually do this in a way that can work across all web servers?
And so we kind of extracted that knowledge and standardized it in Rack. That took a couple of years, actually; I'm not even joking, it was literally a couple of years to figure out what it was supposed to look like and get a release out. So Falcon basically evolved some of the conceptual things that Thin, as an event-driven web server, does. And the one other point, going back to compatibility: when you run an application on Thin and it uses, say, native IO, that will block Thin. So there are operations you can do in Thin that will be event-driven, and operations that won't be. Things like EM-Synchrony, which try to provide transparent wrappers, may help somewhat, but it's still not guaranteed that you're going to have a really good experience. With Async 2 and Falcon, because you have that interface right at the heart of Ruby intercepting all those blocking operations, like waiting on a child process, DNS resolution, reading and writing from sockets, sleeping and timers, when you run an application on Falcon, it becomes transparently concurrent to the maximum extent possible on that particular implementation of Ruby. So it's kind of hard to say what the difference is, but in a nutshell it would be compatibility and the general approach to concurrency and the way it's exposed.
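[Editor's note: to make the Rack 3 streaming model concrete: in Rack 3, a response body may be a callable that receives the stream, instead of the classic #each body. A hedged sketch, with the web server's side simulated here using StringIO; a real server such as Falcon would pass the actual client stream.]

```ruby
require 'stringio'

# A Rack 3 app whose body is a callable: the server invokes
# body.call(stream), and the app writes its response incrementally
# instead of buffering the whole thing.
app = lambda do |env|
  body = proc do |stream|
    3.times { |i| stream.write("chunk #{i}\n") }  # sent as it's written
  ensure
    stream.close
  end

  [200, { "content-type" => "text/plain" }, body]
end

# Simulate what the web server does with the streaming body:
status, headers, body = app.call({})
stream = StringIO.new
body.call(stream)
puts stream.string  # the three chunks, streamed rather than buffered
```

This is the shape that makes incremental responses (and, ultimately, WebSocket-style bidirectional communication) expressible in plain Rack.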
CHARLES: Cool, very cool. I see Valentino thinking. You have a question you want to ask?
VALENTINO: Yeah, I did. So I'm curious about, like, the io-event gem. But first of all, I love the Async logo. I think that's hilarious.
SAMUEL: Thank you.
VALENTINO: For those that don't know, it's the little sink with a Ruby coming out of the faucet. Yeah, yeah, yeah. That's great. It's hilarious, isn't it? But I guess I'm trying to understand, because I know the fiber scheduler that you've made, right, ultimately that's made it into, I don't know what version of Ruby, 3.1?
SAMUEL: Yeah, 3.1 was the first version of Ruby that had a full implementation of most of the critical hooks.
VALENTINO: So which aspect of your async framework, like, is that piece of it?
SAMUEL: So the IO... yeah, that's a good question. When I first built Async... let me step back. As a software engineer, I prefer to build modular components. There was an interesting observation I made: I used to work commercially on C++ projects, and the C++ build system is so complex in general that people would build one package and then be like, oh, it was so much work to build a package, I'm just going to add more to this package. And so one package grew bigger and bigger and bigger. Instead of having small libraries that could work together, you'd have things like Qt, or a whole bunch of C++ frameworks that go from UI all the way through to database and networking and everything in between, including their own implementation of smart pointers, the whole everything. As a commercial software engineer working on C++ projects, I just found this insanely frustrating, because you'd pull in one dependency for one little tiny piece and you'd get the whole kitchen sink. When I moved to Ruby and started working in Ruby, both in a commercial sense and in open source, I was like, well, this is great. It's so easy to make a package, a gem. And so I've always had this mental model that the quality of packages is kind of inversely proportional to the overhead of making them: the harder it is to make packages, the worse the packages become in terms of scope and compatibility and everything. So I was really inspired by RubyGems and how simple RubyGems makes it to build a gem.
So when I built Async and the things that came before it, I always felt like I wanted to build small packages that work together, that are compatible, composable, easy to use. And the reason why is to avoid the multiplicative complexity that comes out of huge code bases. They're even hard to understand: you go to a code base that's 100,000 lines of code, and you're like, where do I even start with this? Trying to understand what problem the person was trying to solve and what they were thinking when they were solving it.
CHARLES: You're talking about, just to back up for a second, you're talking about something like if I wanted to use active record requiring me to pull in rails to use active record as opposed to just having active record as its own little piece of the puzzle, right?
SAMUEL: Yeah. If you go and look at other languages, you'll find that in order to use something like Active Record, whatever language you look at, like if it was C++, to pull that example again, you wouldn't just be pulling in a database layer, you'd be pulling in everything. Yeah, everything. Go and try to find something which only does databases in an ergonomic way.
CHARLES: Right.
SAMUEL: Yeah, and maybe Boost is another good example. I know that Boost can be used independently, but we pulled Boost into projects and it would add like 30 minutes to the compile time and like 30 gigabytes of temporary storage just for the intermediate files. It's crazy. Anyway, so with Async and the gems that came before and since, I always felt strongly that I've got to compartmentalize this so it's easy for people to understand. And actually it was interesting, because I've had that feedback from people who look at the code and they're like, oh, this is easy to understand; I get it, and I learned a lot from your code. So I'm always happy to hear that feedback. Async 1, because it was really the proof of concept, used an existing gem called NIO4R, which is an event loop that's actually used by Rails now inside Action Cable. But NIO4R is just an event loop interface. It uses libev under the hood, and it provides non-blocking IO for Ruby. The strongest thing about that gem is just how robust and long-lasting it is. It's been around forever, we have very few bug reports, it's solid; it's a workhorse. It doesn't do anything apart from non-blocking IO, so it only solves that part of the problem. And because that's the part of the problem I wanted to solve in Async 1, as a proof of concept, that's what we used. So Async 1 draws on NIO4R as the backend, and it provides non-blocking IO. And you can build other primitives on top of non-blocking IO. Like, if you want to do process wait and all you've got is non-blocking IO, what you do is you make a thread and you wait for your process on that thread. Then you have a pipe, a notification pipe, that you write to when the process is done, and the main event loop is just waiting on the other end of that pipe. That's how you know the process has finished running.
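[Editor's note: the process-wait trick Samuel sketches, building a waitable event out of nothing but a thread and a pipe, looks roughly like this in plain Ruby. This is illustrative, not Async's actual internals.]

```ruby
require 'rbconfig'

# Emulate a non-blocking "process wait" when all the event loop
# understands is IO readiness: a helper thread does the blocking
# Process.wait, then signals completion by writing to a pipe that
# the event loop watches like any other IO.
reader, writer = IO.pipe

pid = Process.spawn(RbConfig.ruby, "-e", "sleep 0.1")

Thread.new do
  Process.wait(pid)   # blocks this helper thread, not the event loop
  writer.write("x")   # one byte says "the process finished"
  writer.close
end

# The "event loop" side: just wait for the pipe to become readable.
ready, = IO.select([reader], nil, nil, 5)
finished = ready && ready.include?(reader)
puts "child #{pid} finished" if finished
```

The same composition trick works for any blocking operation you can push onto a helper thread, which is why a pure non-blocking-IO backend like NIO4R was enough for the proof of concept.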
So there are ways you can compose these primitives together to achieve more complex behavior. In Async 2, because we'd shown the proof of concept, it was all about performance. Async 1 and Async 2 actually have exactly the same, or very, very similar, interface from the user's point of view, but the backend is completely different. The io-event gem provides an event selector similar to NIO4R, but it's much more comprehensive. And it was quite the challenge to implement it across all platforms efficiently and support the wide range of features we wanted to support non-blockingly, like waiting on processes on BSD versus macOS versus Linux; it's all a little different. So the io-event gem is where we multiplex out to the different operating system interfaces to do things as efficiently as possible. Does that answer your question, Valentino?
VALENTINO: Yeah, I think so. I guess I'm trying to find the upstream Ruby aspect of it and what you're bringing to... Like, where does the fiber scheduler itself get involved in that stack?
SAMUEL: Yeah, okay. So the io-event gem provides a class called Selector, and the Selector class provides a lot of the low-level interfaces for waiting on IO: waiting for something to be readable, reading from something, waiting for a process to finish. It provides all these low-level implementations. And then Async has a scheduler, and the scheduler has a selector. I suppose you call it a selector because it's selecting things that are ready to go: it's basically saying what's ready, and then doing that work and scheduling the fibers to resume execution. So where the fiber scheduler interface comes into play is when you start an Async block. At the top of that block, the Async scheduler that's allocated as part of that block will register itself with Ruby as the fiber scheduler. Then, when you do operations inside that block, those operations get redirected to that particular scheduler, which decides whether it needs to delegate to the event loop, the selector, and wait for something to happen.
VALENTINO: Gotcha. So, I mean, that's pretty cool. Have you found that certain platforms are more efficient at doing selecting, or that the scheduler is more efficient on certain platforms? Or do you not even bother measuring at that level?
SAMUEL: No, actually, I've spent a lot of time measuring at that level. It's a really fascinating problem area. Gosh. So there are a couple of main implementations. On Linux, you have epoll and io_uring. On macOS and BSD, you have kqueue. On Windows, which we don't really support very well, there's IOCP. And for other platforms, we defer to Ruby's internal select, which is often implemented on top of poll, but not always. To give you a very tangible example of where something is very different, and this is more of an interface thing than an operating system difference; I'm actually not sure if this is still true today, because it's not something I'd readily go and check, but the select interface on Unix can only handle file descriptors up to 1024. If you have more than 1024 file descriptors in your process, select will break. The reason, at least as I understood it at the time, is that select uses a bit set: it literally uses a piece of binary data and flips bits to zero or one depending on whether you're waiting on that descriptor. The challenge is that on every iteration of the event loop, you have to construct this bit set: hey, here are all the things I'm interested in. Poll is the same; it's just a list. You supply that to the operating system, and the operating system goes through that list one item at a time: this is ready, this is not, this is ready, this is not. So if you have, say, five connections, that's five things the operating system has to do. But if you have 500,000 connections, that's 500,000 things the operating system has to do on every loop iteration to process events. And the reason I mention this is because this is where you have a fundamental performance difference in the APIs.
The most primitive APIs are linearly proportional to the number of connections you have, in terms of the individual operations needed to find out what's ready and what's not. But people found that was a problem. People were trying to build high-performance web servers and DNS servers and whatever else, and quickly ran into the limitations of this interface for event-driven IO. So we ended up with things like epoll and kqueue, where instead of giving that whole list to the operating system every single time you want to check for readiness, when you start monitoring something, like a network connection carrying a web request, is there data available, you register it with the operating system and say: hey, when this thing becomes readable, you let me know. Then, when you go through the event loop, instead of providing that huge list, you basically say: I told you before what to watch; tell me if something became ready. So instead of every iteration of the event loop being proportional to the total number of connections, it's now basically proportional to the number of ready connections, because you get back a list the same size as the number of connections that have actions to process. Honestly, that's pretty much what you'd hope for. kqueue and epoll both work like that, and they're considered really the gold standard for high-performance event-driven networking. The biggest problem with those interfaces is that they are really geared towards readiness-based notifications. Basically what that means is: when you have a socket and data is coming across the network, when the data arrives at that socket, the socket is now ready to be read, and you'll get a notification that says, hey, there's data available.
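[Editor's note: the readiness model in its most primitive Ruby form. IO.select takes the whole list of watched IOs on every call, which is exactly the linear cost described above; epoll and kqueue avoid re-submitting the list by registering interest once.]

```ruby
reader, writer = IO.pipe
writer.write("hello")

# With select/poll this array is handed to the kernel on *every*
# iteration -- with 500,000 connections, that's 500,000 entries
# scanned per loop. epoll/kqueue register interest once instead.
watched = [reader]

readable, _writable, _errored = IO.select(watched, nil, nil, 1)
data = readable.first.read_nonblock(5)
puts "readable: #{data.inspect}"
```

In a real event loop this select call sits at the heart of the iteration, which is why the shape of the underlying syscall dominates scalability.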
You can read the request or whatever. But if you try to extend that to the file system, to files on disk, a file on disk is always readable. It's not the reading of the data you're waiting for; you're not waiting at the code level for the hard drive to spin around to the right point. What you're waiting for is the data to come off the disk and end up in a buffer somewhere that you can access. And I don't want to say that's the slow part, because it's pretty fast, but that's the part where you're waiting. So the readiness-based notification model kind of breaks down there, because there's no easy way, using epoll or kqueue, to say: hey, I want you to go and read this data, and then let me know when it's available. There's no API for that. There was an original attempt to solve this on Linux called AIO, or async IO, and I think it was just considered really bad. Everything I've read about it basically says don't use it. I've not personally used it, so I don't really want to say it's horrendous, but when I last read about it, what I read was that a lot of it was emulated in libc and was basically just mapping back to sequential reads and writes or something. The evolution of this API beyond epoll is one that supports not just readiness notifications but also completion-based notifications. There are two kinds of notifications: readiness is when something is ready to start, and completion is when something is done executing. When you read data from a disk, you're not interested in waiting for the disk to become readable; disks are always readable. You can always issue a read request on a disk, unlike a socket, where if the data is not there, it will just stop and wait, potentially forever, until the data comes in. Completion-based is when the data is ready to be used.
And so Windows has IOCP and Linux recently experimented with something called IOU ring, which is now getting more traction. And in fact, I think IOU ring was so good. It was very funny. Windows actually copied it and now Windows has their own IOU ring, which actually I tried to use it and it was like horrendous. I couldn't figure it out how to get it to work at all. So.. So in a nutshell, there's a variety of different interfaces. Fundamentally, what it comes down to is readiness versus completion and having those different, support for those different events. So in IO event jam, we do support IO-U-Ring. It's actually the default on Linux. Now, if you have an up-to-date kernel and it will actually use where possible completion-based notifications for file system and sometimes network IO, depending on the circumstance. And so that's actually, Funnily enough, it's not a huge performance win. As I mentioned before, event loops, much of a muchness. You're beholden to so many factors outside of your control. And as you asked, do I measure this stuff? Yes. And IOU ring, actually, when you really get down to it, when you really start trying to micro-optimize some of these things. There are some incredibly different, like challenging insights that you derive from benchmarking and micro optimization that, I suppose they're kind of disappointing in a way as engineers, you know, when you're trying to actually get to that level and then you finally realize, oh, even though I did all of this, I ran into this wall here that I can't overcome. That can be a bit frustrating. And I are you're in, while it brings a lot to the table, still has a few things like that. And that's, to me, that's actually quite fascinating and what we do next.
CHARLES: So cool.
SAMUEL: Yeah.
Hey there, this is Charles Max Wood. I'm excited because I wanted to let you know about this thing that I pulled together that I've been dying to have for years and never felt like I could. And then I just realized that there's no reason why I can't. So I'm putting together a book club and we're gonna read development-focused books, career books, technical books, whatever. The first book that we're gonna do is going to be Clean Architecture by Uncle Bob Martin. If you're not familiar with Clean Code or some of the other stuff that Bob has done, check that out. I've also talked to him on the Clean Coders podcast, which is on Top End Devs. But yeah, we're gonna get on. He's gonna show up to some of our meetings. And what I'm thinking is we'll probably have like five or six people part of the conversation along with Bob and I at the same time. So somebody can come on, they can ask their question, and then we'll just rotate people through. So we'll mute one person, unmute another person when it's their turn to come on and be part of the discussion. We'll do that for like an hour, hour and a half. And then the other part of it that I'm putting together is just kind of a meet and greet area on Gather Town. And so after the meetup and the call, what we'll do is we'll all go over to Gather Town and you can just log in, walk up to a group, and have a conversation. That way we can all kind of get to know each other and make friends and get to know people across the world. One thing that I'm finding is that, yeah, the meetups are starting to come back, but a lot of people don't have the opportunity to go to a meetup. And I really want to meet you guys and talk to you. So we're going to put all that together. It'll all be part of that book club. You can go to topendevs.com slash book club to be part of it. And I'm looking forward to seeing you there.
The first book club meeting will be in December, the beginning of December. We're starting the first week of December. And, um, you'll also be part of the conversation about which book we do next. I have one in mind, but I want to see where everybody's at. So there you go.
VALENTINO: So I kind of want to change the topic a little bit, because a lot of this stuff is fascinating for understanding how things work under the hood. But these are not things that, at least in my case, I plan on touching, right?
SAMUEL: It's just like, okay.
VALENTINO: I know that there are little gnomes in my computer that are banging on this stuff, right? And you're teaching the gnomes how to bang on other stuff. Right, so I'm just gonna run my Rails app or my Ruby script or whatever on my machine and it's gonna work its magic. So if we come up a few levels to kind of the level where I'm usually programming, right, where I'm either writing a script that does a bunch of things, and so then I have the async non-blocking IO stuff that's interesting, or, you know, things like that. Where do you see people using async in their own stuff?
SAMUEL: Yeah, really good question. So... it's interesting. Sometimes I hear about big companies using async. They never really divulge much information. So it's fascinating to learn about different use cases.
VALENTINO: I know. I want to play buzzword bingo with my projects too. I'm using AI and blockchain and throw in a little bit of this and a little bit of that. I mean, I can't tell you what I'm working on. It's stealth mode.
SAMUEL: Yeah, absolutely. Where I think async really shines right now is in applications that are IO bound. And that's especially true for things like API servers, proxy servers, anything like proxy gateways. We just had someone recently talk about this. They migrated an application from, I guess, Puma to Falcon and async-http-faraday, with a few minor changes, because async-http-faraday supports persistent connections by default. So it needs a little more bookkeeping at the application level. And I think they saw half the latency, twice the throughput, and half the running costs. And that was an API gateway, if I recall correctly. So this is the kind of thing where a request comes in and it's basically just getting proxied through to a different system. And we've seen a variety of people with these types of systems in production having great success with the event-driven model. A lot of these people, when I talk to them, don't have to make many changes to their application, which was one of my original goals: to kind of transparently go in there and make all of those I/O operations non-blocking. But sometimes applications do explicitly want to opt into concurrency. Sometimes concurrency is a performance thing, and sometimes concurrency is actually a functional requirement of an application. And I think it's important to be aware of those two distinctions. Where concurrency is about performance: if you have Falcon and it's just doing HTTP as an API gateway, and you run that on Falcon versus Puma, you'll get better throughput on Falcon without any changes. So you improve your performance, but there's no actual change to your application behavior or anything like that. There are also situations where you might be like, hey, I have a deadline.
Let's say you have some data that comes in and you want to sell it to the highest bidder. So you need to know who's going to be the highest bidder. Maybe the value of the data decreases over time, maybe by the second. And so in that scenario, you want concurrency, because you want to talk to as many people as possible who are going to pay for the data, find out who's going to pay the most, and probably have a deadline on when they're going to reply by. So you need strong guarantees around the operational timing of all these requests you're fanning out. And async can do this too. This is where you'd probably use a fan-out, MapReduce-style approach where you basically say: hey, I have all these providers I want to talk to, I have the data that comes in, I want to fan it all out as quickly as possible with a five-second or ten-second deadline. If they haven't replied by then, see you later. And then go through and figure out who's going to pay the most, and finally ship the data off to them, or something like that. And so we see use cases like this as well, where people are using concurrency at the application level to achieve things that would not be easily possible using other approaches. And when I say easy, look, if I'm completely frank, of course there are other ways to achieve this in Ruby, like threads. And I've certainly had experience with other Ruby gems which, without a generic foundation like async, do this kind of thing. You have gems like Typhoeus, which is an HTTP gem that can do multiple requests. But then you have other gems which do the same thing, but they're not compatible, because they all use different event-driven foundations. So they're not compatible with the server, they're not compatible with each other. So if you start trying to mix this stuff together, which I've seen... it just ends up a huge disaster.
So I think that's the value proposition of async: it's a foundation. And really it's the fiber scheduler which is the common interface. What's the saying? Async is dead, long live async? Or, long live the fiber scheduler. Async is a great user and developer experience, in my opinion, but if you're building a library, you should target the fiber scheduler, not async. Well, you can choose, of course. And so when you run your web application on Falcon, you should see improved I/O concurrency. So database queries will, in theory, run at the same time. If you have a high-latency query, which is not uncommon, then you should probably put an index on it. It won't solve the performance of an individual request, but it will improve the concurrency of the whole application to the extent possible. Then where it gets really interesting is when you start adopting that model in your application.
VALENTINO: Yeah, I'm curious about that. Especially now that, you know, the Ruby world is still very much dominated by the Rails ecosystem. I'm curious, because Rails was not always, you know, runnable on Falcon, right?
SAMUEL: Yeah, yeah, so.
VALENTINO: Yeah, what was that story? Like, what were the challenges there? And like, is it okay now?
SAMUEL: Um, I don't normally tell this story in public because I don't know who I'm going to upset, but I think it's about time I tell it. So, when I originally went to-
CHARLES: Hang on, let me get some popcorn.
SAMUEL: Yeah. Look, this is no criticism of anyone. It's more just a funny story. And it's probably fair enough that everyone had the opinions they did at the time, and still do to some extent. But it's also a challenge for me, I suppose. When I first went to the Rails people and said, hey, can we support async or Falcon or this concurrency stuff, I didn't really get a strong positive reaction. I didn't get a hugely negative one. Not everyone looks at it through the same lens that I do, and that's actually quite good. Diversity is often a good thing, especially in software engineering. Well, actually, I'm not sure about that. Tabs versus spaces. Yeah. Anyway. So I didn't want to push too hard on that and cause more contention, I suppose. And so around the time when I was building Falcon, I thought, okay, actually, the problem is not Rails. The problem is actually Rack. And what was the real problem? Rack was not evolving at all. And it certainly wasn't evolving in a way that would support streaming requests and responses and whatnot. So when I first started working on Rack, I don't know, I think there were hundreds of open issues. It was just a project that had been stagnating, unfortunately. So I went through and I literally closed, I don't know, maybe a couple hundred issues, pull requests, stuff like that. It was a lot of work. And I wrote to Jeremy Evans, who's amazing. And gosh, let me say, I think we're a great team. I don't want to take credit, because I think we've been equally instrumental to the whole thing, but I want to say he's a great partner, a great teammate in this open source maintenance. And because I felt like it wasn't really worth trying to push too hard on Rails, I went to Rack and we sort of solved all the problems there. And then we finally released Rack 3.
And the great thing was I only needed to get consensus between me, Aaron, and Jeremy. Then we went to Rails and said, hey Rails, here's Rack 3 and here's how it works. And then we made a bunch of pull requests to Rails to make it compatible. And that was basically objective truth at that point. So Rails 7.1, with, I think, maybe 10 or 20 pull requests that I made, now supports Rack 3, which means that Rails supports streaming requests and responses. It means you can drop a WebSocket in the middle of a Rails controller and have it work. And if you run on Falcon, you get event-driven concurrency with that as well, which is pretty neat. So the challenges with Rails have been extensive. Rails has, I like to use the word ossified, ossified around the request-per-thread or request-per-process model, and going in there with request-per-fiber was kind of like... well, there were a lot of assumptions that we completely destroyed. One of the most critical pieces was done by, I think, Jean Boussier. He implemented isolated execution state, or maybe it was someone else, I can't remember actually. I apologize to whoever it was. But it was a really fundamental feature: basically configuring Rails so that, instead of storing its request state per thread, it isolates it per fiber. Because on Falcon, every request runs on the same thread, so if you're storing your request state per thread, you're sharing your request state with every other request at the same time. And then that change has to percolate out across all the different systems in Rails. The biggest one is Active Record, which is also the biggest source of database, sorry, I/O latency in lots of applications.
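The streaming support described here comes from Rack 3's streaming bodies: a response body may be a callable that the server invokes with the underlying stream. A minimal sketch, driven by hand with a StringIO standing in for the connection (in production, a server like Falcon is what would call the body):

```ruby
require "stringio"

# A Rack 3 style app whose body is a callable: the server calls it
# with the stream, which is what enables streaming responses.
app = lambda do |env|
  body = lambda do |stream|
    3.times { |i| stream.write("chunk #{i}\n") }
    stream.close
  end
  [200, { "content-type" => "text/plain" }, body]
end

status, headers, body = app.call({})

# Stand in for the server: hand the body a stream, collect the output.
io = StringIO.new
body.call(io)
puts io.string
```

Because the body is pulled, written, and closed by the app itself rather than enumerated by the server, the same mechanism supports long-lived bidirectional streams, which is what makes WebSockets inside a controller possible.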
And so right now there's a pull request to address some of the issues that we've identified with the performance of Active Record on a fiber-based server. So I think Rails 7.1 is fully compatible with Falcon, and Rails 8 will make it even better. Let me just say, I think Rails 8 will just work, and it will just work really well.
CHARLES: Nice. So on a related note... where was I gonna go? Oh, I remember. I'll let Valentino ask this question, then I'll ask my question. Sorry.
VALENTINO: Yeah, I was just gonna say, I'm not gonna lie: when I first saw the Rails database do-it-later kind of async, having somebody just sprinkle those around everywhere, in the state they released it at, it seemed like it was a little too much magic for what it was doing at the time. And I feel a little better now that you've gone through the state of things.
SAMUEL: Well,
VALENTINO: And to be clear, I know we're still not quite there to just, like, okay, everything's concurrency in Rails now. I know that's not true, but.
SAMUEL: Yeah. I want to just call out a little distinction there. So you've mentioned a feature in Rails called load_async. Is that what it's called?
VALENTINO: I think so.
SAMUEL: Yeah. I'm a little disappointed they chose this naming convention, because it's actually load-in-a-background-thread. It's got nothing to do with async, the gem that I created. And while it might solve some problems, I actually had feedback from some people who tried it out on their applications and said performance was either the same or worse, and that it was actually hard to know when it should be used. So I think the value of load_async in Rails as it stands today, where it does a database query in a background thread, is when you can stack up several database queries that are slow and have them run in parallel. That's the key advantage. But the problem is there's only so far you can push that, I suppose. Even in Falcon, there's only so far you can push parallelism and concurrency. When you parallelize operations, instead of the total time being the linear sum of the operations, the total time is the slowest operation. But I suppose in real-world use cases, not that many people are stacking up queries. Knowing when to do it and where to do it is the challenge. And so with Falcon, we still essentially either run queries linearly, or, if you explicitly opt in, you can do the same thing: multiple database queries in parallel. But it's not just the individual request. If you run that in Puma, you're still tying up the entire request thread. But in Falcon, that whole request will just get set aside in the background until the data from the database comes in, and then it will get resumed.
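The timing intuition here (sequential time is the sum of the queries, parallel time is roughly the slowest one) can be simulated with plain threads, which is essentially the mechanism load_async uses; the sleeps below are illustrative stand-ins for slow queries:

```ruby
# Measure how long a block takes, in seconds.
def timed
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
end

latencies = [0.05, 0.12, 0.08]  # three simulated slow queries

# One after another: the total is the sum (~0.25s).
sequential = timed { latencies.each { |l| sleep l } }

# In parallel threads: the total is roughly the slowest (~0.12s).
parallel = timed do
  latencies.map { |l| Thread.new { sleep l } }.each(&:join)
end

puts format("sequential %.2fs, parallel %.2fs", sequential, parallel)
```

The gain only appears when several slow queries can actually be stacked up, which is exactly the "knowing when to use it" problem described above.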
Maybe in summary: with the performance characteristics of load_async in Rails as it stands today, it's not really easy to have an intuition about where it makes sense. And to me, that comes back to what I said at the start, which was that building a good developer experience, building an interface which is intuitive and makes sense, is really fundamental to this whole problem. Because concurrency is hard enough without trying to get users to think about these issues and figure out where it makes sense and where it doesn't.
CHARLES: Yeah, that makes sense. I'm going to push us to another topic here really quickly. It's kind of related to some of this stuff where you, I think Valentino actually brought it up before the show and then I asked you about it heading in and that is, he said something about you building Falcon into a platform that's more than just a web server. And where you started to discuss that, before I stopped and said, let's talk about this on the show, it sounded very interesting. So do you want to kind of plug into that and explain to people what you're looking to make Falcon into.
SAMUEL: Sorry, I was replying to someone.
VALENTINO: It's the greatest keyboard.
SAMUEL: Yeah, yeah, yeah. Cherry brands. Look, so when I first made Falcon, as a proof of concept, I kind of squashed a lot of stuff into it. There was an HTTP client and server gem called async-http. That gem supports HTTP/1 and HTTP/2 today, and hopefully in the future supports HTTP/3. I've been working on it, but HTTP/3 is probably the most insanely complicated standard that I've ever tried to deal with. So Falcon was kind of an adapter. There's a gem called protocol-http, which provides generic request and response semantic objects: the headers, the body, the status code, the version, the stuff that's the same across all versions of HTTP. And then async-http takes that and makes the request and response run in an asynchronous task. So each individual request or response operates concurrently. And then what Falcon did originally was adapt Rack applications into an async-http-server-compatible interface. And that was actually non-trivial, because there are things like rack.hijack and all the header mapping stuff and dealing with response bodies. There are quite a few complexities in mapping a Rack application to a straightforward HTTP interface. And after a while, I just thought, this is something that I actually want to use in a more generic sense. Mapping a Rack application to an async-http server is not something that's Falcon-specific. So there's a gem that does this called protocol-rack, and protocol-rack provides all the mappings between async-http, protocol-http, and Rack. You can take an incoming request that follows that very simple semantic format and map it into a Rack-style object, and then you can map it back again, and it does all the mappings for things like handling streaming requests and responses. So if you throw a WebSocket at it, it will work correctly, which is kind of important.
So Falcon got split up into pieces. What Falcon does today is act as kind of an application container. What that means is there's a gem called async-container, and async-container lets you do multi-threaded, multi-process, or hybrid deployment, various ways of deploying and scaling an application. And what Falcon does is coordinate that container for things like rolling restarts, blue-green deployments, these kinds of things. It has a configuration which basically just says: hey, I want you to host this Rack application on port whatever. And so you can run any application in there. It doesn't have to be a Rack application. You can actually run a native async-http-server-compatible interface application, and Falcon will host that. And you can actually host multiple applications in one instance if you want. So it's basically a container now for hosting applications, and all the individual pieces are just connected together. And where I think this goes next is I'd quite like to see us simplify Rails deployment. If I was to put a goal to it, it would be simplifying Rails deployment. I think there's been a lot of work in this area, but it's still quite challenging. You need a database, probably you need Redis, you probably need background job runners, you obviously need a web server, and sometimes you get other things running in the background there. And we've had things like... what's that tool called that starts multiple processes? Gosh, I forget. There are a couple of gems for coordinating all these processes for you. But I wanted, like,
CHARLES: Foreman to do that.
SAMUEL: Yeah, Foreman, that's the one, yeah. I've used it in the past and it works okay. But I felt like it's still quite difficult to deploy a server. And so what I wanted was a self-contained application: a Falcon application that includes, like, a job server. And you basically just go, here's my hardware, go and run this. And you don't have to think about it.
VALENTINO: Oh, that'd be so nice.
SAMUEL: I think even Action Cable can be quite tricky to deploy, you know? With Action Cable, often you have AnyCable, or you have a dedicated Puma running on a different port. There are just all these funny, tricky things that have fallen out of the assumptions we've made about how things are going to be deployed. And I feel like if we just step back and go, hey, what problems are we really trying to solve here, Falcon can better address those by exposing this kind of application-host model. We basically go: well, I need a job server, I need a web server, I need this server, that server, and I'm just gonna start it all up, and now my application's running.
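The "configuration which basically just says host this application" idea corresponds to Falcon's falcon.rb host file. The exact DSL varies between Falcon versions, so treat the shape below as an illustrative sketch rather than authoritative; the hostname and port are made up:

```ruby
#!/usr/bin/env falcon-host
# Illustrative falcon.rb: one container hosting a Rack web app.
# In the model described above, a job server would simply be
# another entry in this same file, started and supervised together.
load :rack

rack "example.localhost" do
  endpoint Async::HTTP::Endpoint.parse("http://localhost:9292")
end
```

The point of the design is that "deploy" becomes: point falcon-host at this file on your hardware, and it starts and coordinates every service the application needs.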
CHARLES: Yeah, in fact, for my development stuff, I started out with the Tailwind CSS gem for Rails. I guess it's tailwindcss-rails. Right, and so when you install it, it installs the bin/dev executable, which effectively calls out to Foreman and says, hey, run the Tailwind watcher alongside the web server, right? And so then as I've added in full-text search and things like that, now it spins up the full-text search engine and all the things that have to go into all of that stuff. And so, anyway, just kind of throwing that all together, it'd be really nice to be able to deploy that way. Because when I deploy, I'm using Kamal, and so I need an accessory for this and an accessory for that. And then I need two app servers: one of them is the app server and the other one's the job server, right? And so it orchestrates like six or seven different Docker containers across three or four servers to do all the things that I need. And yeah, it would be beautiful to just be able to run all or most of that under a, hey, I've got this application that runs like this. And then, sure, maybe I have a Caddy or Traefik load balancer out there that I have to manage separately. But yeah, anyway.
SAMUEL: I think I think we've made things too complicated.
CHARLES: So David, DHH, when he gave his Rails World keynote, he was talking about all the different things that are going into Rails. He talked about, hey, we've got this, and we've got this, and we've got this. And then specifically he got into the asset management for Rails. And what he said was, we picked up Webpack and Webpacker because it was kind of the most comprehensive way to do it and it did everything that we wanted, but it was complicated and gross and hard to use, right? And then what he pointed out was, but we had to take that step in order to get to where we got to with import maps and Propshaft. And so when I look at it, it's like, yeah, we've kind of done things in a way that is not exactly the most friendly way, but I think what it's done is it's given us the tools for, okay, we know that we generally need all these things now, and we know how to run them all. So yeah, unify the interface. I think it's an essential step to getting where we are.
SAMUEL: Yeah. To your point, I think to know the happy path, you have to know the unhappy path.
CHARLES: Well, sometimes you get lucky and you can just happy path your way through it, but. Yeah.
SAMUEL: But then you don't really know how lucky you are.
CHARLES: Yeah.
SAMUEL: Because you never really experienced the pain of like doing it.
CHARLES: Yeah. True. Very true.
VALENTINO: I mean, that's kind of the negative of Rails as a framework, right? It provides all this magic. You specialize in knowing the features you need to know, right? Otherwise, you forget about it. But then, okay, if you run into an edge case and the person that's the expert is unavailable, you're spinning your wheels trying to figure out how it all works.
SAMUEL: I mean, I want to speak to that specific point. A long time ago, I did Objective-C development with Apple's core frameworks, and they were completely opaque. There was no source code. If your NSTableView didn't work correctly, there was no way you'd be able to figure out why the thing was rendering wrong or why the column was in the wrong place. Because you'd put a breakpoint in, and there's nowhere else to go. And I really want to call out how amazing Ruby and Rails are. I mean, of course, open source projects are all similar in this regard, I suppose. It's part of the definition of having the source available. But it's just so refreshing, if you've had that experience, that pain, to be able to go into the source code all the way down to the interpreter. I mean, occasionally I do find bugs, and I'm digging in the interpreter, putting printf statements in, trying to figure out what's going on. We're so lucky to have that. And I just want to call it out, because maybe people don't realize: if all you've ever done is use Rails and frameworks where all the source code is available, you may not have experienced the level of pain that comes from using closed-source frameworks or libraries or whatever systems. And it can be extremely painful when you're trying to debug an issue. I'm talking multiple weeks with multiple engineers trying to figure things out. So we're extremely lucky with Rails in that regard.
CHARLES: Yep, absolutely. We're kind of getting toward the end of our scheduled time, as picks usually take up a little bit of time. So really quickly, if people want to-
VALENTINO: Can we get to one quick thing? Because I just saw that Samuel released a spike on async-job, which is one of the most requested things. I've been waiting for something like this. Rails has done a pretty good job of abstracting an interface for job execution.
SAMUEL: And I think Mike did a great job of creating Sidekiq. I think we've converged on what that looks like now as a user interface for application developers. Mike's work was really pioneering, I think, in lots of ways. And async-job... well, Sidekiq usually uses multiple threads for executing multiple jobs, which is perfectly acceptable. And I think he'd like to also use Ractors if possible. I think that's still a way out. But async-job basically runs on a similar principle and uses Redis. I'm hoping to also add a database backend at some point, maybe SQLite or just a generic database layer. And it's basically used to schedule and execute jobs. My goal is to have it provide, on the client side, an Active Job adapter, and that will probably be the most common use case, I would say. Then on the back end, it will just use async, and I'll probably make it very easy to spin up as part of a Falcon application. Basically, in your Falcon configuration, have a web application and a job server, and have that execute and coordinate as efficiently as possible. So yeah, async-job is something I've been meaning to work on for a long time. I just didn't really have the bandwidth for it, but the current spike is looking quite good. Maybe to give some interesting context: I often work on the library of async itself, so I'm not that often a consumer of async. But with async-job, I was a consumer of async. Async was a very fundamental part of async-job. And so I had that good experience from the other side. It was really straightforward to write the code. Redis just worked. It was easy to do quite complex things with Redis, like have a heartbeat: a job server with a heartbeat, which then recovers jobs if the server dies.
So a bunch of this functionality was all done in, you know, maybe 100 or 200 lines of code. And if you compare that with... I know that Sidekiq does a lot more, but Sidekiq also has a lot more lines of code. And I'm a big fan of fewer lines of code equals fewer bugs. So keeping things small and simple is really admirable,
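The heartbeat-recovery idea described above (a worker periodically refreshes a timestamp; a supervisor reclaims jobs whose worker's timestamp has gone stale) can be sketched in a few lines. A Hash stands in for Redis here, and the names and TTL are illustrative, not async-job's actual implementation:

```ruby
# A toy heartbeat registry: worker name -> last heartbeat time.
TTL = 0.2  # seconds without a beat before a worker is considered dead

heartbeats = {}

beat = ->(worker) do
  heartbeats[worker] = Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

dead = ->(worker) do
  last = heartbeats.fetch(worker, -Float::INFINITY)
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - last > TTL
end

beat.call("worker-1")
beat.call("worker-2")
sleep 0.3             # worker-1 stops beating...
beat.call("worker-2") # ...but worker-2 keeps going

# A supervisor would now reclaim any jobs owned by dead workers:
reclaim_from = heartbeats.keys.select { |w| dead.call(w) }
puts reclaim_from.inspect
```

With Redis, the same shape falls out of `SET key value EX ttl` plus a periodic scan, which is why the real thing stays small.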
VALENTINO: I think. Yeah. Well, I'm looking forward to seeing more progress on that. Because, yeah, I feel like that's the one thing: as a Rails app grows, you try to figure out how to execute all your jobs faster.
SAMUEL: Yeah, yeah, yeah.
CHARLES: You guys drop a link to that in the comments. It goes out to Facebook, YouTube, and Twitch, not Twitter or LinkedIn if you're over there. Awesome. All right. Yeah, it looks like Valentino dropped it. All right. Well, I'm going to push us toward picks and wrapping up the show then. But before we do that, Samuel, if people want to connect with you, if they have questions or things they want to do with Async, how do they find you?
SAMUEL: The best way to do that is through GitHub discussions. I have a personal GitHub discussions area under github.com/ioquatix. But if it's async-specific, then just go and find the async project and use the discussions there. There's a whole bunch of early adopters and fantastic open source maintainers and contributors who will usually jump in if I'm not available. And to me, that's amazing. So yeah, that's the best way to get in touch. And the great thing about that is we're building a knowledge base. So if you've got a technical question, doing a search first might help as well.
CHARLES: Awesome. Well, we'll put a link to your GitHub in the comments, and then my team can pull them out and put them in the show notes. Let's go ahead and do some picks real quick.
Have you ever wished that you had a group of people that were just as passionate about writing code as you are? I know I did, I did that for most of my career. I'd go to the meetups, I'd try and create other opportunities and it was just really hard, right? The meetups, I got some of that, but they were only like once or twice a month. And it was just really hard to find that group of people that I connected with and really wanted to, you know, talk about code a lot. Right. I mean, I love writing code. I think it's the best. And so I've decided to create this community and create it a worldwide community that we can all jump in and do it. So we're going to have two workshops every week. One of those or two of those every month are going to be Q and A calls, right? Where you can get on, you can ask me or me and another expert questions. The rest of them are gonna be focused on different aspects of career or programming or things like that, right? So it'll go anywhere from like deployments and containers all the way up to managing your 401k and negotiating your benefits package. Well, we'll cover all of it, okay? And then we're also gonna have meetups every month for your particular technology area. So we have shows about JavaScript, React, Angular view and so on We're gonna have meetups for all of those things. I'm gonna revive the freelancer show. We'll have one about that, right? So you can get started freelancing or continue freelancing if that's where you're at. And I'm working on finding authors who can actually do weekly video tutorials on some thing for 10 minutes. This related, again, to those technology areas so that you can stay current, keep growing. So if you're interested, go to topendevs.com slash sign up and you can get in right now for $39. When we're done, that price is going to go up to $75. And the $39 price gets you access to two calls per week. 
The full price at $150, which is gonna be $75 over the next few weeks, that price is gonna get you access to all of the calls and all of the tutorials and everything else that we put out. From Topendevs along with member pricing for our remote conferences that are coming up next year. So go check it out, topendevs.com slash sign up.
CHARLES: Valentino, do you have some picks?
VALENTINO: Sure. So I love hacking on hardware in my free time. And I came across this: somebody made an open source wearable AI device that uses the Coral AI embedded system. Coral, sorry, coral.ai. So I've got a whole set of things: I'm gonna try to use a Raspberry Pi W and the Coral and fuse them together, and hope that I can make my own wearable AI device. He has the whole project written up. I forget the name of it. I'll put it in the show notes, though. But I'm looking forward to building that next weekend.
CHARLES: Very cool.
SAMUEL: This sounds awesome.
CHARLES: Yeah, I'm going to throw out a few picks here myself. The first one: I always do a board game pick on the show. So last night I was hanging with my friends, and one of my buddies bought a new game. And so of course we played it, and it was awesome. It's called Fire Tower. It came out in 2019, two to four players. We played it with four players, just to put that in there. We were kind of time-limited, and I think it took us maybe a little more than an hour to play it, an hour ten, an hour fifteen. And that was with us learning to play the game and playing the game. Effectively, the idea is each player has a tower in one of the corners of the board, and there's a wildfire in the middle of the board, and you take turns. The wind's blowing in a particular direction, so you add a flame token to the board in that direction from any other flame token, right? It has to be next to it. But then you play a card, and the cards can put out fire, they can expand the fire, they can change the wind direction. And the way you win is you take out everybody else's tower. So you move the fire toward them and away from you is kind of the deal. I really enjoyed the game. I'll also point out that I won the game by taking out each of my friends one at a time. When you're eliminated, you are the spirit of the forest, and so you roll the die for the direction of the wind, and then you have different actions you can take. So north lets you, I think, put a fire token out, or one of them lets you change the direction of the wind. Anyway, it's way fun. It was a lot of fun. But it's relatively simple. BoardGameGeek weights it at 1.79, which is a very easy, casual game. So I'm digging that. It was a ton of fun, so I'm going to pick that. If you're in Utah, I think it might be sold out, but if it's not, I'm going to SaltCon in a couple of weeks, three weeks. And SaltCon is a board game convention in Davis County,
So just north of Salt Lake. So if you want to come play games with me, let me know you're going, and that'll be fun. I'm heading up with one of these buddies of mine, and anyway, fun, fun, fun. So, excited about that. I also have the Ruby Dev Summit videos up. If you go to rubydevsummit.com and you put your email address in, you can get access to the videos for 24 hours. They have, incidentally, also been going out on the Ruby Rogues feed this week, but I plan on taking them back off of the feed. And so if you didn't get them, you didn't get them, and I'm sorry. But just go to the web page and we'll hook you up. Just trying to think what else. I mean, mostly I've just been heads down working on Rails Clips and Ruby Bits. And so if you're looking to learn more Ruby or Rails stuff, go check out the videos there. And I'll wrap it there. Samuel, what are your picks?
SAMUEL: Sorry, you're gonna have to explain this concept to me. I'm not familiar with it.
CHARLES: No, it's okay. So every show we just pick stuff that we like, stuff you're enjoying. So yeah, if it's a TV show... another one I could pick: my wife and I watched Only Once on Netflix, and that was a fun series. Board games, tech stuff, whatever.
SAMUEL: I've been watching Futurama. I'm really happy they revived it. Such a great show. I suppose what's great about Futurama, at least... I don't know about the latest version, you know, the latest season. It's not too bad. But the early ones, apparently they had more years of PhDs in the room than you could be alive for in one lifetime. And the characters and the stories were always really hilarious. So yeah, Futurama. Go watch it. You probably won't be disappointed.
CHARLES: That's one that's always been on my list, but I never quite get around to it.
SAMUEL: You haven't watched them? Okay.
CHARLES: I've watched it with my kids a couple of times and it's a huge hit.
SAMUEL: Yeah. Some of them you have to be like, ah, maybe we shouldn't watch this anymore.
CHARLES: Yeah, some of it looks like some of the humor might be a little bit off color.
SAMUEL: Yeah. It looks pretty funny. Very clever. There was one episode where they actually published a mathematics paper because they solved a mathematics problem. Professor Farnsworth invented a head-switching machine, but the point was, once you'd switched with one person, you couldn't switch back. And they actually had to invent a new mathematical theorem to show that it was possible to switch back. And they published it as part of the show, I think. Yeah.
VALENTINO: That's really funny. Very cool.
CHARLES: All right. Well, thanks for coming, Samuel. This was a ton of fun, such interesting stuff. Fantastic.
SAMUEL: It was a pleasure to be here and to answer your questions.
CHARLES: Yeah, all right, well we'll wrap it up here. Till next time, Max out.