Ruby Revelations: Boosting Speed and Efficiency - RUBY 637
In this episode, the focus is on the cutting-edge developments in Ruby technology. They delve into the intricacies of high-performance web servers, such as Agoo, and explore the advantages of using multiple workers to optimize Ruby applications while sharing insights on overcoming challenges like GVL lock issues. From discussions on GraphQL implementation to comparisons between Ruby and Go in development environments, this episode offers a captivating exploration of the evolution of web processes, middleware usage, and the future of project direction. Join them as they unpack the dynamic landscape of Ruby technology and its impact on modern web development practices.
Special Guests:
Peter Ohler
Show Notes
Socials
Transcript
Peter Ohler [00:00:02]:
Everybody,
Valentino Stoll [00:00:05]:
welcome to another episode of the Ruby Rogues podcast. I'm your host today, Valentino Stoll. I'm joined by co-host Ayush.
Ayush Nwatiya [00:00:11]:
Hey.
Valentino Stoll [00:00:12]:
And we have a very special guest today, Peter Ohler. Peter, why don't you introduce yourself? Maybe tell us a little bit about how you're so great in the Ruby community and all the lovely stuff you're working on.
Peter Ohler [00:00:26]:
I don't know if I can claim greatness, but I started with Ruby quite a while ago, early 2012, maybe 2011. I was over in Japan, started working for a company, KVH, and we were starting to put together a web offering and decided the best language would be Ruby. So we started on that, and I found out, well, first of all, that the JSON parser was really slow. So I wrote a JSON parser that was a bit faster. And years later, after leaving Japan, I decided that it would be a good idea to make a higher performance web server and also stick in GraphQL, which had started to become popular at the time. And the Ruby offering, well, was pretty pitiful, actually. So I tried to make something that was easier to use and higher performance, and that's Agoo.
Valentino Stoll [00:01:40]:
So real quick, just when you say, JSON parser, you mean the OJ gem. Right?
Peter Ohler [00:01:48]:
Yes. Exactly.
Valentino Stoll [00:01:49]:
I think it's very popular and has kind of become the de facto, at least in my opinion, of which JSON parser to use, for its speed and performance. So I think that's good to clarify. So I'm super interested in this Agoo web server. It is a web server. Right? Like,
Peter Ohler [00:02:12]:
it is a web server.
Valentino Stoll [00:02:14]:
Pretty awesome. So you have a ton of benchmarks on here, which is kind of incredible. Do you wanna just kinda, like, walk us through the high level, like, why use this over, you know, even Sinatra or something like that?
Peter Ohler [00:02:31]:
Right. Well, I mean, Sinatra's been around a long time, and it works fine. But its focus wasn't so much on performance as it was on maybe ease of use or, you know, getting things to work, because it was an early system. It's all written in Ruby, which, you know, it's great writing Ruby. I like writing Ruby. But if you're gonna go for performance, you need to make a C extension to, you know, get all you can out of it. And, again, that's part of the reason that Agoo came about. So in terms of Sinatra, works great.
Peter Ohler [00:03:15]:
If you want a bit higher performance, then Agoo is probably the way. If you're going GraphQL, I would definitely suggest Agoo, just because the current offering in Ruby for GraphQL is much harder to work with. With Agoo, all you have to do is basically say, hey, here's my class. As long as it implements the appropriate methods, then you're off and running.
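For readers who want to see what that looks like, here is a minimal sketch in the spirit of the hello-world example from the Agoo README. The exact option names (`thread_count`, the `graphql:` handler path) and the `Schema`/`Query` wrapper shape are from memory and may not match the current API exactly; treat this as a sketch and check the Agoo repo for the current form.

```ruby
require 'agoo'

# A plain Ruby class backs the GraphQL Query type; Agoo matches the
# SDL field names below against this class's methods.
class Query
  def hello
    'Hello, world!'
  end
end

# The schema root exposes the query (and optionally mutation and
# subscription) roots as readers.
class Schema
  attr_reader :query

  def initialize
    @query = Query.new
  end
end

Agoo::Server.init(6464, 'root', thread_count: 1, graphql: '/graphql')
Agoo::GraphQL.schema(Schema.new) {
  Agoo::GraphQL.load(%^type Query { hello: String }^)
}
# Agoo::Server.start  # uncomment to serve; blocks until interrupted
```

A request such as `GET /graphql?query={hello}` would then be answered by `Query#hello`, with no wrapper or resolver classes beyond the plain Ruby object.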
Valentino Stoll [00:03:45]:
Honestly, I really love this use case. I've been looking, yeah, GraphQL, I've been looking for, like, a just drop-in, hey, I wanna use GraphQL in Ruby. And, you know, you can use it in a Rails context with the graphql-ruby gem. Is that what this is using under the hood, or is it, like, a custom implementation?
Peter Ohler [00:04:06]:
It's custom implementation.
Valentino Stoll [00:04:09]:
That's
Peter Ohler [00:04:10]:
awesome. All written in C.
Valentino Stoll [00:04:13]:
Yeah. So this is super appealing to me. I'm gonna have to give this a try. What sparked this? Do you, like, like GraphQL? Like, you know, we should use this for most things, or what was part of the need to start here?
Peter Ohler [00:04:34]:
In my work, I started to use GraphQL. And, again, found most of the implementations, the APIs, are very clunky, harder to use. So I wrote something for Agoo, and I also wrote something for Go. Both I tried to make as easy to use as possible. Again, not requiring a whole lot of wrappers and build-up, but, basically, just point it at the class you're interested in exposing as a GraphQL object and, you know, let it run its course. And that was kind of the approach. I wanted something that I could use that was, well, easier to use.
Ayush Nwatiya [00:05:23]:
And Agoo is Rack compliant. Right? So you can use it with any Rack app.
Peter Ohler [00:05:29]:
Yes. Exactly. Cool. It actually has some other features as well. As you may be aware, GraphQL also does push. And we were trying to, myself and another fellow, we were trying to get an update to the Rack specification to support that, without much success. We went back and forth a number of times, and there were enough people that were not interested in seeing that, or didn't want to have the API extended, that it never made it. But it's available in Agoo.
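For anyone new to Rack, the contract being discussed is tiny: an app is any object that responds to `call(env)` and returns a status, a headers hash, and a body that responds to `each`. This sketch is plain Ruby with no gems involved:

```ruby
# A minimal Rack-style application: a lambda responding to call(env).
app = lambda do |env|
  [200, { 'content-type' => 'text/plain' }, ["Hello from #{env['PATH_INFO']}"]]
end

# Exercise the app directly, the way any Rack-compliant server would.
status, headers, body = app.call('PATH_INFO' => '/demo')
puts status                   # 200
puts headers['content-type']  # text/plain
body.each { |chunk| puts chunk }
```

With Agoo installed, the same object can reportedly be served with `rackup -r agoo -s agoo`; that command line is worth verifying against the Agoo docs.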
Valentino Stoll [00:06:10]:
That's awesome. I mean, I was looking at some of the documentation here on the GraphQL stuff, and it seems to support subscriptions, which is really interesting.
Peter Ohler [00:06:21]:
Yes. That's what I was referring to.
Valentino Stoll [00:06:24]:
Yeah, gotcha. So I'm curious. For those that don't know, GraphQL subscriptions are a way to kind of stream responses. So how does that work under the hood?
Peter Ohler [00:06:40]:
Well, you can use a number of implementations. WebSockets is my preferred choice if you're using a browser. If you're trying to set up an out-of-band subscription, then I typically use something like NATS, which is a messaging system.
Ayush Nwatiya [00:06:59]:
And so the key feature of Agoo, is it high performance, or is it GraphQL, or is it both?
Peter Ohler [00:07:06]:
Yes. It's both. Okay. High performance and easy to use GraphQL.
Ayush Nwatiya [00:07:13]:
Cool. So, like, if I was approaching it like an idiot, and say that I'm said idiot who builds Rails apps, simple prod Rails apps, and I use Puma, what's the reason for me to drop Puma for Agoo? Or is that not the right use case for it?
Peter Ohler [00:07:32]:
Say that again, maybe?
Ayush Nwatiya [00:07:33]:
So, like, I guess I just build Rails apps. I build, yeah, prod Rails apps. And I use Puma as my web server. So if you were explaining it to an idiot, in this case me, what's the reason to drop Puma for Agoo? Or is it not the right use case for Agoo?
Peter Ohler [00:07:54]:
It depends on the, well, amount of requests you get. So if you need something higher performance, then you might go with Agoo. If you wanted to step into the GraphQL world, again, Agoo.
Ayush Nwatiya [00:08:10]:
And with Agoo, then, I wouldn't need, like, NGINX or Caddy reverse proxying in front of Puma, I'm guessing, because that's the usual setup with Rails, isn't it? You have a reverse proxy because Puma is not very performant, so you need, like, a web server in front of it to serve static assets and stuff. But I'm guessing with Agoo, that's not a problem.
Peter Ohler [00:08:33]:
That's right. Exactly true. Agoo also does support having multiple workers in, I want to say, separate threads and separate forked applications. The, well, the advantage of that, of course, is you get around the GVL lock that you have with Ruby, because you can have n number of workers all churning away at the same time. The problem is if you have shared data, then accessing that shared data, typically a database, has overhead in itself, as opposed to having everything in the same process, like within a hash map or something like that. So it's a trade-off, and you have to look at your application to decide. If you are interested in just starting out with GraphQL and want a nice example, I did write a paper for, what was
Ayush Nwatiya [00:09:32]:
it,
Peter Ohler [00:09:35]:
AppSignal.
Ayush Nwatiya [00:09:37]:
Oh, okay.
Peter Ohler [00:09:38]:
It's called the, well, it's a song application. If you're on the README page for Agoo, you'll see a reference there under the news, and it's the 2nd news item. But that
Ayush Nwatiya [00:09:51]:
kind of
Peter Ohler [00:09:52]:
walks you through it. Yeah. Yeah. I've never seen GraphQL before. I don't know what I'm doing. Here we go. Here's how to do it.
Valentino Stoll [00:10:00]:
Mhmm.
Ayush Nwatiya [00:10:02]:
Yep. I see that there. I'll give that a read a bit later on. So what is it that makes Agoo so high performance? Is it just the fact that it's written in C, or is there something else a bit clever going on?
Peter Ohler [00:10:19]:
Of course, there's something more clever than
Ayush Nwatiya [00:10:22]:
There has to be.
Peter Ohler [00:10:24]:
Languages aren't what give you the performance by themselves. What they do is give you the means to make something faster. So there's less overhead in C than there is in Ruby. That doesn't mean just because you write it in C then it's gonna be faster. I've seen plenty of code that's written in C that is abysmally slow. One of the things that Agoo does is it uses multiple threads. Now, that's a problem with Ruby, because multiple threads means you've gotta keep handing over the GVL, and there's overhead in that. With Agoo, what happens is the request comes in.
Peter Ohler [00:11:09]:
It gets put on a queue. That queue gets picked up by n number of worker threads. They process the request, and it's only at the, well, very end, where it hits the Ruby side again, that it gets the GVL, does the work, gets the response, gives up the lock, and then sends the response back to the requester. So probably 90% of the work is done outside of Ruby.
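As a rough Ruby analogy of that flow (an illustration of the idea, not Agoo's actual C internals): requests land on a queue, a pool of worker threads drains it, and only the final handler step contends for a single lock, standing in here for the GVL.

```ruby
requests = Queue.new
responses = Queue.new
handler_lock = Mutex.new   # stands in for the single GVL-holding Ruby step

workers = 4.times.map do
  Thread.new do
    while (req = requests.pop)
      parsed = req.upcase              # "parsing" work, done in C in Agoo
      handler_lock.synchronize do      # only this step needs the "GVL"
        responses << "handled #{parsed}"
      end
    end
  end
end

8.times { |i| requests << "req-#{i}" }
4.times { requests << nil }            # one stop signal per worker
workers.each(&:join)
puts responses.size                    # 8
```

Because only the short `synchronize` section is serialized, the threads spend most of their time outside the lock, which is the shape of the 90%-outside-Ruby claim above.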
Valentino Stoll [00:11:39]:
This reminds me a lot of Shopify's Pitchfork. Are you familiar with that?
Peter Ohler [00:11:45]:
I'm not. I know what Shopify is, but not Pitchfork.
Valentino Stoll [00:11:49]:
I think they've actually switched to using this, maybe not for their main monolith, but it's like a forking-architecture HTTP server. It sounds a little bit similar.
Peter Ohler [00:12:11]:
Yeah. That happens at a different level. Agoo has workers, and those are basically forks of the whole process. But within each one of those forks, there are still multiple threads processing the HTTP requests before it basically gets to the Ruby part and says, Ruby part, do your thing, get the response. And then, again, it leaves the Ruby part alone and jumps back into multiple threads.
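That fork-per-worker layout can be sketched in plain Ruby (assuming a Unix-like OS where `Process.fork` is available); in Agoo the per-fork work would itself be spread across threads, as described above.

```ruby
reader, writer = IO.pipe

pids = 2.times.map do |worker_id|
  Process.fork do
    reader.close
    # In Agoo each fork would run its own thread pool; here the worker
    # just "handles" a fixed batch and reports back over the pipe.
    handled = 3.times.map { |n| "job-#{n}" }
    writer.puts "worker #{worker_id} handled #{handled.size} jobs"
    writer.close
    exit!(0)
  end
end

writer.close
pids.each { |pid| Process.wait(pid) }
output = reader.read
puts output
```

Each fork has its own copy of the interpreter (and its own GVL), which is how multiple workers can all churn away at the same time; the cost, as noted above, is that shared state has to move through something external like a database.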
Valentino Stoll [00:12:39]:
Gotcha. And what are you using for the queuing system?
Peter Ohler [00:12:44]:
Oh, I brought my own.
Valentino Stoll [00:12:45]:
You did?
Peter Ohler [00:12:47]:
Of course.
Valentino Stoll [00:12:48]:
Of course. When performance is of utmost importance, you always write your own.
Peter Ohler [00:12:55]:
Pretty much.
Valentino Stoll [00:12:58]:
So I mean, I would love to dive into some of these benchmarks. You have a ton of benchmarks here in your repo. You have one for a Rails context as well.
Peter Ohler [00:13:08]:
You
Valentino Stoll [00:13:08]:
know, how does it stack up? Let's say, you know, at Ayush's point, how does it stack up for a Rails app running Puma? Like, what are the benchmarks like?
Peter Ohler [00:13:22]:
You know, it's been quite a few years since I did the benchmarks. I don't remember what I was using with Rails at the time, whether it was Puma or something else. I didn't see big differences, though. The benchmarks were done, well, I guess, in a couple different ways, but generally not with workers, because with workers you then have to worry about the overhead of the shared data store, which is typically quite a bit higher than what you get from the web servers themselves.
Valentino Stoll [00:14:02]:
Gotcha. Yeah. I mean, I keep circling back to the 100,000 requests per second. Mhmm. And then, like you mentioned, hitting the limitations of Ruby's benchmarking tools because it goes so fast. Can you elaborate on that?
Peter Ohler [00:14:21]:
Right. So your test tool has to be faster than what you're testing. What I was originally using for the Ruby benchmarks wasn't able to keep up with the rates. I knew I could process more data than what the benchmarks were showing. So I wrote my own and was able to get a significant boost in measured throughput.
Valentino Stoll [00:14:57]:
How do you know that? I'm curious. Like, where do you get to the point where you're like, I know this thing can really go?
Peter Ohler [00:15:05]:
Yeah. Well, a web server is kinda unique because it can have requests coming in from multiple sources. So I started up 2, 3, 4 benchmarking tools and was able to hammer on Agoo and see that all of those benchmarking tools were pegged out, and I knew it could handle more than each one of those individually.
Valentino Stoll [00:15:31]:
And what what protocols does this, currently support?
Peter Ohler [00:15:36]:
HTTP.
Valentino Stoll [00:15:37]:
Just, like, HTTP/2?
Peter Ohler [00:15:39]:
HTTP/2? Oh, no. It's just HTTP and HTTPS.
Valentino Stoll [00:15:47]:
Okay.
Peter Ohler [00:15:48]:
It has not been upgraded to use HTTP/2 yet, so it doesn't handle some of the, well, some of the newer features. Yep. And,
Ayush Nwatiya [00:16:01]:
what about WebSockets? I'm guessing if you have GraphQL streaming, as you said, then it supports WebSockets as well. Right?
Peter Ohler [00:16:08]:
It does support WebSockets. Yes.
Valentino Stoll [00:16:12]:
So cool. So I'm curious, like, the real life, you know, how's it handled in production? Right? Like, are you running this in production? Are you using it for, like, massive scale yet? You know, how is it faring?
Peter Ohler [00:16:30]:
Yeah. You know, I have to just rely on the users to tell me that. And as you may be aware, open source projects typically don't get a lot of feedback except for issues. So you get complaints, but you don't get any of the things that say, hey, it's doing great, you know, I like it a lot, or something like that. It's more about issues. Often, they start out with, I really like it.
Peter Ohler [00:17:02]:
But Yeah.
Valentino Stoll [00:17:04]:
I mean, honestly, I love this. Again, it's circling back to GraphQL. I'm torn, because on one hand, most Ruby apps are gonna be using Rails in some capacity for their data connection. Right? And so it would be nice to just quickly get a GraphQL server up and running in a Rails context. Is it fairly straightforward to connect all those pieces together and mount this on its own process, separate from Rails? What does that path to success look like?
Peter Ohler [00:17:56]:
I haven't, I haven't done a lot with rails myself.
Valentino Stoll [00:18:00]:
Okay.
Peter Ohler [00:18:02]:
As you know, it's not a high performance system. So I tend to steer away from it.
Valentino Stoll [00:18:10]:
I'm curious then, if you're not using Rails, what is your preference for a database in combination with the GraphQL side?
Peter Ohler [00:18:20]:
Wrote my
Valentino Stoll [00:18:30]:
I have a feeling that will be the bottleneck.
Peter Ohler [00:18:38]:
Mostly, quite honestly, I'm doing most of my latest work in Go. But the databases I'm typically using are either Mongo or Redis.
Valentino Stoll [00:18:51]:
Okay. Do you find that either of those would be, like, probably most performant alongside of Agoo?
Peter Ohler [00:19:03]:
Depends what you're trying to build or what you're trying to store. Redis is probably higher performance, but it's a little bit more restricted in what you can store or how you store it. Mongo gives you a lot more flexibility in terms of query capabilities.
Valentino Stoll [00:19:24]:
I'll admit it's been a long time since I've used Mongo.
Ayush Nwatiya [00:19:28]:
I'm
Valentino Stoll [00:19:28]:
curious, you know, since you have experience with that, like, how has the pace kept up for Mongo? Is it still fairly performant?
Peter Ohler [00:19:40]:
Oh, yeah. Definitely. It's only getting better.
Valentino Stoll [00:19:43]:
I I forget why people decided to switch off of it, to be honest.
Peter Ohler [00:19:47]:
Yeah. I don't know why they would. We're still using it quite heavily, both in Go and in Ruby. But for Ruby, it's kinda nice because you store everything in JSON. So you take your Ruby object, you encode it into JSON, and you store it. You fetch it back.
Valentino Stoll [00:20:07]:
Yeah. I mean, I remember and
Peter Ohler [00:20:09]:
you're ready to go.
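That encode-store-fetch-decode cycle can be sketched with just the standard library, with a plain hash standing in here for the Mongo collection; swapping in `Oj.dump`/`Oj.load` (or a real Mongo driver) keeps the same shape.

```ruby
require 'json'

song = { 'name' => 'Some Song', 'artist' => 'Some Artist', 'duration' => 245 }

store = {}                             # stand-in for a Mongo collection
store['song:1'] = JSON.generate(song)  # encode the Ruby object and "store" it

fetched = JSON.parse(store['song:1'])  # "fetch" it back and decode
puts fetched['name']                   # Some Song
puts fetched == song                   # true
```

Because the document store speaks JSON-shaped data natively, there is no mapping layer to maintain: the object you put in is the object you get back.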
Valentino Stoll [00:20:10]:
I remember my first RailsConf. It was wildly popular, definitely amongst Groupon as an example, which, you know, is much smaller now. And LivingSocial used to be a thing. I don't know if they're still around. I don't know why Mongo kind of dropped off in the Rails and Ruby ecosystems. It's definitely not as prominent, but, right, I always liked it because you could just dump data at it, and it could handle it.
Peter Ohler [00:20:41]:
Right. Exactly.
Valentino Stoll [00:20:42]:
Yeah. It reminds me I have a friend that's, he loves CouchDB, for the same reason.
Ayush Nwatiya [00:20:49]:
He's just
Valentino Stoll [00:20:50]:
like, you know, relax. Just get just go on the couch.
Ayush Nwatiya [00:20:52]:
Drop it in there.
Valentino Stoll [00:20:58]:
That's funny. So it's interesting. I'm curious, then, do you find it kind of easier to connect the GraphQL typing system because you're using Mongo in those cases, because it's so flexible? And I imagine the data structures are similar.
Peter Ohler [00:21:19]:
Well, for GraphQL, the way Agoo works is the data structures are just Ruby objects. So, yeah, it's easy. It's a Ruby object. It's easily encoded in JSON, which is easily stored in Mongo, and vice versa. So it makes it easy to say, hey, get me all my songs. And you just do a query on the song database in Mongo, and there they are. You pull them back out, and they're already decoded, and your work is done.
Valentino Stoll [00:21:56]:
Oh, that's really cool. I'm looking at your example now with the artist and everything. Oh, that's really cool.
Peter Ohler [00:22:06]:
Yeah.
Valentino Stoll [00:22:06]:
Oh, man. I'm gonna play with this more. So I see you have locks in your mutations. I imagine there's a lot of internals happening in Agoo that require locking mechanisms.
Peter Ohler [00:22:25]:
Actually, those locks are simply because, well, since you're getting HTTP requests, it allows you to have multiple requests in process at the same time.
Valentino Stoll [00:22:37]:
Oh, I see.
Peter Ohler [00:22:39]:
Now, while Agoo can handle that nicely, the Ruby side of things can't. So it may fire off multiple requests for the same object. So unless you put a lock on the resource, then, you know, you're liable to get collisions.
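The collision he's describing can be sketched in plain Ruby (the `SongStore` class here is made up for illustration): several threads run the same "mutation" at once, and a `Mutex` around the shared resource keeps the updates from interleaving.

```ruby
class SongStore
  def initialize
    @songs = []
    @lock = Mutex.new
  end

  # The "mutation": without the lock, two threads could interleave
  # their read-modify-write steps on @songs and lose updates.
  def add(name)
    @lock.synchronize { @songs << name }
  end

  def size
    @lock.synchronize { @songs.size }
  end
end

store = SongStore.new
threads = 10.times.map do |i|
  Thread.new { 5.times { |j| store.add("song-#{i}-#{j}") } }
end
threads.each(&:join)
puts store.size  # 50
```

With the lock in place, all 50 additions survive regardless of how the threads are scheduled; that is the role the locks in the mutation examples play.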
Valentino Stoll [00:22:59]:
Oh, this is really interesting, because one of my biggest complaints about, like, Rails and a background queue processing system is you have to load your entire application just to run a worker, which seems so bizarre to me. I know there are some optimizations that are made on the constant loading side to help ease the pain of that, which has gotten better, but still not ideal. Does Agoo kind of help with that? Like, have you had a good experience with background workers and Agoo as an ecosystem?
Peter Ohler [00:23:45]:
Coupled with the database, it works great. You know? Yeah. You can get very high throughput, and your bottleneck becomes the database, which is kinda what you want. You know, you don't want the bottleneck to be your application. One of the things that I did notice early on was that Rails is great for putting together prototypes and, you know, for setting up, I want these windows to display my data. Fantastic. But if you're looking for high performance, you know, down the road, it becomes more and more difficult to get that performance with the Rails layers on top of it. So if you can break your application up to have a high performance back end, with something that does just the view making use of Rails, that seems to be a good way to make it work.
Ayush Nwatiya [00:24:49]:
So when you're writing web applications with Agoo, do you use any framework, or do you just write, like, Rack code directly for the server?
Peter Ohler [00:25:02]:
I I don't even use rack.
Ayush Nwatiya [00:25:05]:
Oh, okay. Fair enough. I can see you all, like, going down a level even further than that then.
Peter Ohler [00:25:13]:
Exactly. Yeah. The only thing I struggle with is, yeah, I'm not very good with making pretty UI and dealing with CSS and that kind of thing. So, yeah, it'd be nice to have somebody hold my hand on that, which is kinda what Rails does. But yeah.
Valentino Stoll [00:25:33]:
I'm curious what your bottlenecks were, like, in Ruby itself. Right? Like, when you're trying to test or hit the limits of the performance, aside from the GVL, what kind of bottlenecks were you hitting just in the Ruby ecosystem, to push things further?
Peter Ohler [00:25:59]:
So it's a web server, and the first thing it does after it processes a request is call into Ruby. After that, you know, I'm kinda hands off. It's whatever the application developer has written that's gonna have the impact on the performance. So, really, I just care about getting stuff in and getting the results back. So if you've got a large Ruby result and you wanna convert that to JSON, that'll take some time. Even with OJ, it's still more overhead than just saying, here's a bunch of text.
Valentino Stoll [00:26:47]:
Right. So are you not doing any, like, connection pooling or anything like that to keep the connection open, you know, for multiple requests kind of thing?
Peter Ohler [00:26:58]:
Oh, yeah. It does it does keep the connection open.
Valentino Stoll [00:27:02]:
Okay.
Peter Ohler [00:27:03]:
And you can set the timeout so that, you know, after a minute, it'll drop it if there's no activity.
Ayush Nwatiya [00:27:11]:
Do you have any examples of, like, real world usages where you're using Agoo and it solved problems that couldn't be solved another way?
Peter Ohler [00:27:24]:
Yeah. You're asking the wrong person. I wrote the tools.
Ayush Nwatiya [00:27:28]:
Fair enough.
Valentino Stoll [00:27:34]:
I'm curious where, like, you wanna take this to. Right? Like, I can imagine a whole bunch of use cases for it myself, but do you have any, like, direction that you plan to continue pushing it to? Or
Peter Ohler [00:27:51]:
I don't see pushing it in any particular direction right now. It's kind of stable the way it is. I guess the next step would be, you know, HTTP/2, but I haven't heard anybody really complaining a lot about that.
Valentino Stoll [00:28:05]:
Yeah. I mean, all I could think is people trying to hook this up to Action Cable. Or, I guess, probably any cable at this point.
Ayush Nwatiya [00:28:16]:
Yeah. I think the biggest, I think the biggest advantage of having, HTTP 2 support would be just multiplexing because you can if you're requesting assets, then you can send multiple assets down the same TCP connection. Whereas with HTTP 1.1, they're all individual requests. I think for, like, web application developers like myself, that's been the biggest, attraction to HTTP 2. It's just multiplexing.
Peter Ohler [00:28:45]:
Right. And, of course, with Agoo, connections are pretty cheap. You could actually open up multiple connections to Agoo, and it'll process the requests in parallel. And I have seen some people do that.
Valentino Stoll [00:29:02]:
Is that process fairly straightforward just like opening up new connections to it?
Peter Ohler [00:29:07]:
Yeah. Just open up a new connection.
Valentino Stoll [00:29:08]:
Open up a new connection.
Peter Ohler [00:29:10]:
Yeah. So
Ayush Nwatiya [00:29:12]:
Yeah. The bottleneck that multiplexing solves is more on the client side, because a browser will only, I think, open, like, 6 to 8 connections at a time. Mhmm. So if you have a vast number of assets that need to be downloaded, that's gonna be a client side bottleneck, which is what multiplexing solves, because it'll just do it over a single TCP connection.
Peter Ohler [00:29:37]:
Right. Right. Yeah, I think that most, machines would have a hard time keeping up with a whole lot more than 6 or 8 connections. So
Ayush Nwatiya [00:29:54]:
Yeah. Yeah. Exactly. Yeah. I don't know the specifics of the magic that HTTP/2 does, but because it does it over a single connection, it can get all those assets down pretty fast. So that's kinda led to usage of things like import maps, where you don't bundle your JavaScript. You have it as, like, 20, 30 individual files, because suddenly getting multiple files down from the server isn't that expensive anymore.
Peter Ohler [00:30:30]:
Right. There's still overhead. I think where HTTP/2 helps on that is, well, let's say you're downloading large assets. You're pulling them from storage. There's gonna be delay as you're pulling each segment of those. So that lets you interleave it on the same connection. If the server can provide the data fast enough so that there are no delays, then it really doesn't provide any advantage over having all the connections. I mean, your pipe is only so big.
Ayush Nwatiya [00:31:08]:
Yeah. Yeah. That's true. Exactly. I haven't played around a massive amount with this stuff myself. So I'm just talking from blog posts that I've read about the performance benefits of HTTP/2. I'm far from an expert.
Peter Ohler [00:31:27]:
Right. Oh, and it also depends on what the application's doing with the data. You know?
Ayush Nwatiya [00:31:34]:
Yeah. Exactly.
Peter Ohler [00:31:35]:
If it's handling it all synchronously, then it really doesn't help. But if it can handle it asynchronously, then I could definitely see an advantage there. Oh, I
Valentino Stoll [00:31:45]:
see you have an example on server sent events.
Peter Ohler [00:31:49]:
Mhmm.
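For context on that server-sent events example: SSE is just a text protocol over a held-open HTTP response, where each event is a few `field: value` lines ended by a blank line. This sketch only formats frames; actually streaming them is the web server's job.

```ruby
# Build one SSE event frame. Fields follow the EventSource wire format:
# optional "id:" and "event:" lines, one "data:" line per line of payload,
# and a blank line to terminate the event.
def sse_frame(data, event: nil, id: nil)
  frame = +''
  frame << "id: #{id}\n" if id
  frame << "event: #{event}\n" if event
  data.each_line { |line| frame << "data: #{line.chomp}\n" }
  frame << "\n"  # blank line terminates the event
end

frame = sse_frame('song added', event: 'update', id: 7)
print frame
```

A browser-side `EventSource` would receive this as an `update` event carrying `song added`, which is the same push shape the GraphQL subscription support uses over WebSockets or SSE.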
Valentino Stoll [00:31:50]:
That's pretty cool. Interesting. So, I'm curious. We talked a little bit before the show about your heavy Go use.
Peter Ohler [00:31:59]:
So
Valentino Stoll [00:31:59]:
I'm curious, just from your perspective there, kind of, you know, where do you see Go as providing more benefit over Ruby in some of these performance metrics? Like, why use a Go web server, as an example?
Peter Ohler [00:32:21]:
Yeah. Actually, the Go version, I actually wrote for a company, and they allowed me to open source it. So it's called GGql; it's uhn/ggql. We just pronounce it "giggle."
Valentino Stoll [00:32:43]:
That's pretty funny.
Peter Ohler [00:32:45]:
But on the other side of that is OJ. There's OjG, OJ for Go. And it includes JSONPath and quite a bit else. I guess the reason I've been drifting more toward Go, other than, of course, using it for the company I work with, is it is higher performance, and, honestly, it's easier to work with a larger team with it. It's more strongly typed, which helps. Also, because of the way it's set up, it's easier to set up packages and have different teams or different individuals working on different portions of the code, and then bring it all together. So
Valentino Stoll [00:33:37]:
I see. Yeah. I did like that about Go. I only did a small amount of Go development, but definitely the typing was very helpful for people getting up to speed with things quickly, and they just infer the types.
Peter Ohler [00:33:55]:
Yeah. Sometimes it's annoying, though. But I'm
Valentino Stoll [00:34:03]:
curious, like, you know, what about Ruby is limiting for larger teams working on the same thing? Is it just the packaging ecosystem of Go that's more beneficial in that respect?
Peter Ohler [00:34:17]:
Yeah. I think Go has a little more structured development environment. So there's a lot of tools that help you, you know, measure coverage, do benchmarks; it kind of enforces testing. Ruby, and rightly so, is a little more free form. It's great for small projects. I like it if I'm writing something for myself. Well, I work in the US and Canada.
Peter Ohler [00:34:54]:
So I had a horrible time finding a bookkeeping system that would work. So I wrote my own, and, of course, I wrote it in Ruby. There's no way I was gonna attempt that in Go. Ruby is just easier, more fun to work with.
Valentino Stoll [00:35:11]:
Yeah. It
Peter Ohler [00:35:11]:
just doesn't scale as nicely with with larger teams.
Valentino Stoll [00:35:16]:
That's fair. Yeah. So I'm curious about other details, like maybe about OJ specifically. Is there something more performant about OJ in Go over Ruby?
Peter Ohler [00:35:33]:
Well, yeah. There's less overhead in creating the objects. I think that's probably the biggest difference. I actually use a similar approach, a single-pass parser to do the parsing, which helps a lot. A lot of tweaking there, trying to figure it out. It helps you learn the language a lot when you test out different approaches to solving the problem. Then the overhead of a function call, the number of arguments, the number of return arguments, passing function pointers, all those things come into play.
Valentino Stoll [00:36:10]:
I gotcha. So would you say, like, Ruby objects are bloated in comparison?
Peter Ohler [00:36:15]:
No. It's, they're different.
Valentino Stoll [00:36:20]:
What are the what are the mechanisms? I'm I'm just curious. I don't know their internals well enough.
Peter Ohler [00:36:26]:
Yeah. So think of C. You can write C code, and you can have structs, and, you know, you can attach functions to them, and they kinda look like objects. With Go, you don't have to do as much work under the covers to make your objects. You know, same thing: you've got attributes. The inheritance mechanism is a little, you know, less... I don't like it as much as I do Ruby's, but Ruby is much more flexible and much more powerful in terms of inheritance and the way you can embed or include code when you're working. Honestly, it reminds me a lot of Lisp, which is my first serious language. Well, you know, past BASIC, that is.
Peter Ohler [00:37:25]:
And Lisp, Lisp Flavors... and nowadays CLOS has a lot of the things that you get in Ruby. And I suspect that's where some of that came from: probably looking at Lisp and seeing, oh, yeah, that works.
Ayush Nwatiya [00:37:45]:
So, how does the oj gem differ from the built-in JSON parser in Ruby? The Ruby one, the one in the standard library, is that written in Ruby, or is that a C extension? Because I know OJ is a C extension, isn't it?
Peter Ohler [00:38:02]:
It is. Yes. The JSON gem, well, originally it was in Ruby, and then with a C extension. And I think that's still true. I don't know if the Ruby part of it is still there; I think it's all in the extension now. But we took different approaches to the problem. So, you know, one of the... and I've complained about this before.
Peter Ohler [00:38:28]:
I shouldn't complain, but one of the things that the Ruby gem encourages is monkey patching. So, basically, if you want the feature of being able to decode or encode your object, you basically have to modify the class. Which means that if somebody else comes along and says, well, I want this other encoding system, oh, and I picked the same name for encode, yeah, now we have collisions, because you try and monkey patch and one overrides the other. The approach that I took with OJ is that OJ is a separate package. It'll look at any object. You don't have to modify that object to encode it.
Peter Ohler [00:39:16]:
You basically leave the object alone. It's not yours. Don't mess with it. And that's kind of the approach I took.
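The collision Peter describes, and the separate-encoder alternative, can be sketched in a few lines of plain Ruby (all class and module names here are hypothetical, not OJ's actual API):

```ruby
# A shared class that two "libraries" both want to add an encoder to.
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end
end

# "Library A" monkey patches Point to add an encoder...
class Point
  def encode
    "a:#{x},#{y}"
  end
end

# ...then "Library B" patches the same method name, silently clobbering A's.
class Point
  def encode
    "b:#{x}|#{y}"
  end
end

Point.new(1, 2).encode  # => "b:1|2" — only the last patch survives

# The OJ-style alternative: keep each encoder in its own module and
# leave the class untouched, so both can coexist.
module EncoderA
  def self.encode(obj)
    "a:#{obj.x},#{obj.y}"
  end
end

module EncoderB
  def self.encode(obj)
    "b:#{obj.x}|#{obj.y}"
  end
end

pt = Point.new(3, 4)
EncoderA.encode(pt)  # => "a:3,4"
EncoderB.encode(pt)  # => "b:3|4"
```

The second half is the "it's not yours, don't mess with it" approach: the object being encoded is never modified.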
Ayush Nwatiya [00:39:28]:
Yeah. Monkey patching's quite divisive. I'm a fan of it in certain contexts, but I completely see the point you're making there.
Peter Ohler [00:39:38]:
Right.
Valentino Stoll [00:39:39]:
Yeah. I was hopeful that refinements would have fixed all of this. They don't quite work the same way.
Peter Ohler [00:39:47]:
No. It's a design decision that was made. Once you're down that road, you know, you can't really... well, you can have it both ways, but some things you can't change.
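For reference, refinements scope a patch lexically, so only code that opts in with `using` sees it. A minimal sketch:

```ruby
# A refinement adds String#shout, but only in scopes that opt in.
module Shout
  refine String do
    def shout
      upcase + '!'
    end
  end
end

using Shout  # activates the refinement for the rest of this file

def shout_it(str)
  str.shout  # visible here because the defining scope called `using`
end

puts shout_it('hello')  # prints HELLO!
```

Code in another file that never calls `using Shout` sees a plain `String` with no `shout` method, which is exactly why refinements don't cover every monkey-patching use case.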
Valentino Stoll [00:40:01]:
Yeah. That's one thing I miss from other languages: method overloading, for this reason. Right? It makes it very hard to do without monkey patching, to be honest.
Peter Ohler [00:40:17]:
Right. Yeah. I mean, a plug for Lisp and Flavors: the before and after methods are nice. You could basically say, before this gets called, do this. Or after it gets called, do that. So you could avoid the monkey patching issue.
Valentino Stoll [00:40:37]:
This is really cool. Alright. So, if somebody wants to benchmark their, you know, web processes and do a comparison. Like, say I have a Sinatra app that I want to swap Agoo in for. What's your recommended path for that? Do you have tools that you use to do tests like that?
Peter Ohler [00:41:00]:
Well, yeah. On the web, or on my readme page for Agoo, you'll see at the very bottom there's Perfer, a performance measurement tool. And that's typically what I would use when I'm benchmarking my stuff or trying to make improvements on it. Tweak it here, tweak it there; I'll use that to see if I've been successful or not.
Valentino Stoll [00:41:26]:
Do you prefer that over, like, Apache Bench or something like that?
Peter Ohler [00:41:33]:
Yeah. Actually, I don't know if Apache Bench was available when I first wrote this, but there is another tool that, escapes me. See if I can recall by looking at this. Yeah. I don't see it right now. There are some other tools out there.
Valentino Stoll [00:42:05]:
Perfer looks very similar to Apache Bench. I mean, you don't have to do much.
Peter Ohler [00:42:14]:
I mean, there's only a certain number of things that you really wanna do. So, you know, give a few options for how you control it: the number of workers, requests per second, and stuff like that. There's only so many things you really wanna do.
Valentino Stoll [00:42:35]:
I'm interested to know, like, because I see you have lots of middleware, set up here to make it easy to snap and plug into this, kind of framework. I don't know if you wanna call it framework.
Peter Ohler [00:42:49]:
Yeah. I'm not sure what you're calling it.
Valentino Stoll [00:42:53]:
But so I'm curious, like, because I foresee this as being, like, something you can quickly just, like, you know, hey, like, try this out, and we'll show you the benchmarks, and performance of using this over something else. Or, you know, how does is it pretty straightforward to, like, connect things to the middleware in a way like that where you can get observability or or things like that?
Peter Ohler [00:43:24]:
Well, for middleware, it just uses the Rack approach. Okay. So it doesn't have any claim to be the middleware expert on that. It just supports Rack.
Valentino Stoll [00:43:37]:
So can you use this as a Rack middleware?
Peter Ohler [00:43:41]:
As a Rack server. Yeah. It's a Rack server? Rack, yeah. Yes. As a matter of fact, some of the examples do show that. Yeah. And I think it really boils down to, when people are trying to build a Ruby application web server.
Peter Ohler [00:44:00]:
It comes down to: are you gonna go Rails? Are you gonna use Rack, which is a little more low level? Or are you gonna just do it all on your own? And all of them are viable options.
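As a concrete anchor for the Rack discussion above: a Rack app is just an object with a `call(env)` method that returns a status/headers/body triple, which is why the same app can move between Puma, Agoo, or any compliant server. A minimal sketch:

```ruby
# A minimal Rack app: any object responding to #call(env) and returning
# [status, headers, body] satisfies the Rack contract.
app = proc do |env|
  [200, { 'content-type' => 'text/plain' }, ["hello from #{env['PATH_INFO']}"]]
end

# The contract can be exercised directly, no server needed:
status, headers, body = app.call('PATH_INFO' => '/hi')
puts status       # 200
puts body.join    # hello from /hi
```

In a `config.ru` this proc would be handed to `run app`; Agoo's readme shows serving such an app via rackup with its Rack handler (the exact invocation, e.g. `rackup -r agoo -s agoo`, should be verified against the Agoo version you install).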
Valentino Stoll [00:44:15]:
Do you recommend using this for, like, custom sockets?
Peter Ohler [00:44:20]:
Explain that a little more detail.
Valentino Stoll [00:44:23]:
Sure. Like, if I just wanna connect, on on an arbitrary TCP socket or UDP or something like that.
Peter Ohler [00:44:32]:
Oh, yeah. It does actually support that. It actually supports a couple different models. So, for example, let's say that, let's say you have a a Rails application, and you want it to be able to get data this is completely contrived. Get data, right, by using GraphQL and and hitting some other server somewhere else. And then you could or even the same one. You could even open up a file descriptor and use that as your socket and Oh, cool. Exchange data that way.
Peter Ohler [00:45:08]:
So, you know, you might set it up as your data store, and you just keep a great big hash map in there. You know, obviously in memory, so you lose it if it goes down. But you can then connect with a lower-level socket than over TCP and get your data that way.
Valentino Stoll [00:45:33]:
So I'm curious, like, you know, how long has this been out? Like, how long have you been working on it? And how do you get this out there? Have you gotten a good amount of contributors to it? And is it pretty straightforward to manage at this point? It's very cleanly written. I'm just curious what your experience is working on it as an open source project.
Peter Ohler [00:46:05]:
Generally, what I've seen with the open source projects is you get contributors that want to, like, fix a bug or identify an issue, but they typically don't get in and say, hey. I wanna add this great this brand new big feature. It's typically just a little bit, you know, you have a spelling mistake or, you know, it crashes when you when you look at it backwards. So, you know, I can fix that. But, you know, that's pretty much the extent of, contributors so far on the stuff that I've been working on. Yeah. I guess the the biggest, and it's not really a complaint. My biggest wish, right, for open source is I wish there was some way of getting feedback from people that are using it, how they're using it, how they like it, but there doesn't really seem to be any nice path to do that.
Peter Ohler [00:47:02]:
There's a nice path for, yeah, here's issues, but there's not a nice path that says, I'm using this, it works great, or, have you tried using it in this way? It would probably be useful for folks, you know, to see how other people are using different open source packages and see if it fits any of the things that they're trying to do.
Valentino Stoll [00:47:28]:
Yeah. I have been seeing a recent influx in, like... I don't know what they call it specifically, but, basically, a lot of repositories will turn on by default, like, usage analytics and metrics in the libraries, which, you know, maybe isn't the best solution to this problem. But
Peter Ohler [00:47:53]:
Right. And there's discussions, at least for GitHub anyway. There's a discussion category. And I've seen a little bit of that, less on OJ and Agoo, but more on some of my Go projects. There seems to be a lot more... I don't know. Maybe it's just a different kind of crowd that goes to different types of languages. I have no idea.
Valentino Stoll [00:48:23]:
That's funny. Yeah. I mean, I'm curious, like, do you how how do you like discussions? Like, have you found certain value in it, over just, like, issues workflow?
Peter Ohler [00:48:33]:
Yeah. I have, actually. You know, with the discussion, you don't have that pressure of, I gotta fix this bug. It can be, yeah, what do you think about this? And it's like, well, maybe not that; how about something like this instead? And, you know, you get more people involved. So it's a little more friendly, I guess, or a little less pressure.
Valentino Stoll [00:49:00]:
Less pressure. Yeah. I could see that.
Peter Ohler [00:49:07]:
But having said that, some of the projects, OJ, for example, I've got a couple of folks that contribute fairly regularly and find a little tweak here, a little tweak there, you know, help make it a little bit better. And that's kinda nice.
Valentino Stoll [00:49:27]:
So what does the release process look like for something more along the lines of Agoo, where I guess you don't have too many use cases specifically yet? Do you foresee that being maybe a different release process than something like OJ?
Peter Ohler [00:49:47]:
No. It's just, I try and follow the same release process. You know, I keep a changelog and try and follow the standards for that. I branch off of development to make my feature branches, merge them into development, and when it's time for release, I'll merge into master, tag it, and then do the release. Try to use the same practice across the board.
Valentino Stoll [00:50:20]:
So I can't help but think, you know, ago is the Japanese word for a flying fish, right? And you said you spent some time in Japan yourself. This seems to follow kind of the Ruby pattern I've seen more in Japan, where it's, you know, very much less Rails heavy and very much Ruby forward.
Peter Ohler [00:50:46]:
Right.
Valentino Stoll [00:50:46]:
Sinatra, definitely. At least years ago when I attended RubyKaigi, you know, Sinatra was more the Rails type there than it was Rails itself. Right?
Peter Ohler [00:51:00]:
Right.
Valentino Stoll [00:51:01]:
So do you see maybe more people from, like, the source of Ruby, the origins of it, maybe picking up steam more, and jumping on Agoo's adoption?
Peter Ohler [00:51:21]:
You know, I would definitely say for OJ and Ox, I think there's a bigger following in Japan than there is in the US. I mean, again, this is just based on issues and PRs. Right. But it seems to be more active in Japan. Agoo, hard to say. That seems to be a little more international. It may be that Japan hasn't picked up on GraphQL as much.
Peter Ohler [00:51:54]:
But, again, you know, without that feedback from users, it's kind of hard for me to say. But I do remember, so the name, Agoo, I came up with that when my wife and I were taking a vacation on the northern side of Japan and, driving along, you know, stopped at a little place. And I like dried fish. So I got some dried fish, and eating it and thinking about how I was gonna do this server. And that's when it was, oh, ago, flying fish. It flies. Yeah.
Peter Ohler [00:52:32]:
It's so fast. So That's great. So that's where it came from. The epiphany occurred driving in Northern Japan.
Valentino Stoll [00:52:46]:
So I'm curious too, just circling back to the GraphQL aspect of this. Does it implement the full GraphQL spec? Is there anything missing, client library or connection wise?
Peter Ohler [00:53:00]:
No. As far as I know, it's the full spec.
Valentino Stoll [00:53:04]:
I'd be interested to know.
Peter Ohler [00:53:06]:
Somewhere. But, yeah, if you find something that's missing, then let me know, and I'll get it in.
Valentino Stoll [00:53:11]:
I would love to see benchmarks of this against graphql-ruby, because, to be honest, it could make a great kind of showcase of, hey, use this over that. Mhmm.
Ayush Nwatiya [00:53:24]:
Well, there
Peter Ohler [00:53:25]:
I think I have a link. The web frameworks benchmarker, I believe that compares it to just regular graphql-ruby.
Valentino Stoll [00:53:40]:
Oh, okay.
Peter Ohler [00:53:41]:
I know that's changed a bit over the years too, so I'm not even sure where everything stands. I know I haven't been updating it with my latest version, so it may be a little bit long in the tooth.
Valentino Stoll [00:53:54]:
Hey. I'll give it
Peter Ohler [00:53:55]:
a try.
Ayush Nwatiya [00:53:56]:
So all these open source projects which you run, there's quite a few of them. Are they just, like, hobby projects? Do you take sponsorships from the community, or are they for businesses that you work for or work with?
Peter Ohler [00:54:15]:
They're hobby projects. Yeah. Originally, OJ and Ox were for a company, KVH in Japan. Actually, I don't think KVH exists anymore; I think it was subsumed by somebody else. But otherwise, yeah, just for hobby. I like writing code. Fair enough.
Peter Ohler [00:54:42]:
My relaxing time is, after I stop working, I sit in front of the TV with my wife and write code.
Valentino Stoll [00:54:55]:
I love that. Your peaceful time, you know, raking the Japanese sand garden. You know? Just pumping out some super performant code.
Peter Ohler [00:55:15]:
Well, I do try to get a little bass practice in every once in a while too.
Valentino Stoll [00:55:20]:
Well, we've talked about a lot here. I'm excited to try out a ton of this stuff and see how easy, you know, GraphQL is to work with in Agoo, because that is definitely a great use case for it. Is there anything else you wanted to talk about, you know, before we go to picks here?
Peter Ohler [00:55:41]:
Nothing comes to mind. This is kind of a whole new experience for me, so I'm unsure what was expected.
Valentino Stoll [00:55:49]:
Yeah. I mean, if anything, it gets people exposure to, you know, really lower level server stuff, because you don't need too much to get something out there. And Agoo definitely makes it easy to do, and, you know, stick to the basics, really. Cool. So we have a segment at the end here where we kind of just pick a couple things, or one thing, that could be anything. It doesn't have to be code. It doesn't have to be anything in particular. Just pick something that, you know, you wanna share with the world, share with the Ruby community here. Could be anything.
Peter Ohler [00:56:33]:
Should've warned me.
Valentino Stoll [00:56:33]:
We we can give you some time. We can give you some time.
Peter Ohler [00:56:36]:
Ayush, do
Valentino Stoll [00:56:37]:
you have anything you wanna share? I I can go if you don't.
Ayush Nwatiya [00:56:40]:
Yeah. I got a couple of things. One is, yeah, the TV show Better Call Saul, which I'm binging again for the 4th time, I think, which is a prequel to Breaking Bad. It's my favorite TV show of all time. So because I'm binging it, that's forefront of my mind. So that's one of my picks. And the other one is a movie I saw last week on Netflix called Unfrosted, which is made by Jerry Seinfeld, which is the most unbelievably stupid, nonsensical load of bollocks I've ever seen in my life. But it was thoroughly entertaining.
Ayush Nwatiya [00:57:18]:
It's like one of those movies where you have a couple of glasses of wine, throw your brain away, and just switch off and watch it. It is such drivel, but it is so funny. It's far from a good movie, but it's one of those movies that's good because it's so bad.
Valentino Stoll [00:57:38]:
Oh, that's funny. I'm gonna have to check that out. That sounds great. I guess I can go. I've been playing with a lot of AI stuff, as always lately, and I got the Rabbit R1, which there's a lot of flack out there about. And there are some things it's not great at, but it does great at transcribing stuff, and I'm using it for that purpose, summarizing meetings and things like that. So I am finding, like, use cases for it. The vision is a little lacking.
Valentino Stoll [00:58:20]:
But I don't know. It's kinda fun. It's an interesting device. So I will plug it. I don't know if it's worth it for everyone, but I'm having fun with it. And, yeah, the other thing: there is a large language model trainer application where you basically can point a Hugging Face dataset at it, and it'll automatically fine tune a model for you, which is really interesting. So I've been toying around with that and playing with stuff to train and fine tune some models based on different conversations, which is kinda funny.
Peter Ohler [00:59:08]:
Have you been using the Apple MLX?
Valentino Stoll [00:59:13]:
I haven't, mostly because I just bought this giant beefy machine running GPUs. So I'm running it on GPUs right now. But I am interested to check that out. I only have access to an M2 here, so I don't know how much performance I'll get out of that. But have you messed around with that at all?
Peter Ohler [00:59:40]:
Yeah. I've got a I've got a studio. It's it's an older one, the, the M1 Ultra. But it seems to run just fine.
Valentino Stoll [00:59:50]:
Nice. Have you run, just inference on it, or have you tried tuning stuff?
Peter Ohler [00:59:56]:
I've tried tuning. I haven't tried, you know, full blown training. That's kind of... it's not what it's designed for.
Valentino Stoll [01:00:04]:
Almost why bother. Yeah. Yeah. It's okay. Yeah.
Peter Ohler [01:00:07]:
A year from now, it might be done. Right.
Valentino Stoll [01:00:12]:
That's funny. I'm curious what you use for your inference. Do you use, like, llama.cpp, or do you have some... I have
Peter Ohler [01:00:20]:
used llama.cpp. Still trying to figure out what's best for what I'm doing for work. Yeah, the hospitals we're trying to... we're trying to use that for various applications that I probably can't get into. Sure.
Valentino Stoll [01:00:39]:
Well, cool. Do you wanna share some pics, Peter?
Peter Ohler [01:00:45]:
I guess, like, what kind of picks?
Valentino Stoll [01:00:49]:
Maybe one of the basses on your wall.
Peter Ohler [01:01:01]:
Listen.
Valentino Stoll [01:01:03]:
Oh, that is really cool. If you're not seeing this right now, it looks to be a custom travel bass.
Peter Ohler [01:01:12]:
That actually started out with... so I've been taking bass lessons from, well, actually, a guy named Mike McAven. He was a lead guitarist for Gypsy Rose. But I bought this and told him I was trying to build a travel bass. And he says, oh, I've got this old junker here, all kinda busted up. So that became the neck. I cut off the head, made the body, and, yeah, you can see it behind it. So the strings basically go around, come up here, and go to the tuning, the tuners, so you can adjust it.
Peter Ohler [01:01:54]:
This here is for sitting on your leg.
Valentino Stoll [01:01:58]:
That is so cool.
Ayush Nwatiya [01:02:01]:
So you're bad. It's unbelievable.
Valentino Stoll [01:02:04]:
Yeah. If you can't see this, the the tuners are at the bottom of the bass, and there's like a leg rest. It's really neat. Pretty inspirational.
Ayush Nwatiya [01:02:14]:
What's the
Peter Ohler [01:02:16]:
off so that I can put it in a backpack.
Valentino Stoll [01:02:18]:
Oh, wow. That's cool.
Ayush Nwatiya [01:02:20]:
What's the wood that the the body is made of?
Peter Ohler [01:02:23]:
Oh, it's just made of oak.
Ayush Nwatiya [01:02:25]:
Oh, okay.
Valentino Stoll [01:02:30]:
That's beautiful. How long did it take you to build that?
Peter Ohler [01:02:37]:
Not that long. You know, off and on over maybe a month or so.
Valentino Stoll [01:02:44]:
Nice. Alright. Do you have any plans to make more basses?
Peter Ohler [01:02:49]:
I don't know. I've got the other two, our upright basses, that you see on the wall there, and I made those. So there's probably another one somewhere in the future.
Valentino Stoll [01:03:02]:
That's really cool.
Peter Ohler [01:03:03]:
I also build bicycles, so that's why I could do the metal work as well as the woodwork.
Valentino Stoll [01:03:08]:
Nice.
Peter Ohler [01:03:11]:
Got it.
Valentino Stoll [01:03:11]:
You have the simple stuff. Just the simple stuff. Yeah. You know, not even bikes are optimized enough for you.
Peter Ohler [01:03:24]:
Yeah. Actually, I go the other way on the bikes. I keep it simple. Single speed. So
Valentino Stoll [01:03:29]:
Love it. Well, Peter, it's great talking to you today. You know, thank you for sharing your experience, and Agoo, with us. You know, I definitely am gonna dive in myself and see how fast I can get things to go, and, you know, again, thanks for all the work that you do. Appreciate
Peter Ohler [01:03:49]:
it. Love any feedback. Thanks for having me on on the show.
Valentino Stoll [01:03:53]:
And if, anybody wants to reach out to you or connect with you, on the interwebs, how can they do that?
Peter Ohler [01:04:00]:
Email, peter at ohler.com.
Valentino Stoll [01:04:06]:
Fantastic. And so until next time, folks. Valentino is out of here, and thank you, for listening.
Valentino Stoll [00:01:40]:
So real quick, just when you say, JSON parser, you mean the OJ gem. Right?
Peter Ohler [00:01:48]:
Yes. Exactly.
Valentino Stoll [00:01:49]:
I think it's very popular and has kind of become the de facto, at least in my opinion, of which JSON parser to use, for its speed and performance. So, I think that's good to clarify. So I'm super interested in this Agoo web server. It is a web server, right? Like,
Peter Ohler [00:02:12]:
it is a web server.
Valentino Stoll [00:02:14]:
Pretty awesome. So you have a ton of benchmarks on here, which is kind of incredible. Do you wanna just kinda, like, walk us through the high level, like, why use this over, you know, even Sinatra or something like that?
Peter Ohler [00:02:31]:
Right. Well, I mean, Sinatra's been around a long time, and it works fine. But its focus wasn't so much on performance as it was on maybe ease of use or, you know, getting things to work, because it was an early system. It's all written in Ruby, which, you know, it's great writing Ruby. I like writing Ruby. But if you're gonna go for performance, you need to make a C extension to, you know, get all you can out of it. And that's, again, part of the reason that Agoo came about. So in terms of Sinatra: works great.
Peter Ohler [00:03:15]:
If you want a bit higher performance, then Agoo is probably the way. If you're going GraphQL, I would definitely suggest Agoo, just because the current offering in Ruby for GraphQL is much harder to work with. With Agoo, all you have to do is basically say, hey, here's my class. As long as it implements the appropriate methods, then you're off and running.
Valentino Stoll [00:03:45]:
Honestly, I really love this use case. I've been looking, yeah, GraphQL, I've been looking for, like, just drop in, hey, I wanna use GraphQL in Ruby. And, there's, like, you know, you can use it in a Rails context with the graphql-ruby gem. Is that what this is using under the hood, or is it a custom implementation?
Peter Ohler [00:04:06]:
It's custom implementation.
Valentino Stoll [00:04:09]:
That's
Peter Ohler [00:04:10]:
awesome. All written in C.
Valentino Stoll [00:04:13]:
Yeah. So this is super appealing to me. I'm gonna have to give this a try. What sparked it? Like, are you a GraphQL, you know, we-should-use-this-for-most-things person, or what was the need to start here?
Peter Ohler [00:04:34]:
In my work, I started to use GraphQL. And, again, found most of the implementations, the APIs, are very clunky, harder to use. So I wrote something for Agoo, and I also wrote something for Go. Both, I tried to make as easy to use as possible. Again, not requiring a whole lot of wrappers and build up, but, basically, just point it at the class you're interested in exposing as a GraphQL object and, you know, let it run its course. And that was kind of the approach. I wanted something that I could use that was, well, that was easier to use.
Ayush Nwatiya [00:05:23]:
And Agoo is Rack compliant, right? So you can use it with any Rack app.
Peter Ohler [00:05:29]:
Yes. Exactly. Cool. It actually has some other features as well. As you may... it might be where GraphQL also does push. And we were trying to, myself and another fellow, we were trying to get, well, an API-specific update to the specification for Rack to support that, without much success. We went back and forth a number of times, and there were enough people that were not interested in seeing that, or didn't want to have the API extended, that it never made it. But it's available in Agoo.
Valentino Stoll [00:06:10]:
That's awesome. I mean, I was looking at the, some of the documentation here on the GraphQL stuff, and it seems to support subscriptions, which is really interesting.
Peter Ohler [00:06:21]:
Yes. That's what I was referring to.
Valentino Stoll [00:06:24]:
Yeah. So gotcha. So, like, how does I'm curious. For those that don't know, GraphQL subscriptions are a way to kind of stream responses. So how does that kind of work under the hood?
Peter Ohler [00:06:40]:
Well, you can use a number of implementations. WebSockets is, well, my preferred choice if you're using a browser. If you're trying to set up an out-of-band subscription, then I typically use something like NATS, which is a messaging system.
Ayush Nwatiya [00:06:59]:
And so is the key feature of Agoo, is it high performance, or is it GraphQL, or is it both?
Peter Ohler [00:07:06]:
Yes. It's both. Okay. High performance and easy to use GraphQL.
Ayush Nwatiya [00:07:13]:
Cool. So, like, if I was approaching it like an idiot, and say that I'm said idiot who builds Rails apps, simple Rails CRUD apps, and I use Puma. What's the reason for me to drop Puma for Agoo? Or is that not the right use case for it?
Peter Ohler [00:07:32]:
Say say that again. Maybe
Ayush Nwatiya [00:07:33]:
So, like, I guess I'm just... I just build Rails apps. I build, yeah, prod Rails apps. And I use Puma as my web server. So if you were explaining it to an idiot, in this case me, what's the reason to drop Puma for Agoo? Or is it not the right use case for Agoo?
Peter Ohler [00:07:54]:
It depends on the, well, the number of requests you get. So if you need something higher performance, then you might go with Agoo. If you wanted to step into the GraphQL world, again, Agoo.
Ayush Nwatiya [00:08:10]:
And with Agoo, then, I wouldn't need, like, NGINX or Caddy reverse-proxying in front of Puma, I'm guessing, because that's the usual setup with Rails, isn't it? You have a reverse proxy because Puma is not very performant, so you need, like, a web server in front of it to serve static assets and stuff. But I'm guessing with Agoo, that's not a problem.
Peter Ohler [00:08:33]:
That's right. Exactly true. Agoo also does support having multiple workers in, I want to say, separate threads and separate forked applications. The, well, the advantage of that, of course, is you get around the GVL that you have with Ruby, because you can have n number of workers all churning away at the same time. The problem is, if you have shared data, then accessing that shared data, typically a database, has overhead in itself, as opposed to having everything in the same process, like within a hash map or something like that. So it's a trade off, and you have to look at your application to decide. If you are interested in just starting out with GraphQL and want a nice example, I did write a paper for, what was
Ayush Nwatiya [00:09:32]:
it,
Peter Ohler [00:09:35]:
AppSignal.
Ayush Nwatiya [00:09:37]:
Oh, okay.
Peter Ohler [00:09:38]:
It's called, well, it's a song application. If you're on the readme page for Agoo, you'll see a reference there under the news, and it's the 2nd news item. But that
Ayush Nwatiya [00:09:51]:
kind of
Peter Ohler [00:09:52]:
walks you through it. Yeah. Yeah. I've never seen GraphQL before. I don't know what I'm doing. Here we go. Here's how to do it.
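For readers who want the flavor of that getting-started shape, a minimal Agoo GraphQL bootstrap looks roughly like the following. This is a sketch adapted from the hello-world pattern in Agoo's readme; the exact option names and APIs should be verified against the readme for the version you install, and it requires the agoo gem, so it is shown here as a configuration sketch rather than a tested example:

```ruby
require 'agoo'

# A plain Ruby class: its methods back the GraphQL fields directly.
class Query
  def hello
    'Hello'
  end
end

# The schema root just exposes the query (and optionally mutation/
# subscription) objects as readers.
class Schema
  attr_reader :query

  def initialize
    @query = Query.new
  end
end

Agoo::Server.init(6464, 'root', thread_count: 1, graphql: '/graphql')
Agoo::Server.start

# Bind the Ruby objects to SDL describing the API.
Agoo::GraphQL.schema(Schema.new) {
  Agoo::GraphQL.load(%^type Query { hello: String }^)
}
sleep
```

Once running, a query like `curl 'localhost:6464/graphql?query={hello}'` would exercise the `hello` field; note there is no resolver or type-wrapper layer, just the class and the SDL.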
Valentino Stoll [00:10:00]:
Mhmm.
Ayush Nwatiya [00:10:02]:
Yep. I see that there. I'll give that a read a bit later on. So what is it that makes Agoo so high performance? Is it just the fact that it's written in C, or is there something else a bit clever going on?
Peter Ohler [00:10:19]:
Of course, there's something more clever than
Ayush Nwatiya [00:10:22]:
There has to be.
Peter Ohler [00:10:24]:
Languages, languages aren't what give you the performance by themselves. What they do is they give you the means to make something faster. So there's less overhead in c than there is in Ruby. That doesn't mean just because you write it in c then it's gonna be faster. I've seen plenty of code that's written in c that is abysmally slow. One of the one of the things that Agoo does is it uses multiple threads. Now that's a problem with Ruby because multiple threads means you've gotta keep, handing over the the GVL, which there's overhead in that. With Agoo, what happens is the request comes in.
Peter Ohler [00:11:09]:
It gets put on a a queue. That queue gets picked up by n number of worker threads. They process the request, and it's only when it gets to the, well, the very end, where it hits the Ruby side again, that it gets the GVL, does the the work, gets the response, gives up the lock, and then sends a response back to the requester. So probably 90% of the work is done outside of of Ruby.
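The queue-plus-worker-threads flow described here can be sketched with the core Ruby Queue class. Agoo does this stage in C threads, so this is only an illustration of the pattern, not Agoo's implementation.

```ruby
# Producers enqueue requests; a pool of worker threads drains the queue.
requests  = Queue.new
responses = Queue.new

workers = 4.times.map do
  Thread.new do
    # Each worker pops until it sees a nil shutdown signal.
    while (req = requests.pop)
      responses << "handled #{req}"  # parsing/routing would happen here
    end
  end
end

10.times { |i| requests << "req-#{i}" }
4.times { requests << nil }          # one shutdown signal per worker
workers.each(&:join)

responses.size  # => 10
```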
Valentino Stoll [00:11:39]:
This this reminds me a lot of Shopify's Pitchfork. Are you familiar with that?
Peter Ohler [00:11:45]:
I'm not. No? I know what Shopify is, but not not Pitchfork.
Valentino Stoll [00:11:49]:
They they've been trying to, I think they've actually switched to using this for maybe not for their, main monolith, but it's a bit it's like a fork architecture HTTP server. It sounds a little bit similar.
Peter Ohler [00:12:11]:
Yeah. That happens at a different level. With Agoo, the it has workers, and those are basically forks of the whole process. But within each one of those forks, there's still multiple threads that are processing the HTTP requests before it basically gets to the Ruby part and says, Ruby part, do your thing. Get the response. And then, again, it leaves the Ruby part alone and jumps back into multiple threads.
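The isolation between forked workers that makes an external data store necessary can be seen with nothing but stdlib Ruby. This sketch is Unix-only, since it uses fork.

```ruby
# Each forked worker gets its own copy-on-write copy of memory, so a
# write in the child never shows up in the parent. That is why shared
# state has to live in an external store like a database.
counter = { requests: 0 }

reader, writer = IO.pipe
pid = fork do
  reader.close
  counter[:requests] += 1          # mutates the child's private copy
  writer.puts(counter[:requests])
  writer.close
end
writer.close
child_view = reader.read.to_i      # what the child saw after its increment
reader.close
Process.wait(pid)

parent_view = counter[:requests]   # the parent's copy is untouched
```

Here `child_view` ends up as 1 while `parent_view` stays 0, which is the trade-off Peter describes: forks dodge the GVL but cannot share in-process data.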
Valentino Stoll [00:12:39]:
Gotcha. And what's using what are you using for the, queuing system?
Peter Ohler [00:12:44]:
Oh, I brought my own.
Valentino Stoll [00:12:45]:
You you?
Peter Ohler [00:12:47]:
Of course.
Valentino Stoll [00:12:48]:
Of of course. But performance when when performance is, of utmost importance, you always write your own.
Peter Ohler [00:12:55]:
Pretty much.
Valentino Stoll [00:12:58]:
So I mean, I would love to dive into some of these benchmarks. You have a ton of benchmarks here in your repo. You give one for a rails context as well.
Peter Ohler [00:13:08]:
You
Valentino Stoll [00:13:08]:
know, how does it stack up? Let's say, you know, at Ayush's point, you know, how does it stack up for a rails app running Puma? Like, what are how how are the benchmarks?
Peter Ohler [00:13:22]:
You know, it's been quite a few years since I did the benchmarks. I don't remember what real what I was using with rails at the time whether it was Puma or something else. I didn't see big differences, though. The the benchmarks were done with a single well, I guess they're done in a couple different ways, but generally not with both workers because with workers then you have to worry about the overhead of the shared, data store, which is typically quite a bit higher than than what you get from the the web servers themselves.
Valentino Stoll [00:14:02]:
Gotcha. Yeah. I mean, I'm just I I keep circling back to the 100,000 requests per second. Mhmm. And then that kind of like you mentioned, you know, hitting the limitations of of Ruby's benchmark because it goes so fast. Can you elaborate kind of on on that?
Peter Ohler [00:14:21]:
Right. So, your test tool has to be faster than what you're testing. So, the original or what I was originally using for the Ruby benchmarks wasn't able to keep up with the the rates. So I knew I could process more data than what was what the benchmarks were showing. So I tried out my own and was able to get a significant boost in throughput, or measured throughput.
Valentino Stoll [00:14:57]:
How do you know that? I'm curious. Like, how do you get to the point where you're like, I know this thing can really go faster?
Peter Ohler [00:15:05]:
Yeah. If you start up, well, a web server is kinda unique because it can have requests coming in from multiple sources. So I started up 2, 3, 4 benchmarking tools and was able to hammer on, Agoo and see that all 3 of those benchmarking tools were pegged out, and I knew it could handle more than than each one of those individually.
Valentino Stoll [00:15:31]:
And what what protocols does this, currently support?
Peter Ohler [00:15:36]:
HTTP.
Valentino Stoll [00:15:37]:
Just, like, HTTP/2?
Peter Ohler [00:15:39]:
Oh, no. It's just HTTP and and HTTPS.
Valentino Stoll [00:15:47]:
Okay.
Peter Ohler [00:15:48]:
It it has not been upgraded to use HTTP/2 yet, so it it doesn't handle some of the more, well, some of the newer features. Yep. And,
Ayush Nwatiya [00:16:01]:
what about WebSockets? I'm guessing if you have GraphQL streaming, as you said, then it supports WebSockets as well. Right?
Peter Ohler [00:16:08]:
It does support WebSockets. Yes.
Valentino Stoll [00:16:12]:
So cool. So I'm I'm curious, like, the the real life, you know, how's it how's it handled in in production? Right? Like, are you running this in production? Are you using it for, like, massive scale yet? You know, how is it faring?
Peter Ohler [00:16:30]:
Yeah. I've you know, I have to just rely on the users to tell me that. And as you may be aware, open source projects typically don't get a lot of feedback except for issues. So you get complaints, but you don't get any of the, any of the things that say, hey. It's doing great. You know? I like it a lot or something like that. It's more about issues. Often, they start out with, I really like it.
Peter Ohler [00:17:02]:
But Yeah.
Valentino Stoll [00:17:04]:
I mean, honestly, this is like a I I love like, I again, it's circling back to GraphQL. Like, I feel like as, like, you know, it's I'm torn because on one hand, most Ruby apps are are gonna be using Rails in some capacity for their data connection. Right? And so, it would be nice to just, like, quickly get a GraphQL server up and running in a Rails context. Is it fairly straightforward to, like, connect all those pieces together and mount this, like, on its own subprocess from Rails, or in its own process? What does, like, that path to success look like?
Peter Ohler [00:17:56]:
I haven't, I haven't done a lot with rails myself.
Valentino Stoll [00:18:00]:
Okay.
Peter Ohler [00:18:02]:
As you might as you know, it's not a high performance system. So I I tend to steer away from it. So
Valentino Stoll [00:18:10]:
I'm curious then if you do if you're not using Rails, what is your, like, preference for, like, database in combination with the GraphQL side?
Peter Ohler [00:18:20]:
Wrote my own.
Valentino Stoll [00:18:30]:
I have a feeling that will be the bottleneck.
Peter Ohler [00:18:38]:
Right? Mostly, quite honestly, I'm I'm doing most of my latest work in Go. But the databases I'm typically using are either Mongo or, Redis.
Valentino Stoll [00:18:51]:
Okay. Do do you find that, either of those would be, like, probably most performant alongside of Agoo?
Peter Ohler [00:19:03]:
Depends what you're trying to build or what you're trying to store. Redis is probably high performance or higher performance, but it's a little bit more restricted in what you can store or how you store it in there. Mongo gives you a lot more flexibility in terms of query capabilities and, yeah.
Valentino Stoll [00:19:24]:
I'll admit it's been a long time since I've used Mongo.
Ayush Nwatiya [00:19:28]:
I'm
Valentino Stoll [00:19:28]:
curious, you know, since since you have experience with that, like, how how is the, the pace kept up for the for, for Mongo? Is it still, like, fairly performant?
Peter Ohler [00:19:40]:
Oh, yeah. I would definitely it's it's only getting better.
Valentino Stoll [00:19:43]:
I I forget why people decided to switch off of it, to be honest.
Peter Ohler [00:19:47]:
Yeah. I'm not I don't know why they would. We're still using it quite heavily in in both in Go and in Ruby. But, for Ruby, it's kinda nice because you store everything in JSON. So you take your Ruby object, you encode it into this JSON, and you store it. You fetch it back.
Valentino Stoll [00:20:07]:
Yeah. I mean, I remember and
Peter Ohler [00:20:09]:
you're ready to go.
Valentino Stoll [00:20:10]:
I remember my first RailsConf. It was wildly popular, definitely amongst Groupon as an example, which, you know, is is much smaller now. And LivingSocial used to be a thing. I don't know if they're still around, but right. I don't know why Mongo kind of dropped off in the Rails and Ruby ecosystems. It's definitely not as prominent, but, right, I I always liked it because you could just like dump data at it, and it it could handle it.
Peter Ohler [00:20:41]:
Right. Exactly.
Valentino Stoll [00:20:42]:
Yeah. It reminds me I have a friend that's, he loves CouchDB, for the same reason.
Ayush Nwatiya [00:20:49]:
He's just
Valentino Stoll [00:20:50]:
like, you know, relax. Just get just go on the couch.
Ayush Nwatiya [00:20:52]:
Drop it in there.
Valentino Stoll [00:20:58]:
That's funny. So it's interesting. So, I'm curious, like, then, do you find it kind of, like, easier to connect to GraphQL's, like, typing system because you're using Mongo in those cases, because it's so flexible? And I imagine the data structures are similar.
Peter Ohler [00:21:19]:
The, well, for GraphQL, the the way Agoo works is the data structures are just Ruby objects. So, yeah, it's easy. It's a Ruby object. It's easily encoded in JSON, which is easily stored in Mongo and vice versa. So it makes it easy to say, hey. Get me all my all my songs. And you just do a query on the the song database in Mongo, and, there they are. You pull them back out, and they're already decoded, and your work is done.
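That round trip is easy to sketch. This uses the stdlib JSON module as a stand-in for Oj and an in-memory string in place of Mongo; the `Song` class and its fields are made up for illustration.

```ruby
require 'json'

# A plain Ruby value object standing in for a GraphQL data type.
Song = Struct.new(:name, :artist, :duration)

song = Song.new('Flying Fish', 'Example Artist', 215)

doc  = JSON.generate(song.to_h)                 # what would be stored in Mongo
data = JSON.parse(doc, symbolize_names: true)   # what comes back from a query

restored = Song.new(data[:name], data[:artist], data[:duration])
```

With Oj the `dump`/`load` calls play the same role as `JSON.generate`/`JSON.parse`, just faster.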
Valentino Stoll [00:21:56]:
Oh, that's really cool. I'm looking at your example now with the artist and everything. Oh, that's really cool.
Peter Ohler [00:22:06]:
Yeah.
Valentino Stoll [00:22:06]:
Oh, man. I'm gonna play with this more. So I see you have locks in, in your mutations. It I I imagine there's a lot of, like, internals happening in Agoo that require locking mechanisms.
Peter Ohler [00:22:25]:
Actually, those locks are are simply because you have well, because you're getting HTTP requests, it allows you to have multiple requests in process at the same time.
Valentino Stoll [00:22:37]:
Oh, I see.
Peter Ohler [00:22:39]:
Now so while Agoo can handle that nicely, the the Ruby side of things can't. So, it may it may fire off multiple requests for the same object. So unless you put a lock on the resource, then, you know, you're liable to get collisions.
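The resource lock a mutation resolver needs when requests arrive from several threads at once can be a plain Mutex. This is an illustrative sketch, not Agoo's own locking code; `SongStore` is an invented class.

```ruby
# A store that several request threads may mutate concurrently.
class SongStore
  def initialize
    @songs = []
    @lock  = Mutex.new
  end

  # Without the lock, two concurrent adds could interleave mid-update.
  def add(song)
    @lock.synchronize { @songs << song }
  end

  def size
    @lock.synchronize { @songs.size }
  end
end

store   = SongStore.new
threads = 8.times.map do |i|
  Thread.new { 100.times { |j| store.add("song-#{i}-#{j}") } }
end
threads.each(&:join)

store.size  # => 800
```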
Valentino Stoll [00:22:59]:
Oh, this is really interesting because one of my biggest complaints of, like, rails and, like, a background queuing processing system is, like, you have to, like, load your entire application just to run a worker, which seems so bizarre to me, which I know there are, like, some optimizations that are made, right, on the, on the constant loading side, to help ease the pain of that, which has gotten better, but still, like, not ideal. Does does Agoo kind of, like, help with that? Like, have you have you had a good experience, like, with, like, background workers and Agoo, like, as a as an ecosystem?
Peter Ohler [00:23:45]:
Coupled with the database, it works great. You know? Yeah. You can get very high throughput, and your your bottleneck becomes the database, which is is kinda what you want. You know, you don't want your the bottleneck to be your application. The, you know, one of the things that I I did notice early on was that the rails is great for putting together prototypes and, you know, for setting up you know, I want these windows to display my data. Fantastic. But if you're looking for high performance, you know, down the road, it becomes more and more difficult to to get that performance with the rails layers on top of it. So if you can break your application up to have a high performance back end with, you know, something that does just the view, making use of rails, that seems to be a good way to make it work.
Ayush Nwatiya [00:24:49]:
So when when you're writing, web applications, with Agoo, do you use any framework or do you just write, like, rack code, directly for the server?
Peter Ohler [00:25:02]:
I I don't even use rack.
Ayush Nwatiya [00:25:05]:
Oh, okay. Fair enough. I can see you all, like, going down a level even further than that then.
Peter Ohler [00:25:13]:
Exactly. Yeah. The only thing I struggle with is yeah. I'm not very good with making a pretty UI and dealing with CSS and that kind of thing. So, yeah, it'd be nice to have somebody hold my hand on that, which is kinda what Rails does. But yeah.
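Going below Rack with Agoo means registering plain handler objects on routes. The handler below is ordinary Ruby; the wiring in the comment follows the Agoo README and requires the agoo gem, so treat it as a sketch.

```ruby
# A handler for Agoo's direct (non-Rack) API: any object with #call
# that returns the familiar [status, headers, body] triple.
class HelloHandler
  def call(_req)
    [200, { 'Content-Type' => 'text/plain' }, ['hello world']]
  end
end

# Wiring, per the Agoo README (runs only with the agoo gem installed):
#
#   require 'agoo'
#   Agoo::Server.init(6464, '.', thread_count: 0)
#   Agoo::Server.handle(:GET, '/hello', HelloHandler.new)
#   Agoo::Server.start
```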
Valentino Stoll [00:25:33]:
I'm curious what I'm I'm curious what your, like, what your bottlenecks were, like, in Ruby itself. Right? Like, what what kind of, like what when you're trying to test or, like, hit the limits of the performance, like, aside from the GVL, like, what, like, kind of bottlenecks were you hitting just like in the Ruby ecosystem, to push things further?
Peter Ohler [00:25:59]:
The so it's a web server, and the first thing it does after it gets well, after it processes a request, it hands it off, which calls into Ruby. After that, you know, I'm kinda hands off. It's whatever the, you know, whatever, the the application developer has written, that's gonna have the impact on the performance. So, really, I just care about getting stuff in and getting the results back. So if you've got a a large, you know, Ruby result and you wanna convert that to JSON, that'll take some time. Even with OJ, it's still more overhead than than just saying here's a bunch of text.
Valentino Stoll [00:26:47]:
Right. So are you not doing any, like, connection pooling or anything like that to, like, keep the connection open with, you know, for multiple requests kind of thing?
Peter Ohler [00:26:58]:
Oh, yeah. It does it does keep the connection open.
Valentino Stoll [00:27:02]:
Okay.
Peter Ohler [00:27:03]:
And it, you can set the time out so that, you know, after a minute, it'll it'll drop it if there's no activity.
Ayush Nwatiya [00:27:11]:
Do you have any, examples of, like, real world usages where, where you're using Agoo and it solved problems that couldn't be solved in another way?
Peter Ohler [00:27:24]:
Yeah. You're asking the wrong person. I wrote the tools.
Ayush Nwatiya [00:27:28]:
Fair enough.
Valentino Stoll [00:27:34]:
I'm curious where, like, you wanna take take this to. Right? Like, I can imagine a whole bunch of use cases for it myself, but, do you have any, like, direction that, you plan to continue, pushing it to? Or I
Peter Ohler [00:27:51]:
don't see pushing it in any particular direction right now. It's it's kind of stable the way it is. I guess the next step would be, you know, HTTP/2, but I haven't heard anybody really complaining a lot about that.
Valentino Stoll [00:28:05]:
Yeah. I mean, all I could think is people trying to hook this up to Action Cable. Or, or I guess I guess, probably AnyCable at this point.
Ayush Nwatiya [00:28:16]:
Yeah. I think the biggest, I think the biggest advantage of having, HTTP/2 support would be just multiplexing because you can if you're requesting assets, then you can send multiple assets down the same TCP connection. Whereas with HTTP/1.1, they're all individual requests. I think for, like, web application developers like myself, that's been the biggest, attraction to HTTP/2. It's just multiplexing.
Peter Ohler [00:28:45]:
Right. And, of course, with Agoo, connections are pretty cheap. You could actually open up multiple connections to Agoo, and it'll process the requests in parallel. And I I have seen some people do that.
Valentino Stoll [00:29:02]:
Is that process fairly straightforward just like opening up new connections to it?
Peter Ohler [00:29:07]:
Yeah. Just open up a new connection.
Valentino Stoll [00:29:08]:
Open up a new connection.
Peter Ohler [00:29:10]:
Yeah. So
Ayush Nwatiya [00:29:12]:
Yeah. The bottleneck with, the bottleneck that multiplexing solves is more on the client side because a browser will only, I think, open, like, 6 to 8 connections, at a time. Mhmm. So if you have a a vast number of assets that need to be downloaded, that's gonna be a client side bottleneck, which is what multiplexing solves because it'll just do it over a single TCP connection.
Peter Ohler [00:29:37]:
Right. Right. Yeah, I think that most, machines would have a hard time keeping up with a whole lot more than 6 or 8 connections. So
Ayush Nwatiya [00:29:54]:
Yeah. Yeah. Exactly. Yeah. That's why, I I don't know the specifics of the magic that HTTP/2 does, but, because it does it over a single connection, it can get all those assets down pretty fast. So that's kinda led to usage of things like import maps and things like that where you don't bundle your JavaScript. You have them as, like, 20, 30 individual files because suddenly getting multiple files down from the server isn't that expensive anymore.
Peter Ohler [00:30:30]:
Right. There's still still overhead. I I think where HTTP/2 helps on that is, well, let's say you're downloading large assets. You're pulling them from storage. There's gonna be delay as you're pulling each segment of those. So that lets you interleave it on the the same connection. If the server can provide the data fast enough so that there's no delays, then it it really doesn't provide any advantage over, having all the connections. I mean, your pipe is only so big.
Ayush Nwatiya [00:31:08]:
Yeah. Yeah. That's true. Exactly. I haven't played around a massive amount with this stuff myself. So I'm just talking from blog posts that I've read about the performance benefits of HTTP/2. I'm far from an expert.
Peter Ohler [00:31:27]:
Right. Oh, and it's also it it depends on the on what you're on what the application's doing with the data. You know?
Ayush Nwatiya [00:31:34]:
Yeah. Exactly.
Peter Ohler [00:31:35]:
If it's handling it all synchronously, then it really doesn't help. But if it can handle it asynchronously, then I could definitely see an advantage there. Oh, I
Valentino Stoll [00:31:45]:
see you have an example on server sent events.
Peter Ohler [00:31:49]:
Mhmm.
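The server-sent events support mentioned here works through Agoo's `rack.upgrade` extension. The callback names below follow that proposal as Agoo documents it; `ClockStream` is a hypothetical example class, so treat this as a sketch rather than the project's own example.

```ruby
# An SSE handler: a Rack-style app that volunteers itself for upgrade.
class ClockStream
  def call(env)
    if env['rack.upgrade?'] == :sse   # client asked for text/event-stream
      env['rack.upgrade'] = self      # the server will call on_open with a client
      return [200, {}, []]
    end
    [404, {}, []]
  end

  def on_open(client)
    client.write("data: #{Time.now}\n\n")  # push one SSE event
  end
end

env = { 'rack.upgrade?' => :sse }
status, _headers, _body = ClockStream.new.call(env)
```

On a plain (non-SSE) request the handler just returns a normal response, so the same object can serve both paths.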
Valentino Stoll [00:31:50]:
That's pretty cool. Interesting. So, I'm curious. We talked a little bit before the show about, your heavy Go use.
Peter Ohler [00:31:59]:
So
Valentino Stoll [00:31:59]:
I'm curious, just from your perspective there, kind of, you know, where where do you see, like, Go as providing more benefit over Ruby in in some of these performance, you know, metrics. Like, why use, like, a a Go web server as an example?
Peter Ohler [00:32:21]:
Yeah. Actually, the, for the the Go version, I actually wrote it with, as for a company, and they allowed me to open source it. So it's called GGql. G g q l. We just pronounce it giggle.
Valentino Stoll [00:32:43]:
That's pretty funny.
Peter Ohler [00:32:45]:
But on on the other the other side of that is OJ. There's OjG, OJ for for Go. And it includes JSONPath and and quite a bit of others. I guess the the reason I've been drifting more toward Go other than, of course, I'm using it for for the company I work with is it it is higher performance, and, honestly, it's easier to work with a larger team with it. It's more strongly typed, which helps. It it also because of the way it's set up, it's easier to set up packages and have different teams or different individuals working on different part portions of the code, and then bring it all together. So
Valentino Stoll [00:33:37]:
I see. Yeah. I always I I did like that about I only did a small amount of Go development, but Mhmm. Definitely the typing, was very helpful for, you know, for people getting up to speed with things quickly, and they just infer the types Yeah. And no
Peter Ohler [00:33:55]:
Sometimes. Yeah. Sometimes annoying. Yeah. But I'm
Valentino Stoll [00:34:03]:
curious, like, you know, what about what about Ruby, is limiting for, like, larger teams, like, working on the same thing? Is it just, like, the packaging ecosystem of Go that's more, beneficial in that respect?
Peter Ohler [00:34:17]:
Yeah. I think, Go has a little more it's a little more structured development environment. So there's a lot of tools that help you, you know, measure coverage, do benchmarks, and it kind of enforces testing. Ruby is and rightly so. It's it's a little more free form. It's great for small projects. I like it if I'm writing something for myself. Well, I work in the US and Canada.
Peter Ohler [00:34:54]:
So, I had a horrible time finding a bookkeeping system that would work. So I wrote my own, and, of course, I wrote it in Ruby. I there's no way I was gonna attempt that in Go. Ruby is just easier, more fun to work with.
Valentino Stoll [00:35:11]:
Yeah. It
Peter Ohler [00:35:11]:
just doesn't scale as nicely with with larger teams.
Valentino Stoll [00:35:16]:
That's fair. Yeah. So I'm curious other details. Like, maybe, if, like, about OJ specifically, is there something more performant about OJ in Go over Ruby?
Peter Ohler [00:35:33]:
Well, yeah. There's there's less overhead in creating the objects. I think that's probably the biggest difference. I actually use a similar approach, a single pass parser to do the parsing, which helps a lot. A lot of tweaking there trying to figure out. It helps you learn the language a lot when you when you test out different approaches to solving the problem. Now the overhead of a function call, the overhead of a number of arguments, a number of return arguments, passing function pointers, all those things come into play.
Valentino Stoll [00:36:10]:
I gotcha. So would you say, like, Ruby objects are bloated in comparison?
Peter Ohler [00:36:15]:
No. It's, they're different.
Valentino Stoll [00:36:20]:
What are the what are the mechanisms? I'm I'm just curious. I don't know their internals well enough.
Peter Ohler [00:36:26]:
Yeah. So so think of c. You can write c code, and and you can have structs, and, you know, you can attach functions to them, and and they kinda look like objects. With Go, you don't have to do as much work under the covers to make your objects. You know, the same thing, you've got attributes. The inheritance mechanism is a little, you know, less I don't like it as much as as I do Ruby's, but Ruby is, much more flexible and much more powerful in terms of inheritance and the way you can embed or include code when you're working. Honestly, it reminds me a lot of Lisp, which is that's my first serious language. Now, you know, past BASIC that is.
Peter Ohler [00:37:25]:
And Lisp Flavors, and nowadays CLOS, have a a lot of the things that you get in Ruby. And I I suspect that that's where some of that came from, is probably looking at Lisp and seeing, oh, yeah. That that works.
Ayush Nwatiya [00:37:45]:
So, how does the, OJ gem, differ from the built in JSON parser in Ruby? The one in the standard library, the JSON gem, is that written in Ruby or is that a c extension? Because I know OJ is a c extension, isn't it?
Peter Ohler [00:38:02]:
It is. Yes. There's a the JSON gem, well, originally, it was in Ruby, and then with a c extension. And I think that's still true. I I don't know if the Ruby part of it is still there. I think it's all in the extension. But we took different approaches to the problem. So, you know, one of the and I've complained about this before.
Peter Ohler [00:38:28]:
I shouldn't complain, but one of the things that, the JSON gem encourages is monkey patching. So, basically, if you want the feature of being able to decode or encode your object, you basically have to modify the class, which means that if somebody else comes along and says, well, I want this other encoding system. Oh, and I picked the same names for encode. Yeah. Now we have collisions because you try and monkey patch and one overrides the other. The approach that I took with with, with OJ was that OJ is a separate package. It'll look at any object. You don't have to modify that object to to encode it.
Peter Ohler [00:39:16]:
You basically leave the object alone. It's not yours. Don't mess with it. And that's kind of the approach I took.
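The two styles being contrasted can be shown side by side with the stdlib JSON module. `Point` and `SeparateEncoder` are invented names for illustration; the second style mirrors Oj's approach of keeping the encoder outside the class.

```ruby
require 'json'

Point = Struct.new(:x, :y)

# Style 1: monkey-patch the class. Two libraries that both want to own
# #to_json now collide on the same method name.
class Point
  def to_json(*args)
    { x: x, y: y }.to_json(*args)
  end
end

# Style 2: keep the encoder separate and leave the object untouched.
module SeparateEncoder
  def self.dump(obj)
    JSON.generate(obj.to_h)
  end
end

SeparateEncoder.dump(Point.new(1, 2))  # => '{"x":1,"y":2}'
```

With the separate encoder, nothing on `Point` changes, so another library defining its own `to_json` cannot break it.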
Ayush Nwatiya [00:39:28]:
Yeah. Monkey patching's a bit, yeah. It it's quite divisive. I'm a fan of it in certain context, but, completely see the point you're making there.
Peter Ohler [00:39:38]:
Right.
Valentino Stoll [00:39:39]:
Yeah. I was I was hopeful that that refinements would have fixed all of this. It doesn't quite work the same way.
Peter Ohler [00:39:47]:
No. It's a it's a design decision that was made. Once you once you're down that road, you, you know, you can't really have it both. Well, you can have it both ways, but some things you can't change.
Valentino Stoll [00:40:01]:
Yeah. That's one thing I I miss from, other languages is method overloading, and Mhmm. For for this reason. Right. Right? It makes it very hard to to do it without monkey patching, to be honest.
Peter Ohler [00:40:17]:
Right. Yeah. I mean, plug for Lisp and Flavors is, the before and after methods are nice. You could basically say before this gets called, do this. Or after it gets called, do that. So you could avoid the monkey patching issue.
Valentino Stoll [00:40:37]:
This is really cool. Alright. So, if somebody wants to, benchmark their, you know, web processes, and, like, do a comparison. Like, say, I have a Sinatra app that I want to swap out for Agoo. What what's what's your recommended path for that? Do do you have tools that you use, to do tests like that?
Peter Ohler [00:41:00]:
Well, yeah. On the web or on my, Readme page for Agoo, you see at the very bottom, there's Perfer, a performance measurement tool. And that's typically what I would use when I'm when I'm, benchmarking my my stuff or trying to make improvements on it. Tweak it here. Tweak it there. I'll use that to see if I've been successful or not.
Valentino Stoll [00:41:26]:
Do you prefer that over, like, Apache Bench or something like that?
Peter Ohler [00:41:33]:
Yeah. Actually, I don't know if Apache Bench was available when I first wrote this, but there is another tool that, escapes me. See if I can recall by looking at this. Yeah. I don't see it right now. There are some other tools out there.
Valentino Stoll [00:42:05]:
Perfer looks very similar to Apache Bench. I mean, it's you don't have to do much.
Peter Ohler [00:42:14]:
I mean, there's only a certain number of things that you that you really wanna do. So, you know, give a few options for how you control it, the the number of workers and requests per second and stuff like that. There's only so many things you really wanna do.
Valentino Stoll [00:42:35]:
I'm interested to know, like, because I see you have lots of middleware, set up here to make it easy to snap and plug into this, kind of framework. I don't know if you wanna call it framework.
Peter Ohler [00:42:49]:
Yeah. I'm not sure what you're calling it.
Valentino Stoll [00:42:53]:
But so I'm curious, like, because I foresee this as being, like, something you can quickly just, like, you know, hey, like, try this out, and we'll show you the benchmarks, and performance of using this over something else. Or, you know, how does is it pretty straightforward to, like, connect things to the middleware in a way like that where you can get observability or or things like that?
Peter Ohler [00:43:24]:
Well, for middleware, it just uses the rack approach. Okay. So so it's I would it doesn't have any claim to be the middleware expert on that. It it just supports rack.
Valentino Stoll [00:43:37]:
So can you use this as a rack middleware?
Peter Ohler [00:43:41]:
As a rack server. Yeah. It's a rack server. Yes. As a matter of fact, some of the examples do show that. Yeah. And I think it really boils down to when people are trying to build a Ruby, application web server.
Peter Ohler [00:44:00]:
It comes down to, are you gonna go rails? Are you gonna use rack, which is a little more low level? Or are you gonna just do it all on your own? And all of them are viable options.
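Since Agoo registers itself as a rack server, a plain config.ru is enough to take the middle path; per the Agoo README it can then be served with `rackup -r agoo -s agoo`. The `App` class here is a made-up minimal app.

```ruby
# config.ru: the smallest rack app, servable by Agoo, Puma, or any
# other rack server without changes.
class App
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["you asked for #{env['PATH_INFO']}"]]
  end
end

# `run` only exists inside rackup's builder context, so guard it to
# keep this file loadable as a plain script too.
run App.new if defined?(run)
```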
Valentino Stoll [00:44:15]:
Do you recommend using this for, like, custom sockets?
Peter Ohler [00:44:20]:
Explain that a little more detail.
Valentino Stoll [00:44:23]:
Sure. Like, if I just wanna connect, on on an arbitrary TCP socket or UDP or something like that.
Peter Ohler [00:44:32]:
Oh, yeah. It does actually support that. It actually supports a couple different models. So, for example, let's say that, let's say you have a a Rails application, and you want it to be able to get data this is completely contrived. Get data, right, by using GraphQL and and hitting some other server somewhere else. And then you could or even the same one. You could even open up a file descriptor and use that as your socket and Oh, cool. Exchange data that way.
Peter Ohler [00:45:08]:
So, you know, you might set that up as your data store, and you just keep keep a great big hash map in there. You know, obviously, in memory, so it's you lose it if it goes if it goes down. But, but you can then connect with a a lower level socket than TCP and get your data that way.
Valentino Stoll [00:45:33]:
So I'm curious, like, you know, how how long has this been out? Like, how long are you working on it? And, like, what is you know, how do you get this out there? You know, like, how does, have you gotten, like, a a good amount of contributors to it? And, you know, is it pretty pretty straightforward to manage at this point? Like, it's very cleanly written. I'm just curious, like, you know, what your experience is, working on it as an open source project.
Peter Ohler [00:46:05]:
Generally, what I've seen with the open source projects is you get contributors that want to, like, fix a bug or identify an issue, but they typically don't get in and say, hey. I wanna add this great this brand new big feature. It's typically just a little bit, you know, you have a spelling mistake or, you know, it crashes when you when you look at it backwards. So, you know, I can fix that. But, you know, that's pretty much the extent of, contributors so far on the stuff that I've been working on. Yeah. I guess the the biggest, and it's not really a complaint. My biggest wish, right, for open source is I wish there was some way of getting feedback from people that are using it, how they're using it, how they like it, but there doesn't really seem to be any nice path to do that.
Peter Ohler [00:47:02]:
There's a nice path for yeah. Here's issues, but there's not a nice path that says I'm using this and it works great, or have you tried using it in this way? It would probably be useful for for folks, you know, to see how other people are using different open source packages and see if it fits any of the things that they're trying to do.
Valentino Stoll [00:47:28]:
Yeah. I have been seeing a an recent influx in, like, open I I don't know what they call it specifically, but, basically, a lot of repositories will turn on by default, like, usage analytics and metrics, in the libraries, which, you know, they maybe isn't the best solution to this problem. But
Peter Ohler [00:47:53]:
Right. And there's discussions, at least for GitHub anyway. There's a discussion category. And I've seen a few, a little bit of that, less on, OJ and Agoo, but more on some of my, Go projects. There seems to be a lot more I don't know. Maybe it's just a different kind of crowd that goes to that different types of languages. I I have no idea.
Valentino Stoll [00:48:23]:
That's funny. Yeah. I mean, I'm curious, how do you like discussions? Have you found value in it over just, like, the issues workflow?
Peter Ohler [00:48:33]:
Yeah. I have, actually. You know, with a discussion, you don't have that pressure of, I gotta fix this bug. It can be, what do you think about this? And it's like, well, maybe not that, how about something like this instead? And, you know, you get more people involved. So it's, yeah, a little more friendly, I guess, or a little less pressure.
Valentino Stoll [00:49:00]:
Less pressure. Yeah. I could see that.
Peter Ohler [00:49:07]:
But having said that, some of the projects, OJ, for example, I've got a couple of folks that contribute fairly regularly and find a little tweak here, a little tweak there, you know, to help make it a little bit better. And that's kinda nice.
Valentino Stoll [00:49:27]:
So what does the release process look like for something more along the lines of Agoo, where I guess you don't have too many use cases specifically yet? You know, do you foresee that being maybe a different release process than something like OJ?
Peter Ohler [00:49:47]:
No. I try and follow the same release process. You know, I keep a changelog and try and follow the standards for that. I branch off of development to make my feature branches, merge them into development, and when it's time for a release, I'll merge into master, tag it, and then do the release. I try to use the same practice across the board.
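The flow Peter describes can be sketched as a short script. This is only a hedged illustration, not his actual tooling: the branch names (`develop`, `master`), the feature branch, and the version number are all assumptions for the example.

```ruby
# Sketch of the release flow described above: feature branches come off a
# development branch, merge back into it, and at release time development
# is merged into master, tagged, and released. All names are hypothetical.
VERSION = '1.2.3'

steps = [
  'git checkout develop',
  'git merge --no-ff feature/my-change',              # feature lands in develop
  'git checkout master',
  'git merge --no-ff develop',                        # promote develop to master
  "git tag -a v#{VERSION} -m 'Release v#{VERSION}'",  # annotated release tag
  'git push origin master --tags',                    # publish branch and tag
]

# Print the plan; swap `puts` for `system(cmd) or abort(cmd)` to execute it.
steps.each { |cmd| puts cmd }
```

The changelog update he mentions would typically be committed on the development branch before the merge into master, so the tag captures it.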
Valentino Stoll [00:50:20]:
So I can't help but think, you know, ago is the Japanese word for a flying fish, right? And you said you spent some time in Japan yourself. This seems to follow the Ruby pattern I've seen more in Japan, where it's, you know, much less Rails-heavy and very much Ruby-forward.
Peter Ohler [00:50:46]:
Right.
Valentino Stoll [00:50:46]:
Sinatra, definitely. At least years ago when I attended RubyKaigi, you know, Sinatra was more the framework of choice than Rails itself. Right?
Peter Ohler [00:51:00]:
Right.
Valentino Stoll [00:51:01]:
So do you see, like, maybe more people from the source of Ruby, the origins of it, maybe picking up steam more and jumping on Agoo's adoption?
Peter Ohler [00:51:21]:
You know, definitely for OJ and Ox, I think there's a bigger following in Japan than there is in the US. I mean, again, this is just based on issues and pull requests. Right. But it seems to be more active in Japan. Agoo, hard to say. That seems to be a little more international. It may be that Japan hasn't picked up on GraphQL as much.
Peter Ohler [00:51:54]:
But, again, you know, without that feedback from users, it's kind of hard for me to say. But I do remember, so the name, Agoo, I came up with that. My wife and I were taking a vacation on the northern side of Japan and, driving along, you know, stopped at this little place. And I like dried fish. So I got some dried fish, and I'm eating it and thinking about how I was gonna do this server. And that's when it was, oh, ago, flying fish. It flies.
Peter Ohler [00:52:32]:
It's so fast. So that's where it came from. The epiphany occurred driving in northern Japan.
Valentino Stoll [00:52:46]:
So I'm curious too, just circling back to the GraphQL aspect of this. Does it implement the full GraphQL spec? Is there anything missing, client-library or connection-wise?
Peter Ohler [00:53:00]:
No. As far as I know, it's the full spec.
Valentino Stoll [00:53:04]:
I'd be interested to know.
Peter Ohler [00:53:06]:
There may be something somewhere. But yeah, if you find something where it isn't, then let me know, and I'll get it in.
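For readers who haven't used GraphQL, "the full spec" covers more than plain field selection. As a generic illustration only (the types here are invented for this example and are not taken from Agoo's documentation), a schema in SDL plus a query exercising variables and a fragment looks like this:

```graphql
# Illustrative schema definition (SDL); all types are hypothetical.
interface Named {
  name: String!
}

type Artist implements Named {
  name: String!
  songs: [Song!]
}

type Song {
  title: String!
  duration: Int
}

type Query {
  artist(name: String!): Artist
}

# An operation against that schema, using a variable and a fragment.
query GetArtist($name: String!) {
  artist(name: $name) {
    ...named
    songs {
      title
    }
  }
}

fragment named on Named {
  name
}
```

A "full spec" server has to handle all of these constructs, plus mutations, subscriptions, introspection, and directives, which is what makes spec completeness a meaningful question.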
Valentino Stoll [00:53:11]:
I would love to see benchmarks of this against GraphQL Ruby because, to be honest, it could make a great kind of showcase of, hey, use this over that.
Ayush Nwatiya [00:53:24]:
Well, there
Peter Ohler [00:53:25]:
I think I have a link. The web frameworks benchmarker, I believe, compares it to regular GraphQL Ruby.
Valentino Stoll [00:53:40]:
Oh, okay.
Peter Ohler [00:53:41]:
I know that's changed a bit over the years too, so I'm not even sure where everything stands. I know I haven't been updating it with my latest version, so it's maybe a little bit long in the tooth.
Valentino Stoll [00:53:54]:
Hey, I'll give it a try.
Ayush Nwatiya [00:53:56]:
So all these open source projects which you run, and it's quite a few of them: are they just, like, hobby projects? Do you take sponsorships from the community? Or are they for businesses that you work for or work with?
Peter Ohler [00:54:15]:
They're hobby projects. Yeah. Originally, OJ and Ox were for a company, KVH, in Japan. Actually, I don't think KVH exists anymore; I think it was subsumed by somebody else. But otherwise, yeah, just for hobby. I like writing code. Fair enough.
Peter Ohler [00:54:42]:
My relaxing time is, after I stop working, I sit in front of the TV with my wife and write code.
Valentino Stoll [00:54:55]:
I love that. Your peaceful time, you know, raking the Japanese sand garden, just pumping out some super-performant code.
Peter Ohler [00:55:15]:
Well, I do try to get a little bass practice in every once in a while too.
Valentino Stoll [00:55:20]:
Well, we've talked about a lot here. I'm excited to try out a ton of this stuff and see how easy GraphQL is to work with in Agoo, because that is definitely a great use case for it. Is there anything else you wanted to talk about, you know, before we go to picks here?
Peter Ohler [00:55:41]:
Nothing comes to mind. This is kind of a whole new experience for me, so I'm unsure what was expected.
Valentino Stoll [00:55:49]:
Yeah. I mean, if anything, it gets people exposure to really lower-level server stuff, because you don't need too much to get something out there. And Agoo definitely makes it easy to do, and, you know, stick to the basics, really. Cool. So we have a segment at the end here where we kind of just pick a couple things, or one thing, and it could be anything. It doesn't have to be code. It doesn't have to be anything in particular. Just pick something that, you know, you wanna share with the world, share with the Ruby community here. Could be anything.
Peter Ohler [00:56:33]:
You should've warned me.
Valentino Stoll [00:56:33]:
We can give you some time. Ayush, do you have anything you wanna share? I can go if you don't.
Ayush Nwatiya [00:56:40]:
Yeah, I got a couple of things. One is the TV show Better Call Saul, which I'm binging again for the fourth time, I think, which is a prequel to Breaking Bad. It's my favorite TV show of all time. Because I'm binging it, it's at the forefront of my mind, so that's one of my picks. And the other one is a movie I saw last week on Netflix called Unfrosted, which is made by Jerry Seinfeld, and which is the most unbelievably stupid, nonsensical load of bollocks I've ever seen in my life. But it was thoroughly entertaining.
Ayush Nwatiya [00:57:18]:
It's one of those movies where you have a couple of glasses of wine, throw your brain away, and just switch off and watch it. It is such drivel, but it is so funny. It's far from a good movie, but it's one of those movies that's good because it's so bad.
Valentino Stoll [00:57:38]:
Oh, that's funny. I'm gonna have to check that out. That sounds great. I guess I can go. I've been playing with a lot of AI stuff, as always, lately, and I got the Rabbit R1, which there's a lot of flak out there about. And there are some things it's not great at, but it does great at transcribing stuff, and I'm using it for that purpose, summarizing meetings and things like that. So I am finding use cases for it. The vision is a little lacking.
Valentino Stoll [00:58:20]:
But I don't know. It's kinda fun. It's an interesting device. So I will plug it. I don't know if it's worth it for everyone, but I'm having fun with it. And, yeah, the other thing: there is a large language model trainer application where you basically can point a Hugging Face dataset at it, and it'll automatically fine-tune a model for you, which is really interesting. So I've been toying around with that, playing with stuff to train and fine-tune some models based on different conversations, which is kinda funny.
Peter Ohler [00:59:08]:
Have you been using the Apple MLX?
Valentino Stoll [00:59:13]:
I haven't, mostly because I just bought this giant, beefy machine running GPUs, so I'm running things on GPUs right now. But I am interested to check that out. I only have access to an M2 here, so I don't know how much performance I'll get out of that. But have you messed around with that at all?
Peter Ohler [00:59:40]:
Yeah. I've got a Studio. It's an older one, the M1 Ultra, but it seems to run just fine.
Valentino Stoll [00:59:50]:
Nice. Have you run, just inference on it, or have you tried tuning stuff?
Peter Ohler [00:59:56]:
I've tried tuning. I haven't tried, you know, full-blown training. That's kind of not what it's designed for.
Valentino Stoll [01:00:04]:
Almost why bother. Yeah. Yeah. It's okay. Yeah.
Peter Ohler [01:00:07]:
A year from now, it might be done. Right.
Valentino Stoll [01:00:12]:
That's funny. I'm curious what you use for your inference. Do you use, like, llama.cpp, or do you have something else?
Peter Ohler [01:00:20]:
I have used llama.cpp. Still trying to figure out what's best for what I'm doing for work. Yeah, the hospitals, we're trying to use it for various applications that I probably can't get into. Sure.
Valentino Stoll [01:00:39]:
Well, cool. Do you wanna share some picks, Peter?
Peter Ohler [01:00:45]:
I guess, like, what kind of picks?
Valentino Stoll [01:00:49]:
Maybe one of the basses on your wall?
Peter Ohler [01:01:01]:
Listen.
Valentino Stoll [01:01:03]:
Oh, that is really cool. If you're not seeing this right now, it looks to be a custom travel bass.
Peter Ohler [01:01:12]:
It actually started out with, so I've been taking bass lessons from a guy named Mike McAven. He was the lead guitarist for Gypsy Rose. But I bought this and told him I was trying to build a travel bass. And he says, oh, I've got this old junker here, all kinda busted up. So that became the neck. I cut off the head, made the body, and, yeah, you can see it behind it. So the strings basically go around, come up here, and go to the tuners so you can adjust it.
Peter Ohler [01:01:54]:
This here is for sitting on your leg.
Valentino Stoll [01:01:58]:
That is so cool.
Ayush Nwatiya [01:02:01]:
So you built that? It's unbelievable.
Valentino Stoll [01:02:04]:
Yeah. If you can't see this, the tuners are at the bottom of the bass, and there's, like, a leg rest. It's really neat. Pretty inspirational.
Ayush Nwatiya [01:02:14]:
What's the
Peter Ohler [01:02:16]:
The neck comes off so that I can put it in a backpack.
Valentino Stoll [01:02:18]:
Oh, wow. That's cool.
Ayush Nwatiya [01:02:20]:
What's the wood that the body is made of?
Peter Ohler [01:02:23]:
Oh, it's just made of oak.
Ayush Nwatiya [01:02:25]:
Oh, okay.
Valentino Stoll [01:02:30]:
That's beautiful. How long did it take you to build that?
Peter Ohler [01:02:37]:
Not that long. You know, off and on over maybe a month or so.
Valentino Stoll [01:02:44]:
Nice. Alright. Do you have any plans to make more basses?
Peter Ohler [01:02:49]:
I don't know. I've got the other two, our upright basses that you see on the wall there, and I made those. So there's probably another one somewhere in the future.
Valentino Stoll [01:03:02]:
That's really cool.
Peter Ohler [01:03:03]:
I also build bicycles, so that's why I could do the metal work as well as the woodwork.
Valentino Stoll [01:03:08]:
Nice.
Peter Ohler [01:03:11]:
Got it.
Valentino Stoll [01:03:11]:
Just the simple stuff, huh? You know, not even bikes are optimized enough for you.
Peter Ohler [01:03:24]:
Yeah. Actually, I go the other way on the bikes. I keep it simple. Single speed. So
Valentino Stoll [01:03:29]:
Love it. Well, Peter, it's great talking to you today. You know, thank you for sharing your experience and Agoo with us. I definitely am gonna dive in myself and see how fast I can get things to go. And, you know, again, thanks for all the work that you do.
Peter Ohler [01:03:49]:
Appreciate it. I'd love any feedback. Thanks for having me on the show.
Valentino Stoll [01:03:53]:
And if anybody wants to reach out to you or connect with you on the interwebs, how can they do that?
Peter Ohler [01:04:00]:
Email, peter@ohler.com. That's o h l e r dot com.
Valentino Stoll [01:04:06]:
Fantastic. And so until next time, folks. Valentino is out of here, and thank you for listening.