Streamlining AI Integration - JSJ 616
Ismail Pelaseyed is the co-founder of Superagent. They delve into the world of AI technology, open-source frameworks, and the practical applications of AI assistants. The conversation covers a range of topics, from the technical and philosophical differences between AI frameworks to the importance of pairing user-facing UI components with the power of AI. They also discuss the practical use cases of Superagent, its potential impact on the AI industry, and the challenges and considerations surrounding the deployment and monetization of open-source projects.
Special Guests:
Ismail Pelaseyed
Show Notes
Sponsors
- Chuck's Resume Template
- Raygun - Application Monitoring For Web & Mobile Apps
- Become a Top 1% Dev with a Top End Devs Membership
Links
Socials
- LinkedIn: Ismail Pelaseyed
Picks
- AJ - His & Her Bidet
- AJ - Ollama (Installer)
- AJ - Home Assistant
- AJ - Chaos Walking (Books)
- AJ - Market Saturation = 98.9% - What Now?
- AJ - Keychain Pin Tool
- Charles - Disney Chronology
- Charles - once.com
- Dan - Prometheus
- Dan - Which one is the un-React?
- Ismail - Fargo
- Ismail - Outlines
Transcript
CHARLES MAX_WOOD: Hey, welcome back to another episode of JavaScript Jabber. This week on our panel, we have Dan Shappir.
DAN_SHAPPIR: Hey, from a nice weather in Tel Aviv.
CHARLES MAX_WOOD: I'm Charles Max Wood from Top End Devs. It actually snowed here over the weekend. So we finally have a few inches of snow outside.
DAN_SHAPPIR: Just so you know, we're wearing t-shirts.
CHARLES MAX_WOOD: Yeah, well, we go skiing, but yeah, anyway. We have a special guest this week. It's he's my, um, I didn't get your last name on here, but, uh, Oh, here we go.
ISMAIL_PELASEYED: Oh, that's a hard one. You're going to have a hard time with that.
CHARLES MAX_WOOD: Hell, let's say Ed.
ISMAIL_PELASEYED: That is perfect.
Hey folks. This is Charles Max Wood. And I've been talking to a whole bunch of people that want to update their resume and find a better job. And I figure, well, why not just share my resume? So if you go to topendevs.com slash resume and enter your name and email address, then you'll get a copy of the resume that I use, that I've used through freelancing, through most of my career, as I've kind of refined it and tweaked it to get me the jobs that I want. Like I said, topendevs.com slash resume will get you that, and you can just kind of use the formatting. It comes in Word and Pages formats, and you can just fill it in from there.
CHARLES MAX_WOOD: So Ismail, do you want to introduce yourself real quick, let people know who you are and why we're excited to have you here?
ISMAIL_PELASEYED: Sure, sure. So as Charles said, my name is Ismail. I am the co-founder of an open source framework called Superagent. Superagent is a framework which allows any developer, regardless of their skill set, to create and integrate AI assistants into whatever type of application stack or environment that they are using to build their apps. So as you guys know, AI can be pretty complex and there aren't that many, you know, machine learning engineers out there. There are actually very few, and Superagent allows any developer to basically become a machine learning engineer and leverage all of the fantastic technology that's being developed every day without having to actually know that much. So we abstract away all of the tiny machine learning puzzle pieces that need to fit together in order to create an accurate, production-ready application, so that developers can, you know, focus on their users, focus on creating amazing user experiences, and, you know, tap into the vast possibilities of AI.
CHARLES MAX_WOOD: That's so cool. And I have to say that, um, a few years ago, we started a show on machine learning called Adventures in machine learning, which is much more on the engineering side of building models and stuff like that. And I thought, Oh, this is stuff that I really want to pick up. But I figured out that I am much more interested in building things with the engines that other people built. Yeah. I am in. Oh. I've got to wrangle this data too and somehow make it jive with this other data so that I can get a model that gives me the answer I want. So anyway, this is cool stuff and this is something I've wanted to talk about for a while.
ISMAIL_PELASEYED: Cool. I mean, originally I came from an open-source background. So I've been a contributor to many of the machine learning frameworks out there that people use. And one of the things that frustrated me a lot was that regular developers were having a really hard time using these frameworks, because it required so much knowledge of deploying this stuff and running it in production: how to get it accurate, how to get the responses you want, how to attach data to your AI model, how to have that model interact with third-party APIs. Everything like that. It was just a mess. So I just decided one day, I think it was in May. My co-founder was in Vegas playing poker and I had a week off and I just said, fuck it, man, let's sit down and do this. And so I took a week and I just coded everything, like the first version, in a week. And when it was ready, I was like, this feels good. You know, perhaps other people would want to use this. So I just decided to slap an MIT license on top of that and open-source it. And the feedback I got was amazing and it just blew up. And now we are taking it basically to the next level. So that's how it started. It wasn't meant to be, but it was meant to be, so to speak.
CHARLES MAX_WOOD: Yeah. I love those weekend projects. So-
DAN_SHAPPIR: The best things in life, you know?
CHARLES MAX_WOOD: Yeah. I'm trying to decide if we should start with the chatbot side or the AI engine, LLM or whatever side.
DAN_SHAPPIR: I think the best thing to start with actually would, first of all, I think it's worthwhile mentioning that before we started recording, you had a bit of interesting news to tell us about the project.
CHARLES MAX_WOOD: Oh yeah.
DAN_SHAPPIR: And I think it's worthwhile to highlight that upfront.
ISMAIL_PELASEYED: Yeah, let's do that. So I'm happy to announce, you know, that Superagent will be a part of the Y Combinator Winter 24 batch, which is like an amazing thing. One could say a dream come true. I think it was like 24,000 applicants or something, and, you know, we were chosen as one of, I don't know how many, but not that many. So it's an amazing thing. I'm situated in Sweden, I've lived here almost all my life, and there's a lot of snow here as well, as you might imagine. And I will be relocating to the States and San Francisco on Thursday. So I'm packing, I'm packing. And I just found out that my passport is going to expire in two months, so I have to fix that as well prior to leaving. But that's on my side. So really excited about that. You know, one of the great things with Y Combinator is that you can focus a lot on building something that's actually viable and can become something big in the future. And I'm really excited about that. Sitting here all the way across the Atlantic, basically, and only chatting with people on our Discord channel or having Zoom calls is great. But I think that being in the mosh pit of AI in person would be even better, and I could achieve more and talk to people who do this stuff all day, all of the developers and machine learning engineers that are working on this and basically, you know, building the future. That's my dream, you know, to talk to those guys and learn and adapt and try to incorporate whatever works into our framework. So I'm stoked. Oh yeah.
DAN_SHAPPIR: So obviously congratulations and kudos. And I wanted to add that one of the great things about being part of something like Y Combinator is that it's definitely not dumb money. It's as smart as money can get in that regard. So it opens a whole lot of doors, and it's not just getting the funding. It's also being part of the program, really pushing your project forward in terms of recognition and adoption and stuff like that. So I'm really excited for you guys and, as I said, kudos. And the second thing, which I think is a good starting point, is to kind of understand what Superagent might be used for. So if you can give a concrete example of one or two or maybe even three things that, you know, Superagent makes easy to do that would otherwise be, you know, much more challenging.
ISMAIL_PELASEYED: Yeah, sure. I'll give you two examples. I'll give you a personal example and then I'll give you an enterprise example as well. So personally, I use Superagent every day. And as any open-source maintainer will tell you, the community is everything. When you're building an open-source project, the community is everything, and that's one of the main reasons why you open source. You want feedback, you want contributions. You want a bunch of stuff from the target audience that you're trying to, you know, build a product or a framework or whatever it is for. And when you get to a certain stage, when you have a critical amount of contributions being made, it's like having 50 employees. It takes a lot of time to go through contributions, to talk to users, to see what developers want, how we should build whatever they want, what ideas they have, and what code they've written. All of that is really time-consuming. One of the most time-consuming things is doing code reviews. So reviewing other people's code; they're not actually a part of your team, they're just open-source contributors. If I would estimate, that takes me around 10 to 15 hours a week, only reviewing other people's code. So what I did was set up an assistant, as we call it. An assistant is basically an agent or a language model that has a bunch of tools connected to it and can be trained on different data. So I trained my assistant, which I call Shuriken. It's a Japanese name, because Superagent has this ninja thing and a Japanese aura around it. Everything we do, we give it Japanese names, basically. So Shuriken, she's an assistant that I built using Superagent, and her sole purpose is to do code reviews for contributions that we get from open-source people. And she does that so well that even the contributors don't know that it's an assistant. It's an AI assistant.
So it has passed the Turing test multiple times, I would say. And the way we do that is basically super simple. With a couple of clicks in our UI, or with code in our SDK, you can set up an assistant and attach as many files or data sources as you want to that assistant. That assistant will learn all of that data, and you can instruct it to do specific tasks. So in this case, I've taught her everything there is to know about our code base. And so when a contribution comes in, she knows exactly how our code base is done. She knows how we like to write code. She knows all of our rules, all of our internal setup. And she can give feedback to contributors on what they could do better or what they should change in their pull request. And then the developers do that, and then I can go in and merge that in. That's just one simple example of what you can build. If you would generalize that, that's like building a team member. So to answer your question, one of the most exciting use cases is to augment your team and increase the productivity of your team. That's one of the main, I think, unique selling points that AI has, which I believe will be a huge thing in the future, even more prominent than it might be today. So that's one.

DAN_SHAPPIR: Yeah, so before we move to the other one, a few questions about that. First of all, what model do you use for Shuriken?

ISMAIL_PELASEYED: So in the Shuriken case, we use an OpenAI model, the GPT-3.5 model, that is fine-tuned on our code base, basically. So that's a proprietary model that we use, an OpenAI proprietary model, for that specific instance.
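For listeners who want the shape of the workflow Ismail describes (create an assistant, attach data sources, give it instructions), here is a rough sketch. This is a hypothetical stand-in, not the real Superagent SDK; the class, method names, and repo reference are invented for illustration:

```javascript
// Hypothetical stand-in for the assistant-plus-data-sources workflow.
// NOT the actual Superagent SDK API, just an illustration of the shape of it.
class Assistant {
  constructor(name, instructions) {
    this.name = name;
    this.instructions = instructions;
    this.dataSources = [];
  }
  attach(source) {
    // In a real framework this would kick off ingestion/indexing in the background.
    this.dataSources.push(source);
    return this; // allow chaining
  }
}

const shuriken = new Assistant(
  "Shuriken",
  "Review incoming pull requests against our conventions; give feedback, never block."
);
shuriken
  .attach({ type: "github", repo: "example-org/example-repo" }) // placeholder repo
  .attach({ type: "file", path: "CONTRIBUTING.md" });

// shuriken now has 2 data sources attached
```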
DAN_SHAPPIR: So the starting point was OpenAI 3.5, and then you further trained it on your own code base.
ISMAIL_PELASEYED: That's correct. Yes. Fine-tuned it on our own code base.

DAN_SHAPPIR: And, because this is definitely not my area of expertise, I would imagine there's some sort of a lower limit on how small your code base could be before you would actually get some value out of training on that specific code base, no?
ISMAIL_PELASEYED: So, yes and no. Generally, I can say that having more data doesn't necessarily amount to a better-performing model. It's the quality of the data that's important. You can actually have quite small training data sets, but if the quality is high (and the quality of course depends on what task the assistant or language model is supposed to solve), then depending on the task and the data quality, you can actually get away with just a couple of thousand rows in a spreadsheet to train your model to be very effective on a specific task. So I would say quality is a better thing to focus on than quantity when it comes to data sets. And that has also been proven by open-source model developers that have, you know, generated small synthetic data sets of really high quality and have been able to train models that are equally as good as GPT-3.5 with less data, less compute, less money. So quality is super important when you do that. And there is a bunch of stuff that you need to learn before you can train your model. What should the data look like? What type of data for what type of task? Et cetera, et cetera. Those are all questions that regular developers ask themselves every day, but they don't have any prior experience working in this field. So that's also one of the reasons why we decided to build a framework for the mainstream regular developer and abstract away all of this stuff that we are discussing now. Just abstract that away and make it as easy as uploading a file. Okay, you want your assistant to be trained on 10 PDFs? Cool, upload them and we'll do the training in the background. You won't notice. We'll send you an email when it's done, and then you can use your assistant. That's it.
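The "small but high-quality" point maps to a concrete artifact: OpenAI's chat fine-tuning data is JSONL, one example per line, each with a `messages` array. The sketch below builds and validates a tiny dataset of that shape; the diff and review text are invented placeholders:

```javascript
// Build a tiny fine-tuning dataset in OpenAI's chat JSONL format:
// one JSON object per line, each with a `messages` array. A small,
// clean set like this can beat a large noisy one for a narrow task
// such as code review.
const examples = [
  {
    messages: [
      { role: "system", content: "You are a polite code reviewer for this repo." },
      { role: "user", content: "diff: function login(u){ return db.q('SELECT * WHERE u=' + u); }" },
      { role: "assistant", content: "Please use a parameterized query; string concatenation in SQL is unsafe." },
    ],
  },
];

function toJsonl(rows) {
  // Validate each example before serializing.
  for (const row of rows) {
    if (!Array.isArray(row.messages) || row.messages.length === 0) {
      throw new Error("each example needs a non-empty messages array");
    }
    for (const m of row.messages) {
      if (!m.role || typeof m.content !== "string") {
        throw new Error("each message needs a role and string content");
      }
    }
  }
  return rows.map((r) => JSON.stringify(r)).join("\n");
}

const jsonl = toJsonl(examples);
```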
DAN_SHAPPIR: So in your case you kind of pointed it at the GitHub repo and that's how you trained it?
ISMAIL_PELASEYED: Exactly, to multiple repositories. We have multiple repositories, and I pointed it at those repositories. And then we have a bunch of technology that, you know, extracts the necessary data, chunks it, splits it, and fine-tunes a model for that specific use case.
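The "chunks it, splits it" step can be sketched as a simple overlapping splitter. Real pipelines split on semantic boundaries such as functions or headings, but the core idea, fixed-size windows with a little overlap so nothing loses its context, looks like this:

```javascript
// Break a document into overlapping fixed-size chunks. The overlap means
// a sentence cut at a chunk boundary still appears whole in the next chunk.
function chunkText(text, chunkSize = 200, overlap = 20) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
    start += chunkSize - overlap; // step forward, minus the overlap
  }
  return chunks;
}

const doc = "x".repeat(500);
const chunks = chunkText(doc, 200, 20);
// windows: [0,200), [180,380), [360,500) -> 3 chunks
```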
CHARLES MAX_WOOD: So one thing that I'm curious about with this is, um, what you're talking about, at least to me, sounds a lot like continuous integration, or, you know, some of the steps that people put into their GitHub Actions, where it basically says, you know, all the tests pass, you pass through the linter, and hey, you know, your style matches our style, or hey, will you accept, you know, the automatic cleanups that come out of the linter, and stuff like that. So I'm a little curious as to what Shuriken gives you that you don't get from just kind of a standard CI setup.
ISMAIL_PELASEYED: Yeah, so CI usually looks at syntax and code conventions, like linting. It's basically a code convention. You might write sloppy code, but it will pass the linter. So that's the thing with Shuriken. A linter will catch bugs, it will catch, you know, type errors or something like that, but it won't catch sloppy code if it works. Even if you build it, it won't catch that, because it won't throw any errors. So what Shuriken does is that she actually looks at the code itself and, you know, gives feedback on the actual code. Like, you should not write this function like this, write it like this, or use this package instead of that package. You know, that kind of feedback, the ocular feedback that a code reviewer usually gives his or her employees, that's what Shuriken can do in our repository.
DAN_SHAPPIR: So I'll actually push on that. Can you give a really super concrete example of the type of feedback that you might get?
ISMAIL_PELASEYED: Like, I can, I can show you if you want to, if that's possible.
DAN_SHAPPIR: You can, and that's nice, but we are primarily audio. Most of our listeners, most of our audience, you know, is just listening. So it would be better if you just describe it as best you can.
ISMAIL_PELASEYED: Yeah, sure. So let's say that you are writing a piece of code that should, you know, do a specific thing. And the code works, the code that you wrote works, but it, and it follows the code conventions and it's passing all of the tests and all of that stuff that we, you know, run on the code base, but it might not be optimized. It might be really slow. It might be poorly written, so to speak. And that happens all of the time. So what Shuriken does is that she makes sure that the code is not only, you know, follows our code conventions and the linters and all of that stuff, but she actually makes sure that the code is as effective and, you know, fast and proficient as it could be by giving you small tips, hints, pointers on what you could do better when writing that specific piece of code. Let's say it's a function for logging in. You can write login functions, you know, in a hundred different ways, but usually there's only one way that's efficient, effective, and proven to be like the way to do it. She will, if you haven't written it in that way, she can, you know, give you feedback that that's the way you should write it, please make this necessary change and then, you know, the developer goes in and does that. And then I can feel confident that, okay. Someone has looked at this and I can merge it into the main branch basically. So very specific feedback on the actual code, which is something a human being does today, is done by an AI assistant that I built in like three minutes in our repository. And for me, you know, when I look at it, I'm amazed that this actually works. I'm amazed by it. I can sometimes, you know, I get emails when Shuriken mentions me in a comment on GitHub and I get blown away. Like this is not actually a human being. This is just AI that's trained to do a specific thing. But in that context, in that small, you know, narrow vertical context, she does an amazing job as well as any human might be able to do it.
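For the curious, a bot like Shuriken could deliver this kind of non-blocking feedback through GitHub's pull request review endpoint, where the `COMMENT` event leaves feedback without approving or requesting changes. A minimal sketch; the repo name, PR number, and feedback text are placeholders, and the request builder is kept pure so the actual network call stays optional:

```javascript
// Build (but don't send) a GitHub "create review" request that leaves
// feedback on a pull request without blocking it.
function buildReviewRequest(owner, repo, pullNumber, feedback) {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/pulls/${pullNumber}/reviews`,
    method: "POST",
    body: {
      body: feedback,
      event: "COMMENT", // COMMENT gives feedback; it neither approves nor blocks
    },
  };
}

const req = buildReviewRequest(
  "example-org",        // placeholder owner
  "example-repo",       // placeholder repo
  42,                   // placeholder PR number
  "Good job! One small suggestion: consider a parameterized query here."
);

// To actually send it (requires a token with repo scope):
// await fetch(req.url, {
//   method: req.method,
//   headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" },
//   body: JSON.stringify(req.body),
// });
```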
CHARLES MAX_WOOD: Is there a GitHub repo for Shuriken that we can look at, or-
ISMAIL_PELASEYED: Yeah. So the Superagent repository that you posted here in the chat. If you go to the issues tab, or sorry, the pull requests tab, which is where people, you know, contribute code and open up a pull request, you will see that there is a contributor named Shuriken, and she basically gives different types of comments depending on what type of code you have written. If she doesn't find anything weird, she'll just say, good job, you know, good job, Charles. Thank you for the contribution. Thumbs up, love. And that's that.
CHARLES MAX_WOOD: Gotcha. But what I'm wondering about is, um, you know, is there like a walkthrough to set up something like this or?
ISMAIL_PELASEYED: Yeah. So I have a YouTube channel, actually, that has a bunch of videos and stuff where I set up a bunch of different types of assistants, and she's one of them. And actually the code is open source, so anyone can grab Shuriken and make her their own code reviewer if they want to.
DAN_SHAPPIR: Does she also, like, have you seen her occasionally block pull requests, like not approve pull requests?
ISMAIL_PELASEYED: No, I trained her specifically to only give feedback and not block anything, because, you know, I wanted to try it out. I didn't want it to be so definitive. So she only gives feedback. You know, it's up to the contributor to make the change, and up to me to make sure that the code still works when I merge it in. But she tries to be good vibes, I can say as much as that. So no blocking, no, you know, "this is bad." She's trained to be humble, nice, and give feedback.
DAN_SHAPPIR: So it would be amusing to train her on Linus Torvalds's responses.
ISMAIL_PELASEYED: Yeah. And that's actually a good use case. If you want the language model to act as a specific individual, and you have data on that specific individual, you know, you could train the model to try to resemble the way Linus would answer a code review. I'm guessing it wouldn't have as much of the good vibes that mine has.
DAN_SHAPPIR: So basically what you're saying is if you use the entire, uh, Linux kernel code base as the training data set, you will get very snippy comments whenever anybody tries to do a pull request.
ISMAIL_PELASEYED: You could get that or yeah, you could absolutely do that. And I know that there are companies out there doing exactly this training models to resemble, you know, Arnold Schwarzenegger or whatever it might be. Right.
CHARLES MAX_WOOD: So, right. You go pull Elon Musk's tweets and say, this is how you're supposed to respond. Right.
AJ_O’NEAL: That would just be so awesome.
CHARLES MAX_WOOD: Like, it would be interesting to see stylistically, like, okay, go pull this person's tweets, right? As long as it's not like a corporately handled account, right? They don't have PR people running their account, right? It's them straight into the phone.
DAN_SHAPPIR: Yeah, I recall like a few years ago, even before the whole machine learning craze. I think Microsoft or somebody released some sort of a chatbot into Twitter. And she became a pornographic neo-Nazi within like two days or something like that.
ISMAIL_PELASEYED: Yeah. And they had to pull her. They had to pull her off. They got so much heat that it basically killed the whole AI thing due to the heat they got for, for that. So that's the downside of, of it could be, could be the downside, but I'm a positive guy. I don't like, you know, yeah. I don't like to be like a doomer on this kind of stuff. I believe that it's good for humanity. I believe that, you know, it's a must. It's a must.
Are you under increasing pressure to ship code faster than ever before? Then it's time to work smarter with Raygun's performance and error monitoring tools. Raygun gives you instant visibility into the health of your software. What makes it so unique is that it not only tells you when something's gone wrong, but it shows you exactly where it's gone wrong and how to fix it, right down to the line of code. Made by developers for developers, Raygun's suite of monitoring tools is used and loved by thousands of software teams every day. Monitor your entire tech stack with widespread language support and native integrations with Microsoft Teams, GitHub, Jira, Slack, Bitbucket, Octopus Deploy, and more for even greater visibility. Visit raygun.com to resolve issues faster and to deliver flawless digital experiences for your users. That's raygun.com to get started with your 14-day free trial.
CHARLES MAX_WOOD: Well, and I think you can intelligently. Because I don't know what parameters they put onto that Twitter bot from Microsoft. They kind of said, well, we just turned it loose on all of Twitter, right?
ISMAIL_PELASEYED: Yeah.
CHARLES MAX_WOOD: And yeah, I think it probably fed off of a bunch of other bots, right, that in some corner of Twitter that I just never see. But I don't know. But this is interesting because you can specifically pick the data set that you want to emulate and then do it that way.
DAN_SHAPPIR: So you said that you have two examples. You gave us one.
ISMAIL_PELASEYED: Yeah, right.
DAN_SHAPPIR: What?
ISMAIL_PELASEYED: So the second one is more of an enterprise use case. If you think of a regular company, let's say a producing company or a company that is not a tech company, just a regular company. They have a bunch of data. They have a bunch of data in their management systems. They have a bunch of files, a bunch of PDFs, Excel sheets. They have a sales or CRM system, they have a help desk system, they have a bunch of data about their users, and a bunch of other types of data. And there are a lot of people working on extracting knowledge from this data. So if you take a marketing manager as an example, one of the key tasks of that marketing manager is to look at different data and try to figure out what works and what doesn't work. That's one of the jobs that they have to do in order to be successful. So the second use case is that, like in my case I took our GitHub repositories, in an enterprise use case you could take all of your enterprise data and do the exact same thing. You can feed it to the model, you can train the model on that data, and you can start asking questions about your data. You can ask questions like, what are my top five best customers? And it will give you an answer within seconds. You can ask it to plot charts on sales and it will plot a chart of your sales for whatever time period you might have in mind. Usually how this works is that a person at a company wants some data, so they go to another person at the company who is tasked with extracting that data, a so-called analyst. What you can do instead is increase productivity for all your employees by just training a model on your data and making it available to whoever should have access to that data. So they can ask questions without having to go to an analyst, and without having to wait one week to get an answer. You can just ask, and the model will answer that question.
And if you think about how much data a company of a thousand employees has (which is not a big company, more like a medium-sized company), think about how much knowledge they can extract using this technology, by just training a model on it. So that's the second use case that we see a lot of. Basically, that's the use case that people use Superagent for.
DAN_SHAPPIR: So two questions about that, really. Question number one. As we've seen with general machine learning models, ChatGPT and whatnot, occasionally they quote-unquote lie. They provide misinformation, and they do it in a way that seems really reasonable and self-assured, which can really lead people down the wrong path. So my first question in that context is, when you, let's say, train the agent on your internal corporate data, how likely is it that when you ask a question, you'll potentially get misinformation?
ISMAIL_PELASEYED: Yeah. So it is likely, and we call that hallucination. The model can hallucinate, you know, and give a false answer to any query or question. Now the question is, how do you solve that problem? Because if you think about it, if we were working at the same company, Dan, and I asked you a question, how do I know that you are correct? The only way to know that you are correct is to see the sources from which you derive the answer to whatever question I ask you. If I ask how many customers we have today and you just come up with a number, you know, that could be wrong. So usually what you do is make a PowerPoint presentation and show the actual data, and then you answer that question by showing the underlying data. And what Superagent does is the exact same thing. Instead of just blindly accepting a text that is generated by a model, we also make the underlying data transparent. So if you ask a question about a contract, not only do we answer that question, we also show you that specific contract, and not only the specific contract, but also the section that was used to answer the question. So you can ocularly look at the source, look at the answer, and, you know, make up your mind whether this makes sense or not. If it does make sense, you have the possibility to rate the response. If it's a good response, you give it a thumbs up. If it's a bad response, you give it a thumbs down. Every time you give it a signal, our engine fine-tunes the model on those signals so that the responses get better and better over time. But I think the main thing is, we are all used to using, well, I at least am using, ChatGPT. One of the issues with ChatGPT is that it doesn't actually show you the underlying source.
So if you can visualize that, you know, with charts, with data, with tables, with deep links to documents, then that's a completely different game. And that's the way humans are used to communicating data to each other. Like, if you go on Twitter right now and somebody writes something, how do you know it's true? You don't, until you actually check out the data or the source. And that's what we are trying to visualize. Not only the answer from the model, but also the underlying data that the model used to answer that question.
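The answer-plus-source idea can be sketched with a deliberately naive retriever that scores chunks by word overlap with the question and returns the best chunk alongside its document and section metadata. Superagent would presumably use embeddings; plain overlap keeps the sketch self-contained, and the documents below are invented:

```javascript
// Score each chunk by how many of the question's words it contains,
// and return the best chunk together with its source metadata, so the
// answer can cite the exact document and section it came from.
function retrieveWithSource(question, chunks) {
  const qWords = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  let best = null;
  let bestScore = -1;
  for (const chunk of chunks) {
    const words = chunk.text.toLowerCase().split(/\W+/);
    const score = words.filter((w) => qWords.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = chunk;
    }
  }
  return best && { text: best.text, source: best.doc, section: best.section };
}

const corpus = [
  { doc: "contract-2023.pdf", section: "3.1", text: "The termination notice period is 90 days." },
  { doc: "handbook.pdf", section: "1.2", text: "Employees accrue vacation monthly." },
];
const hit = retrieveWithSource("What is the termination notice period?", corpus);
// hit cites contract-2023.pdf, section 3.1, alongside the answering text
```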
DAN_SHAPPIR: Cool, that's really insightful. And my second question in this context is, what safeguards do you provide against inadvertently leaking private or secure data? I mean, if I train a model on something, it's quite possible that somebody asks a question that will reveal data that I don't want to reveal.
ISMAIL_PELASEYED: Right, and this is a big question. And this is the primary reason why Superagent is open source. Being open source allows you to do two things. The first thing is that you can deploy Superagent to your own infrastructure, completely isolated in your own environment. Nobody else has access to your data or anything like that. The second thing is that we allow you to run Superagent with your own language models. Usually these are open-source models, like Llama 2, or Mistral, which is a new model that is very effective, that you've deployed to your own environment. I myself am not a believer in proprietary models at all. If the end game is to get enterprises to adopt this technology, I don't believe that they can adopt it if it's a black box. I don't believe that can happen. And I don't see any other examples, actually, where enterprises give away their data to a black box and don't have any control over that data. That very rarely happens. And me as a developer, if I'm developing some kind of app, it's very rare that the underlying core technology is outsourced to some third-party black-box company. It almost never happens. One example is Google Maps. I don't know if you guys remember, but initially, when the iPhone was launched, we only had Google Maps. There were no Apple Maps, there was no app. What happened? Well, Apple figured out, like any sane company would, that they couldn't give away all of the, you know, traffic data to Google on their own platform. So what are you supposed to do? The only way to solve that is to build your own maps app and deploy that as an alternative. So Superagent allows anyone to take the platform (it's open source, it's free) and deploy it to your own environment.
Use your own model to run it, without having to leak any data to any external party, including us. That's the way to mitigate that issue, which is a big issue. And I believe you can build small hobby projects on top of OpenAI and GPTs, which they have just released. But I don't think an enterprise-grade healthcare company could leverage that. I don't think a law firm could leverage that, due to the nature of it being black-boxed and proprietary. So that's why I believe in open source.
DAN_SHAPPIR: So, two clarification or follow-up questions on that. First of all, when you're saying you use your own data or your own model, what you're actually saying, from my understanding, is that you start with a general model, but then you refine it with your own data, and that refined model stays within your organization and never leaves. Is my understanding correct?
ISMAIL_PELASEYED: Yes. That is correct.
DAN_SHAPPIR: And you also probably make a distinction between which data you use for internal versus external services. For example, in our case at Next Insurance, where I work, we do a lot of stuff online and use a lot of machine learning. You might use internal data about the policies that people have for the internal operation of the company, but you won't expose that to external users. Whereas you might train a model for external use based on the questions people might ask and the general terms that are in the public domain about types of insurance, and that would be externally available.
ISMAIL_PELASEYED: Right, and the way we do that is that every agent or assistant that you create with Superagent gets its own little brain, its own little memory, its own data. It's completely decoupled from other assistants. So you can actually deploy hundreds of these assistants, trained on different data sets, where some of them might be for internal use and some for external use. A good example: we are working with an ISP in the UK, and they have one assistant that's trained on internal data, which they use to educate employees on different things inside the company. And then they have a customer support assistant, which is only meant to help their users or customers with general queries like "How do I reset my router?" That one is only trained on public information that they have on their website. So the answer is that it's very easy to deploy a lot of these assistants. I would say that on average, each user runs around five different assistants on our platform.
DAN_SHAPPIR: So if my understanding is correct, you basically deliver three main things. One is the actual agent itself: clone our repo, run our agent wherever you want, run as many instances of it as you like. The second thing you provide is the ability to take an existing, quote-unquote standard, open-source language model and then refine it using your own data. And the final thing is the way to attach those agents to various APIs or input/output sources. Is my understanding correct? Is that the stuff that you provide?
ISMAIL_PELASEYED: That is exactly it, three things. We have a different way of explaining them. The first thing is the brain, which is the model itself. The second is the perception, which is the data that you feed to it. And the third is what we call tools. A tool could be an existing API, it could be a third-party service such as Salesforce, but it could also be code that you want the assistant to run. So you could build automation workflows with assistants that run code, predefined functions, whatever it might be. We call them tools. So it's the brain, the perception, and then the tools. These are the three main pillars that Superagent is built on. And we make it easy for any developer to orchestrate them in order to create an assistant that can do basically anything you would want it to do.
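To make the three pillars concrete, here is a rough sketch in TypeScript. This is not the actual Superagent SDK; the types, names, and dispatch logic are invented for illustration, modeling the brain (the model), the perception (data sources), and tools (callable functions) that Ismail describes.

```typescript
// Illustrative sketch only -- not the real Superagent API.
// An assistant is configured from three pillars:
type Tool = {
  name: string;
  description: string;
  run: (input: string) => string;
};

type AssistantConfig = {
  brain: { model: string };              // e.g. a Llama 2 or Mistral deployment
  perception: { dataSources: string[] }; // documents the assistant is grounded on
  tools: Tool[];                         // predefined functions it may invoke
};

// A toy dispatcher: if the request names a tool, run it; otherwise
// "answer" from the model plus data (stubbed out here).
function handleRequest(config: AssistantConfig, input: string): string {
  const tool = config.tools.find((t) => input.startsWith(t.name));
  if (tool) return tool.run(input.slice(tool.name.length).trim());
  return `[${config.brain.model}] answered using ${config.perception.dataSources.length} data source(s)`;
}

const assistant: AssistantConfig = {
  brain: { model: "llama-2-70b" },
  perception: { dataSources: ["faq.md", "handbook.pdf"] },
  tools: [
    { name: "resetRouter", description: "Resets a router", run: () => "Router reset queued" },
  ],
};

console.log(handleRequest(assistant, "resetRouter #1234"));      // tool path
console.log(handleRequest(assistant, "How do I change my plan?")); // model path
```

In a real orchestration layer the tool call and the model call would both go through the LLM, but the three-pillar configuration shape is the point of the sketch.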
CHARLES MAX_WOOD: So if I wanted to build an assistant, the things that I'm thinking about here are: one, it sounds like I can hook it up to OpenAI, or I can pull in my own model. Beyond having it set up and being able to access it through standard methodologies, you handle all of the stuff as far as providing it more data, or providing specific data, and then extracting the responses. I guess the other part of a chatbot, or an assistant, is the delivery, right? Whether it's in some kind of user interface. I'm thinking of code assistants, where you have them plugged into Visual Studio Code and they'll highlight code and give you feedback on the code and things like that. But I'm also thinking of chatbots, right? So maybe a Discord bot, or an embedded bot on your website that people can ask questions of. So how do you interface with the delivery of these systems?
ISMAIL_PELASEYED: We do that in two ways. The first is a so-called no-code way, which is basically: you create your assistant, and we give you an embeddable chat, an embeddable user interface, which allows your users to interact with this assistant. Very similar to how ChatGPT is formed, basically. Super simple, but still powerful. That's for developers that are just starting out, trying to prototype something, trying to get a feel for how their assistant actually works and how accurate it is. The second way, which is the most used way, is that we give you a set of three SDKs and a REST API. You can use either the SDKs or the REST API to orchestrate your own assistant and then build whatever type of UI you want for the delivery part. And I think that's something people are missing when you look at ChatGPT. I come from a design background; I've designed and built front-end applications all my life, basically. And the one strange thing is that when ChatGPT launched, it was like all of the UI components that have been refined for a century just got thrown away and replaced by a chat markdown box. In my opinion, that's really not the way the internet usually works. When you consume software, it has a bunch of different components, which makes it feel valuable, where you can extract more value than just text. One good example of this is a company called Perplexity AI. I don't know if you guys know about them, but they have basically built the new Google search. And it's not just a chat. It has a bunch of other UI components that make that user experience ten times better than what Google search is today. 
So I believe that if you are going to deploy an AI assistant chatbot, you need to use existing UI components that people are used to, not only chat. Chat alone is very limiting in what you can accomplish, and it doesn't work well for all use cases. It might work well if you're trying to chat with an assistant. But if you're trying to extract knowledge, then chat might not be the best way to do that on its own. There might be other components that you would want to visualize for the user so they can quickly extract information. A simple example would be a chart that a user can interact with, which is very common in any other software, right? All dashboards have charts, but ChatGPT doesn't. So I believe that in the future, if you think about all of the UI component libraries on npm, the registry for Node.js, there are a bunch of awesome UI components out there, and I believe that AI and these UI components will merge eventually, so that you get the power of AI in those user-facing UI components. Having an AI dynamically generate the user interface needed for the user to absorb or extract the information they are looking for from the assistant: I believe that's the future. That's something we are actively working on as well, making the user experience more dynamic and more rich, not just chat. I believe that's a big thing for adoption, especially if you're looking at enterprises trying to adopt this technology.
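The "AI plus UI components" idea Ismail describes can be sketched as an assistant that returns a typed payload instead of raw markdown, so the front end renders a chart or table component rather than a wall of chat text. This is an illustrative sketch; the response shape and component names are invented, not a real Superagent or Perplexity API.

```typescript
// A discriminated union lets the model choose the component kind,
// and the front end renders accordingly. Names are hypothetical.
type AssistantResponse =
  | { kind: "text"; body: string }
  | { kind: "chart"; series: number[]; label: string }
  | { kind: "table"; rows: string[][] };

// Stand-in for a component renderer: returns the markup it would emit.
function render(res: AssistantResponse): string {
  switch (res.kind) {
    case "text":  return res.body;
    case "chart": return `<Chart label="${res.label}" points=${res.series.length} />`;
    case "table": return `<Table rows=${res.rows.length} />`;
  }
}

console.log(render({ kind: "chart", series: [3, 1, 4, 1, 5], label: "Monthly usage" }));
```

The design point is that the contract between model and UI becomes structured data, so the same assistant can drive a chart in a dashboard and plain text in a chat window.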
DAN_SHAPPIR: By the way, what is SuperAgent implemented in?
ISMAIL_PELASEYED: The backend is Python, concurrent Python, so async Python, which was quite a mess but eventually worked out. It runs on FastAPI, which is an open-source framework for running concurrent Python, basically. Then we have some services that are built in Rust, specifically for memory. This is interesting: if you just take a language model off the shelf and try to chat with it, it won't actually remember your previous conversation. It's like talking to someone that doesn't remember stuff, and that's just not feasible. So you have to build memory. That memory is usually some kind of key-value store, some kind of Redis database, and then you need some way of integrating your model with that key-value store. That part of our service is built in Rust. The memory covers short-term memory, but also longer-term memory, so you can ask questions about stuff you chatted about a month ago. That's built in Rust. And then the UI is built on Next.js, an open-source, React-based framework, in TypeScript. So that's the stack, basically. Infrastructure-wise, we rely heavily on GCP and AWS. That's where the infrastructure is at this point.
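A minimal sketch of the conversation-memory idea Ismail describes: Superagent's real implementation is a Rust service backed by a key-value store like Redis, but the same shape can be shown with an in-memory Map. The class and method names here are invented for illustration.

```typescript
// Conversation memory keyed by session, with a short-term window
// that goes back into the prompt, plus a long-term search over the
// full history (real systems would use embeddings, not keywords).
type Turn = { role: "user" | "assistant"; content: string };

class ChatMemory {
  private store = new Map<string, Turn[]>(); // stands in for Redis

  append(sessionId: string, turn: Turn): void {
    const history = this.store.get(sessionId) ?? [];
    history.push(turn);
    this.store.set(sessionId, history);
  }

  // Short-term memory: only the last `window` turns go into the prompt.
  recent(sessionId: string, window = 10): Turn[] {
    return (this.store.get(sessionId) ?? []).slice(-window);
  }

  // Long-term memory: search the whole history for an old detail.
  search(sessionId: string, keyword: string): Turn[] {
    return (this.store.get(sessionId) ?? []).filter((t) =>
      t.content.toLowerCase().includes(keyword.toLowerCase())
    );
  }
}

const memory = new ChatMemory();
memory.append("s1", { role: "user", content: "My router model is AC1200" });
memory.append("s1", { role: "assistant", content: "Noted: AC1200." });
console.log(memory.search("s1", "ac1200").length); // → 2
```

Swapping the Map for a Redis client gives the same interface with persistence, which is why a key-value store is the natural backing for chat memory.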
AJ_O’NEAL: Okay. So I may have missed this because I came in late, but how is this different from Ollama? How is it different philosophically, and how is it different technically?
ISMAIL_PELASEYED: So technically, Ollama runs locally on your machine. Superagent...
AJ_O’NEAL: It runs anywhere you put it.
ISMAIL_PELASEYED: Uh, no, it doesn't, actually, because you need to put the model somewhere, right? So it runs the model locally.
AJ_O’NEAL: Right, right. What I mean is wherever you put it, right? Like I can put it on DigitalOcean. If I want to pay a billion dollars a month, I can put it on AWS. But yes, yes.
ISMAIL_PELASEYED: Yeah. So it's similar to Ollama in the sense that you can run different types of models, it's open source, and that kind of stuff. What differs is that we focus on a specific type of agent: the knowledge assistant, as we call it. So we feed your model and fine-tune it on the data that you want it to have access to. We fine-tune the language model, but something that people miss a lot is that there is actually another type of model involved in fetching data from third-party sources. That model is called the retrieval or encoder model. We also fine-tune that model on your data. So there are actually two models in play, usually, when you use ChatGPT or anything else and you upload a file to it. We fine-tune these models on your specific data, making them super accurate for the specific use case that you have for your assistant. So it's an orchestration layer.
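The two-model retrieval flow Ismail describes can be sketched like this: an encoder model scores passages against the query, and only the top matches are handed to the language model. Real systems use fine-tuned embedding models; the word-overlap scorer below is a toy stand-in so the flow is runnable.

```typescript
// Toy "encoder": counts query words that appear in a passage.
// A real retrieval model would compare embedding vectors instead.
function score(query: string, passage: string): number {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const words = passage.toLowerCase().split(/\W+/).filter(Boolean);
  return words.filter((w) => q.has(w)).length;
}

// Retrieval step: rank passages by score and keep the top k.
function retrieve(query: string, passages: string[], k = 2): string[] {
  return [...passages]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

const docs = [
  "To reset your router, hold the button for ten seconds.",
  "Billing questions are handled on the account page.",
  "Routers ship with a default Wi-Fi password on the label.",
];

const context = retrieve("how do I reset my router", docs, 1);
console.log(context[0]); // the reset instructions win on word overlap
// A real pipeline would now send `context` plus the question to the LLM.
```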
AJ_O’NEAL: Is that what's referred to as LangChain? Because I was looking into this, because I wanted to figure out how to do it with Ollama. And what was coming up was LangChain, and it's not part of Ollama, but there's an API, you feed in more data. Is that right? Is that what that is?
ISMAIL_PELASEYED: Right, yeah. So LangChain is an open-source framework that allows you to build these types of assistants, any type of assistant. I'm actually a contributor there; that's where I started off, contributing to that framework. It has similarities to Superagent, and we actually utilize LangChain in parts of our infrastructure as well. The main difference is that LangChain is built for machine learning engineers, people that know what the heck they're doing, people who know how to accurately fine-tune a model, how to accurately orchestrate the whole assistant. Superagent...
AJ_O’NEAL: The documentation seemed like it was using very, very technical terms that, once I understood them, seemed like things that could have been explained in a sentence.
ISMAIL_PELASEYED: Yeah, so Superagent is like the version of LangChain that's meant for the mainstream developer, people that don't have any skill set, knowledge, or background in these types of technologies. In short, if you want to explain Superagent easily, it's like Stripe for payments, but for building AI assistants instead. Think about how hard it was, prior to Stripe, to set up payments, recurring payments, subscriptions on whatever service you were building. It was a pain in the butt. Most of these open-source frameworks give you the building blocks to build whatever you want, but you need to know what you're doing. In the Superagent case, you don't need to know anything. You just instruct it to do specific things with text, give it a prompt, as we call it, you feed it with data, and we take care of the heavy lifting on our side to make sure it's accurate.
AJ_O’NEAL: Do you use the chatbot to generate the other personas?
ISMAIL_PELASEYED: Yeah, we use language models to generate training data and all of the stuff that needs to go into the model in order for it to be accurate for your specific use case. And you don't have to think about any of that as a developer. That's on our side. That's the value we bring to you as a developer. If you talk to a Superagent user and ask them why they use Superagent, the number one thing they tell us is that Superagent allows them to focus on their product. I don't need to become some other kind of engineer and learn something new. I can focus on the stuff I'm building, like an iPhone app, and I can just integrate all of this wonderful technology with a simple SDK, in like 20 lines of code. So that's the value prop that we bring to developers.
AJ_O’NEAL: Cool.
CHARLES MAX_WOOD: All right, well, I-
AJ_O’NEAL: That's nice.
ISMAIL_PELASEYED: Thank you.
CHARLES MAX_WOOD: I don't wanna shut down the conversation, but I do have another podcast scheduled in like 18 minutes.
DAN_SHAPPIR: Ha ha ha ha ha. And our- You're living a busy life, Chuck.
CHARLES MAX_WOOD: Yeah, well-
DAN_SHAPPIR: So I'll just ask one last really quick question. Do you usually run models locally or in the cloud, in most cases?
ISMAIL_PELASEYED: 100% cloud. 100% cloud.
AJ_O’NEAL: That's so expensive though. How can you?
ISMAIL_PELASEYED: But it isn't. No, it isn't expensive. And it doesn't have to be on our cloud. It could be on your cloud. That's the thing. You can deploy your model to your own cloud, you can deploy your model to these serverless infra providers that are out there, and you can get a model running cheaply. The price of running a model is going to go to zero in a very short time. That's the thing.
AJ_O’NEAL: But with the cloud, so far everything they introduce is more expensive than the previous thing. Prices have not gone down in a decade. They've gone up.
ISMAIL_PELASEYED: Yeah, but it might not be zero now. If you think about the trend, even OpenAI, they slashed their prices with like...
AJ_O’NEAL: Yeah, you're talking about OpenAI. Yes, because they are going to optimize it, and they may eventually be able to get the price down. But they're not having to pay what you would have to pay for Azure, right? They're getting a very different rate.
ISMAIL_PELASEYED: Yeah. And that's the thing: the other providers, Azure, AWS, are right now, as we speak, deploying technology, and this has already happened, where you can basically host your model in a serverless environment and only pay the 0.0025 cents per token that you would pay OpenAI. So this transition is already happening. It's a requirement; otherwise, nobody will be able to run this technology in production. If the price is high, no business can run it. You cannot have a chatbot that costs a thousand or ten thousand dollars a month when three people are using it. So the whole industry is pushing for the prices to go down. And we already see it now; even the price of the hardware, GPUs, is going down.
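The per-token pricing argument is easy to check with back-of-the-envelope math. The rate and usage numbers below are illustrative assumptions, not quotes from any provider.

```typescript
// Estimate a monthly LLM bill from per-token pricing.
function monthlyCost(
  tokensPerRequest: number,
  requestsPerDay: number,
  usdPerThousandTokens: number
): number {
  const tokensPerMonth = tokensPerRequest * requestsPerDay * 30;
  return (tokensPerMonth / 1000) * usdPerThousandTokens;
}

// e.g. 1,500 tokens per exchange, 200 support chats a day,
// at a hypothetical $0.002 per 1K tokens:
console.log(monthlyCost(1500, 200, 0.002).toFixed(2)); // "18.00"
```

At rates in that ballpark, a modest support chatbot costs tens of dollars a month, not thousands, which is the economics Ismail is pointing at.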
AJ_O’NEAL: So if I wanted to run Superagent, you said it's open source. So I could just pay a flat $40 a month, get eight cores, eight vCPUs, and run it on that. And that would be good enough for a few people using it at a time.
ISMAIL_PELASEYED: Yes. Right. And completely free. You don't have to pay us anything since it's open source.
AJ_O’NEAL: No, what I mean is, I'd have to host it somewhere.
ISMAIL_PELASEYED: Yeah. You have to pay the, yeah.
AJ_O’NEAL: It's like 40 bucks a month, and then you don't have to pay per token. Yeah, okay. I would love to learn how to host this, because I've got some people... I was actually going to build something with Ollama for some people. But, um, yeah, let's see.
ISMAIL_PELASEYED: Let's do that. And I think Ollama is great. The reason why I think it's great is because it's completely open source. It allows you as a developer to have complete control over what you're doing, where you're deploying it, what models you're using. It's up to you. It's not up to some company to decide where your data is, how you can extract the data, and all of that stuff that comes with proprietary black-box models. As I said initially, I am strongly opposed to that type of model, and I don't think it will work.
AJ_O’NEAL: How do I pay you for support and to make sure that Superagent is still around? Because the number one reason I pay for things is that I want the project I'm paying for, the product, to be successful, so that when I wake up tomorrow, the webpage is still there, and the download button is still there, and the support email is still there.
DAN_SHAPPIR: Because they don't need your money. They've got Y Combinator money.
ISMAIL_PELASEYED: Yeah, I agree with you, AJ. The bill comes due, and we don't want to live off of VC money. That's not the plan.
AJ_O’NEAL: Good.
ISMAIL_PELASEYED: We want to create... and we talked about this initially. The thing with open source is that it's great for building a community. You can get a lot of users, but it's very hard to create a business out of that. A good example: let's take Django, the Python framework. How many businesses have been built on Django? A lot of them. How many people pay for Django? Zero. Same thing with FastAPI. It's one of the best Python frameworks out there. How many people pay for it? Zero. How many people use it? A lot. So it's really hard to take something open source and make a commercial business out of it. That's what we're working on now, to be able to do that. There are examples of people who have succeeded by offering support plans and stuff like that. There are ways of doing it. But I don't think hosting alone will help, because the whole premise here is that you would want to host it yourself. You don't want to give your data to us, to Ismail. You don't want me to have all your financial data; you want to have all your financial data. So hosting might work for the absolute smallest companies, but if you're working with a healthcare company, they won't host with us at this point. So what we are doing is giving them services such as AJ mentioned: support packages, running instances for them on their cloud. There are a bunch of different business models you can run, but it is hairy, and it isn't as straightforward as just having a proprietary B2B SaaS company, where you charge per month, people trust you, et cetera. As I initially said, it is a hard thing to monetize, and a lot of people have failed at it, even the greatest. So I'm just hoping I...
AJ_O’NEAL: Put a buy button on it, and I'm going to buy this thing in whatever form, because I have a project where I have to build something, and I'm going to be using some mix of either Ollama or OpenAI. There's still a little bit of "what am I going to do" that your solution answers: I can choose to switch between Hugging Face, which is where I get the Ollama models, and OpenAI, which is where our prototype is deployed. So to me, this provides a layer of abstraction. For the things that we want to do for this project, this is the right tool. So you put the buy button there, I'm going to buy it. It doesn't even really matter what you charge; I'm going to pay it, because...
CHARLES MAX_WOOD: I have a time crunch and I've got to keep this moving. But yeah, in a lot of ways I agree with AJ. And this reminds me of something that I'm going to throw out in my picks, but yeah, let's go ahead and do picks. I've got like 10 minutes and then I've got to roll into the other show.
Hey, this is Charles Max Wood. I just wanted to talk really briefly about the Top End Devs membership and let you know what we've got coming up this month. In February, we have a whole bunch of workshops that we're providing to members. You can go sign up at topendevs.com/sign-up. If you do, you're going to get access to our book club. We're reading Docker Deep Dive, and we're going to be going into Docker and how to use it and things like that. We also have workshops on the following topics, and I'm just going to dive in and talk about what they are real quick. First, how to negotiate a raise. I've talked to a lot of people that aren't necessarily keen on leaving their job, but at the same time want to make more money. So we're going to talk about the different ways you can approach talking to your boss or HR or whoever about getting that raise you want and having it support the lifestyle you want. That one's going to be on February 7th. On February 9th, we're going to have a career freedom mastermind. Basically, you show up, you talk about what's holding you back, what you dream about doing in your career, all of that kind of stuff, and then we're going to brainstorm together, you and whoever else is there and I, on how you can get ahead. The next week, on the 14th, we're going to talk about how to grow from junior developer to senior developer: the kinds of things you need to be doing, how to do them, that kind of a thing. On the 16th, we're going to do Visual Studio, or VS Code, tips and tricks. On the 21st, we're going to talk about how to build a software course. And on the 23rd, we're going to talk about how to go freelance. Then finally, on February 28th, we're going to talk about how to set up a YouTube channel. So those are the meetups we're going to have, along with the book club, and I hope to see you there. That's going to be at topendevs.com/sign-up.
CHARLES MAX_WOOD: Dan, what are your picks?
DAN_SHAPPIR: I'll make it short and sweet this time. So my first pick is going to be Prometheus, the monitoring solution. I've been using it a lot. As listeners to this podcast know, I do stuff related to performance, to analyzing how applications execute, and Prometheus has been an amazing tool for this purpose: collecting all sorts of performance data and execution profiles, getting them in there, building dashboards using Grafana, running all sorts of PromQL queries. I'm now actually looking to try to solve a really hairy challenge within our organization. We've got something like 50 microservices, some of them having hundreds of endpoints, and I want to be able to analyze the entire system to catch performance degradations without having to manually configure and specify limits for each and every endpoint and dependency. It'll be really interesting to see if I can get something like that up and running just based on the capabilities that are built into Prometheus. But so far, so good, so we will see. But I have to shout this out as a monitoring tool. And by the way, I also contributed back to the Prometheus Node.js client, which was really cool, and I have some ideas for additional contributions that I want to make to that project. We probably should have the owner of that project on our show sometime. It's a really cool project. So that would be my first pick. My second pick: if you've listened to our past episode, you know that we did it around a bunch of polls that I ran off my X, or Twitter, account. I've recently run another one, which is getting a lot of votes even as we speak, which is basically trying to see which framework is the un-React. Like, if you think about React, what's the opposite of React in front-end frameworks? X only allows like four options max. 
So that's kind of limiting, but the options I gave are HTML, Svelte, Solid slash Qwik, and Other, and I asked the people who answered Other to specify what they meant. And I got some really interesting responses. So I'll probably share the link to that tweet. People like Jack Harrington jumped on, Ryan Carniato, Carson Gross himself, and others. It's becoming a really interesting discussion. I'll just throw it out there: Ryan Carniato's choice for the most un-React framework out there, you know what it was? Can you guess? React. You'd think he'd pick Solid, but no, React is the most un-React one. He basically made a distinction between the current React and where React was 10 years ago, and said that current React is the most un-React compared to the React of 10 years ago, and vice versa. So it was a really interesting choice. Anyway, it's a fun poll. We'll see how it comes out in the end. I won't spill the beans about who's in the lead, although you might guess. And those would be my picks for today.
CHARLES MAX_WOOD: Awesome. AJ, what are your picks?
AJ_O’NEAL: Well, first and foremost, I got a his-and-her bidet. Well, I think all bidets are his-and-her bidets; I think they all have the female button as well. But it was only 30 bucks. I don't know why I didn't get one years ago. I guess I thought it was going to be complicated or something. The only thing complicated about it was that the people who built my house, of course, couldn't spend the extra $2 to put the adjustable pipe on there. So I had to go down the street to Lowe's and grab a $4.69 connector piece to put it on. It came with everything in the box that it should have come with; the problem was that the toilet was installed in a way that was permanent. I had to replace a fixed-length tube that could not be moved in any way, because it was solid rather than flexible. But anyway, now instead of being like an idiot American and spending a hundred billion dollars on toilet paper to do the job that the good Lord figured out how to do for us millions of years ago... yeah, that's the exciting news.
DAN_SHAPPIR: But is it like warm water and everything like that?
AJ_O’NEAL: Oh, it's just cold water. It's not fancy. It's just a cheap $30 thing. It seems to work perfectly. It's not leaking as far as I can tell, at least not yet. I mean, it's day one. It's the reason I was late, because I had an hour and it's a 10-minute project. It's like: undo this one thing, put this thing in place of it, you're done. I mean, it gets a little cramped depending on your bathroom size, trying to reach around. But then, because of the whole thing with the tube, it's like, oh crap, I can't put this back together. I have to make a run to Lowe's. I have to wait.
DAN_SHAPPIR: And oh crap being the operative word here.
AJ_O’NEAL: Yeah, yeah. And then I had a nice crap after it all and was able to confirm that it works. So anyway, don't be a dumb American. Be like the people in the rest of the civilized world and use the same thing that people have been using for thousands of years to wipe your hiney. Get some water in there, goodness knows. There's a couple of other things, but I'll try to keep this one short for today's purposes. Short and clean. And touch on some other stuff later.
CHARLES MAX_WOOD: Yeah, I've got to go in four minutes. So fast.
AJ_O’NEAL: Right. So the other thing I'll pick is Ollama, because I have used Ollama and I've gotten value out of it, and it's super easy to get some of the models. It automatically downloads them from Hugging Face for you; you don't even have to know about Hugging Face. I've used Mistral, and I've used CodeUp, and I've used a couple of other models. And depending on the model you pick, they're better than ChatGPT for a specific purpose, because the benefit of Ollama is that rather than getting a billion-billion-data-point model of everything, you're getting models that are more fine-tuned for specific things. And then there are ways, with LangChain, to load stuff in. It automatically does the history for you between the different requests and whatnot. So I'm just going to put the webinstall.dev/ollama link there, because that's where the installer is that makes it dead simple, so you don't have to think about it. And then, yeah, I will just mention that I started using Home Assistant, and I now have a thermostat that is connected to a Google Calendar-like calendar. I'm going to put this in my wall soon, and I'll pick that next time. So that's all.
CHARLES MAX_WOOD: All right, I'm going to jump in here with a couple of things. Now, I always do a board game or a card game. This one is one that my wife got us for Christmas. Technically, Santa brought it for Christmas, but my eight-year-old's not here to split hairs on that. So, Disney Chronology. It's a really simple game. I think we play it in like 15 or 20 minutes, especially if my daughter goes before my wife, because my daughter can't get them right, so my wife steals them. So essentially you pull the card, and it has a year, and it has a Disney release.
Steamboat Willie to blah, blah, blah, blah, right? And then the answer is like 1928 or whatever, right? And it has the month and the year. And so you have three of those cards in front of you, and you place each new card either before all your cards, in between any two of the cards that are next to each other, or after your last card. Right. And so, yeah, if you don't get it right, then the person next around the circle gets a chance to put it into their chronology and steal it. That's the whole game. But if you're kind of a Disney nut like my wife is, then it's kind of fun to see if you can guess them. And if you've got the 14-year-old going before you that can't get them right, then she wins because she steals them all. Anyway, it was a lot of fun. So I'm gonna pick that. It's a super simple game. And yeah, it's fun because of Disney, not because of the game, because the game's idiotically simple. Another pick that I'm going to throw out is, so a lot of you know that I spend a lot of my time writing Ruby on Rails. And so I follow David Heinemeier Hansson, or DHH, and he and Jason Fried are doing something a little bit different going forward in kind of the SaaS space. I'm just going to put the link in for once.com. The idea is, way back in the day, you used to buy software, and then you could install the software wherever you wanted and use it however you wanted. And the way you guys were talking about Superagent was kind of the same idea in some ways, where it's, hey, you can take this code and you can run it wherever you want and just own it and love it and whatever. And so they're pushing forward this idea out there on the internet where they deliver effectively what would be SaaS apps, right? So you can install as many instances of Basecamp, I don't know if they're doing it with Basecamp, but as many instances of whatever as you want, because you now own the code. You own the version of the code you bought.
So I think that's cool, and I think it'll be interesting to see how much of a difference it makes out there in the market. I think it could be disruptive. In other ways, it may take a while for people to catch on and go, oh, this is a one-time payment and it's a good deal. But yeah, I really love the idea. So I'm going to pick that.
DAN_SHAPPIR: Next thing you know, people will want to purchase and own music and movies.
AJ_O’NEAL: No, some of us still do. Yeah. Anyway, with that one, people just have to unlearn what they've learned, because the whole thing is people think that running a command to start a service on Linux is like something that requires a four-year degree. It's like, it's a five-minute thing. If that.
CHARLES MAX_WOOD: Yeah, I think the other thing that people are going to run into is they will have purchased movies and had them on some streaming service that they got access to them on, right? So you enter the code or whatever, and then it's not going to be licensed to whatever service that is, and they'll lose access to it. Then they're going to go, what the heck? And then they'll start looking at, okay, how do I own this again? I think Plex servers might take off for some of that. I've also seen another, and I'll see if I can find it because I've been wanting to play with it, but I've seen a service out there that will attach to your Audible account and download all of your audiobooks.
AJ_O’NEAL: Open Audible. Open Audible, it's like 20 bucks, it's the best thing ever, I love it.
DAN_SHAPPIR: I have to cut in, I actually have to drop off, guys. Ismail, it was great speaking with you. I learned a whole bunch. Best of luck. Thank you. And with you too. It's an awesome thing that you're doing, and I'm really loving the approach. So again, it was great having you on, and bye.
ISMAIL_PELASEYED: Appreciate it. Bye.
CHARLES MAX_WOOD: I'm going to send an email to Ruby Rogues real quick and just let them know it'll be a few minutes late, but Ismail, what are your picks?
ISMAIL_PELASEYED: So first off, a TV show. I started rewatching Fargo. I don't know if you guys have watched it, but it's an amazing show. It's like five seasons, and they are not interconnected in any way. You can jump into any season, pick the season you want and the actors you like the most. And it's just amazing. So that's on the show side, the fun side. On the dev side, I have this open source project that I would like to highlight, and you should bring that guy on. His project is called outlines.dev. He's a French guy called Rémi. And what he does is that he allows you to make models answer in other formats than just text, without having to write a bunch of prompts and stuff. So it's basically an extension you plug into your language model which allows you to control its output, which is interesting, because it gives you ideas of what other types of extensions you might want to plug in there that would do other things. So that's really interesting. It's state-of-the-art stuff, outlines.dev. I would pick that one.
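As a rough illustration of what "controlling the output" means with Outlines: you constrain generation so the model can only produce values in a format you specify, rather than prompting it and hoping. This sketch follows the Outlines 0.x documentation; exact API names can vary between versions, the model name is just an example, and running it requires downloading model weights.

```python
# Sketch of constrained generation with the `outlines` library (0.x-style API).
import outlines

# Load any Hugging Face model; this name is only an example.
model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

# Constrain the model so it can only answer with one of these labels --
# no prompt engineering needed to force the format.
generator = outlines.generate.choice(model, ["Positive", "Negative"])

sentiment = generator("Review: The movie was great! Sentiment:")
# `sentiment` is guaranteed to be exactly "Positive" or "Negative".
```

The same library also supports constraining output to regular expressions and JSON schemas, which is what makes it useful as the kind of pluggable output-control extension Ismail describes.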
CHARLES MAX_WOOD: All right, cool. All right, well, let's go ahead and wrap it up. Thanks for coming, Ismail.
ISMAIL_PELASEYED: Thank you for having me, guys. It was a pleasure. My Twitter is, my Twitter is homanp. So H-O-M-A-N-P. Thanks. All right, have a nice day, guys. Next up. Bye.