Making AI Accessible for Developers - JSJ 641

In this captivating episode, they dive deep into the world of AI, hands-on learning, and the evolving landscape of development with Steve Sewell from Builder.io. They explore the misconceptions about needing deep AI expertise to build AI products and highlight the importance of rapid iteration and practical experience. They discuss everything from the financial implications of AI and strategies to manage cost and value, to innovative tools like Micro Agent that are shaping the future of code generation and web design. Steve shares his insights on optimizing AI use in development, the rapid advancements in AI capabilities, and the critical role of integrating AI to enhance productivity without the fear of replacing jobs. Join them as they unravel the complexities of AI, its real-world applications, and how developers can leverage these powerful tools to stay ahead in a competitive market. Plus, stay tuned for personal updates, user interface innovations, and a glimpse into the future of AI-driven design processes at Builder.io.

Special Guests: Steve Sewell

Transcript

Charles Max Wood [00:00:05]:
Hey, everybody. Welcome back to another episode of JavaScript Jabber. This week on our panel, we have Dan Shappir.

Dan Shappir [00:00:12]:
Hey. From a still very hot and muggy Tel Aviv.

Charles Max Wood [00:00:16]:
I'm Charles Max Wood from Top End Devs. Yeah. Yesterday, we had a high of 95, which was nice and cool compared to what it's been. So, yeah, I feel that. We also have Steve Sewell from Builder.io. Steve, do you wanna say hello and remind people who you are?

Steve Sewell [00:00:34]:
Yes. Hey, everybody. I'm Steve. I'm a cofounder and CEO at Builder. We make cool AI, design-to-code, design-to-live, you know, website CMS stuff, which I'm sure we'll get into. And it is only 58 degrees Fahrenheit in San Francisco. It's not hot at all. I wish it was a little hotter, to be honest.

Charles Max Wood [00:00:51]:
Wow. Is that why you're sitting in the server room? Get a little warm?

Steve Sewell [00:00:53]:
Yeah. Exactly.

Charles Max Wood [00:00:57]:
Yeah. Good deal. Well, we got you on. I've seen a whole bunch of videos from you about AI. I know you run Builder.io with Miško and a bunch of other folks. And I guess what I'm wondering, and where maybe we should start, is, okay, so what is the CEO of this, you know, hey,

Charles Max Wood [00:01:23]:
build-your-website-with-us company... you know, why do you care about AI? Like, how does that fit into the company that you're running, and, you know, how will that fit into the life of somebody who's going, hey, I'm a JavaScript developer?

Steve Sewell [00:01:40]:
Yeah. No. It's a great question. So, you know, as weird as it sounds, I'm actually a little bit embarrassed in retrospect, but I think it's probably still the right idea. I was excited by LLM progress when I was seeing it. You know, GPT-3 was interesting, but very hard to get good results from. 3.5 was a big breakthrough. Like, oh, it's easier.

Steve Sewell [00:02:01]:
It was better. It was noticeably better and easier to work with. You could talk to it like a human, not in weird ways. Yeah. I remember with GPT-3, even something as simple as ending your prompt, the one you wanted to get a completion off of, with, like, a new line or something would, like, break the whole thing. So they'd be, like, just don't do that. And, like, 3.5 was, like, let's assume humans are humans, and we will, you know, make sure it works with whatever you give it.

Steve Sewell [00:02:23]:
That was a pretty huge breakthrough. But even then, you saw, like, all these companies rushing to add it on to their product. Like, we even have VCs who invested in us, and every CEO was talking about redoing their whole road map to be focused on AI. People were rebranding their companies as AI companies. And I was like, you know, I'm thinking of, like, crypto and all this stuff. And I'm like, no. We are not doing that at all. We are dabbling.

Steve Sewell [00:02:46]:
We're not changing significant plans. But as we dabbled further, the hypothetical potential became really clear. You know, if you work in that space of, like, you know, you've got a design in a program like Figma on one end, and you've got a website or app that you probably have a mix of developers coding on, because we focus kind of on larger businesses generally. So you've got developers. You're not, like, some small mom and pop shop or something. So you've got developers writing code, and you have people who are not developers trying to put out pages or update pages or something through a CMS. Where can AI help most? You know, we've seen all these cool demos, and this is where things get confusing too: there's a lot of demos that are not representative of the average user's experience. So you could, for instance, go into ChatGPT and have it summarize a long piece of text, and every demo will do well, and every user's experience will probably be pretty good at that.

Steve Sewell [00:03:37]:
It's something that the LLMs are good at: take a large amount of information and condense it down. And if you don't like the style of how it condensed it, you know, like the language it used... like, when I paste a huge amount of stuff and I tell it, turn this into an email, it assumes that I'm, like, this corporate person emailing a million people. Like, no. We're a startup; we're, what, now 70 people. I use a pretty chill vibe when I talk. Here's some examples of how I usually talk, emulate that.

Steve Sewell [00:04:01]:
ChatGPT is still pretty bad at that, but Claude is much better, in my experience. But anyway, those are good. But then you see these other demos, like, you know, these one-off cherry-picked things of, like, hey, it built me this whole program. That's awesome. In fact, Claude with Artifacts is great at generating, like, a snake game or something. But that gives you a look into, like, what people would like to have happen. You know? People would like to... if I'm a developer that has a Figma design coming from a design system, and it's got behavior implicit in it, this is a new dashboard mock up. We've got APIs for this data. We've got components for these charts. And it's, you know, it's got a layout in Figma.

Steve Sewell [00:04:37]:
If I could just turn that into, like, almost finished code to just start me off, there's always nuances I gotta do. But if you can connect it to APIs, assemble the components, create the layout in Tailwind or whatever I'm using,

Dan Shappir [00:04:47]:
and

Steve Sewell [00:04:47]:
then let me work on it from there, that's pretty cool, especially when you know those things can actually work fairly reliably. That becomes, like, a why-not type of thing. Why am I writing this all by hand if the AI can do that pretty effectively? On the flip side, because part of what we are is a headless CMS, if you're a user trying to create new pages within a Next.js app or whatever, same thing. You've got this mock up of a page. If I could just click a button and make that become real, and then maybe use natural language and say, actually, move the button over there, or, when I click the button, it should trigger the auth flow or whatever. Rather than learn this complicated tool of, like, yes, we have the auth components registered in the tool. And, yes, we have our APIs connected over here.

Steve Sewell [00:05:28]:
And if I know how to click a hundred buttons, I could do it. But if I could just say it and have it happen, that's the pipe dream. That's the obvious reason to care. The question obviously is just how well can it do that? And then, more importantly, how can we make a reliable path to make as many of those dreams come true as possible without being full of footguns? And that's been kind of our focus in research and development over the last year or two.

Dan Shappir [00:05:50]:
So basically the domain, the space that you're working in, is essentially Figma to code.

Steve Sewell [00:06:01]:
That's a way to think about it. It's not every customer's use case. As you can imagine, if you could turn prompts into real-life stuff using the React components you have and stuff, you don't need Figma. You can just tell it to make me a thing, and it can make you the thing. But Figma is probably one of the most common ways you represent in great detail what you want to happen before it happens. So you could think of us as, like... if you saw tldraw's make real demos, you know, you draw a diagram, you hit make real, it becomes real. You can think of us as, like, an entire make real application or platform. Most of the time those are in Figma designs already.

Steve Sewell [00:06:34]:
Sometimes they're just in a Jira ticket or a Slack conversation. We imagine a future world, not too far off, where we could have, like, a Slack bot where you tag the Builder Slack bot. It looks at the thread. It summarizes your idea, implements it, sends you back a link: how does this look? Turn that to code, sync it to your code base, or just hit publish, run it as a 5% test, and see how well that does. Stuff like that, you know, can be...

Dan Shappir [00:06:56]:
So Figma to code without the Figma?

Steve Sewell [00:06:58]:
Yeah. With or without the Figma. That's one way to put it. Totally. Right.

Dan Shappir [00:07:02]:
To be fair, though, it's not really greenfield. I mean, Figma themselves, as I recall, are looking at ways of transforming their designs into code. Mhmm.

Steve Sewell [00:07:15]:
That's correct. Yeah. I think the greenfield of LLMs is... you know, I think the hype is real. I was very against making any deviations to our plans or marketing or anything just because AI looks cool. I feel like there's a lot of startups who just went, AI looks cool. We wanna be cool. Let's just do that. We put a lot more thought into it before investing behind it.

Steve Sewell [00:07:36]:
And to be honest, in most cases where I saw startups just say AI is cool, let's do AI, for lack of a better term, some would see spikes in sign ups, some would do kinda cool things, but where I saw many fail completely was when they were adding AI for AI's sake as opposed to solving major problems that only large language models could solve. So there are random ideas of things we could do with AI for the sake of doing it. But one of the biggest problems is that teams have more ideas than they can get into code at high quality, and any tool that has tried to turn a Figma design or an idea into code in an automated way is bad. That's the simplest way to put it. They do a very, very bad job, and we found a few techniques that work pretty well that can only be done with LLMs. We found, at the end of the day... like, it's funny, because I don't wanna overhype LLMs. I think they're a critical missing piece, but they are not the full solution. What I mean by that is... Right.

Steve Sewell [00:08:42]:
You know, I've got some videos about, like, use AI as little as possible, and I still firmly believe in that. Most of our AI solutions just look like, for instance, a Figma design or a prompt is the input, and the output is code. It looks like we just fed that into the LLM and got that out the other side. There's a ton more code than that, and that's good, because we have control over that. It makes the product differentiated. There's just an enormous amount of code preprocessing the heck out of everything and postprocessing the heck out of everything. But also, we've trained our own models for specific parts too. And so you can take various approaches here, but the one that's worked well for us is solve everything without AI as much as possible. Break it down into the tiniest problems that are just not really solvable with typical, you know, conditions in code.

Steve Sewell [00:09:32]:
For us, those were much smaller than you think, especially if you really grind out the problem. It's really small, but you can't do it without AI. But there's probably gonna be some point, like, if you're turning designs to code or prompts to code or whatever, where what you can't decipher with code, you could train models for yourself. You know, you could do training as simple as things like decision trees, which can work great. Mhmm. If you just don't know what value to put into a box from a set of values, decision tree training with, like, XGBoost or something can be phenomenal to wipe out a bunch of crappy conditional code into one model. Essentially, the AI takes example data and kinda writes the conditional code for you, in a sense. Fantastic. We use that for certain things.
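
To make that concrete, here's a toy sketch in TypeScript of a single decision stump, not XGBoost itself, and the feature-and-label shape is made up; the point is just how labeled examples can stand in for hand-written conditionals.

```ts
// Toy sketch: learn one threshold rule from labeled examples instead of
// hand-writing the condition. Real tools (XGBoost etc.) build whole trees.
type Example = { features: number[]; label: boolean };

function trainStump(examples: Example[]) {
  let best = { feature: 0, threshold: 0, accuracy: 0 };
  const featureCount = examples[0].features.length;
  for (let f = 0; f < featureCount; f++) {
    for (const { features } of examples) {
      const threshold = features[f];
      // Count how often "feature >= threshold" agrees with the label.
      const correct = examples.filter(
        (e) => (e.features[f] >= threshold) === e.label
      ).length;
      const accuracy = correct / examples.length;
      if (accuracy > best.accuracy) best = { feature: f, threshold, accuracy };
    }
  }
  return best;
}

// Replace a pile of hand-tuned conditionals with the learned rule
// (labeledLayoutExamples is hypothetical training data you'd collect):
// const stump = trainStump(labeledLayoutExamples);
// const decide = (features: number[]) => features[stump.feature] >= stump.threshold;
```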

Steve Sewell [00:10:13]:
You could do fancy stuff like random forests, you know, a bunch of decision trees to help you make a decision. Awesome. But what nothing can really do well outside of an LLM is understand the meaning of things. And that's where, again, the LLM, in our experience, should not be treated as this opaque box where very raw inputs go in and very finished outputs come out the other side. But when you start identifying... here's a basic example. You've got a responsive design, and it's got desktop and mobile. And on desktop, you've got a nav, you know, with all the links horizontal, and on mobile, we've got a hamburger menu. There's no world where we could solve that without an LLM. But, again, we're not passing the designs in and saying figure it out. We're not doing screenshot-to-code, which is kinda awesome sometimes, but always not quite right.

Steve Sewell [00:10:58]:
People want... you know, you designed with precision. You used specific design tokens and Figma components. So it's mapped, and the design tokens come up in your code. When a screenshot is the conduit, you lose all that information, and you try not to lose that. Unless, you know, again, you're just building some random project on the side, maybe you don't care. It's just beginning. But if you're doing work for a company, you need a lot more than that for it to be useful. You don't wanna have to get code and rewrite it all. And so you need to use some type of LLM to understand the meaning of that and say, oh, this should become an interactive hamburger menu.

Steve Sewell [00:11:29]:
It did not have to figure out all those other things, how components map, how design tokens are mapped, all this stuff. We figured that out in advance, and we've found a format to pass that into the LLM heavily preprocessed, you know, where all we're saying is: here's baseline code that's almost done. It's just ugly. It's poorly named. It's poorly structured. And it might have some semantic misses. Like, the accessibility might not be quite right, because we programmatically generated this with old school code and old school models, which are fast and reliable and best to use when you can. But all you want it to do is some cleanup of the code: refactor this, name the components better, give it some props and better class names. And then the horizontal nav... maybe our default logic just turns it vertical, because that's kind of a rule that works pretty well in responsive design.

Steve Sewell [00:12:16]:
Mhmm. Stuff side by side becomes vertical on the, you know, narrow screen. When that clearly didn't work well and you actually wanted a hamburger menu, have the LLM take a pass at that. Maybe that's the area that's least accurate, but still pretty much there, and then you're not having the developer at the end of the day have to tweak everything. They may just have to make some small tweaks to that hamburger menu. That's a bit rambly, but that's maybe an example.

Charles Max Wood [00:12:38]:
I kinda wanna back up a little bit. One thing here that I'm just gonna throw out there: we talked to Obie Fernandez on Ruby Rogues, and he's got a book about, you know, building and working with LLMs, and, you know, talking to the APIs. He's building chat assistants. Right? So we're kinda talking about a different problem set, but I think a lot of the ideas are the same, where he basically breaks it down and says, yeah, so you have, like, a GitHub AI chatbot. Right? And so it knows about its APIs and its special things that it can do. And then you might have some other bot that knows kind of the next level up that orchestrates things. Right? So he was advocating, too, to, like, break things down into really, really granular things and have it come together that way. Of course, his stack is kind of a stack of AIs, and you kind of figure it out and set up context and things like that.

Charles Max Wood [00:13:33]:
But I imagine a lot of people... and there are some comments to this effect on here too. And I think this is kinda where I wanna start with what you've talked about. You know, Jack Harrington on Twitter said, what are the wrong ways to integrate AI into your application? What are some of the right ways? And you kinda got into, you know, breaking the problem up and things like that. But then, yeah, Charles G said, AI is only reliable for prototyping and search. It feels like vaporware for most other stuff. It's very...

Dan Shappir [00:14:06]:
think that the beta

Charles Max Wood [00:14:07]:
used models, you know, gotten off the hype train,

Steve Sewell [00:14:12]:
and

Charles Max Wood [00:14:12]:
and I think some of the stuff's gonna get better. But, yeah, so you talked about these specific instances, but what are the problems that it solves well? Like, where do I look at my stack and go, okay, I want it to hit here. And, you know, maybe I'm gonna hit, you know, this and this and this, and I'll have different models or different LLMs that I hit or whatever. But how do I know that this is a good fit? And then how do I start putting that in there? Because it does feel like some things are moving this way. Other things, yeah, it's not there yet. But...

Dan Shappir [00:14:48]:
So before Steve answers, and I'll definitely let Steve answer, I just wanted to mention that one of the catalysts for this entire conversation is an excellent blog post that Steve wrote and posted on the Builder blog, which is titled how to build AI products that don't flop. So I think that in a lot of ways, what I assume you'll be saying kind of also addresses a lot of the issues that were brought up, or the stuff that you actually wrote in that blog post. So, first of all, we will post that blog post here in the chat, and I highly recommend for people to go and check it out. And I have to say that it also resonated a lot with me, because of stuff that we are doing at Sisense, which is the company that I recently joined, which, it turns out, has a very similar philosophy in how we're using AI in our own products. And maybe I'll touch on that after you kind of answer the questions.

Steve Sewell [00:15:56]:
Yes. No. This is a great question. And so, you know, at the end of the day, maybe it starts by covering, like, what are the AIs bad at and why? And that can help us distill down what they're good at as an alternative, and why. So what they are bad at... the biggest problem that LLMs have is what people call hallucinations. I don't love that term, but it's the term people use. They just make things up, and they make it up confidently. And it's probably a result of how they're trained.

Steve Sewell [00:16:27]:
They're just trained on lots of information of people saying... you know, they're just saying things. You know? They're saying things as if they're...

Dan Shappir [00:16:33]:
Actually, my kids say that it's the same approach that I take with most everything.

Steve Sewell [00:16:39]:
Just say what you think as if it's certain

Dan Shappir [00:16:41]:
I say what I think and I say it very confidently.

Steve Sewell [00:16:44]:
Exactly. Yes. I know many people in my life, and I probably am one as well, who will do that. Whatever you think, whatever kind of sounds right, you've got complete confidence in it when saying it out loud.

Charles Max Wood [00:16:55]:
I've never done that ever ever.

Steve Sewell [00:16:58]:
So you think about it. What's the training data? If it's an AI trained on emulating the training data, and the training data is humans saying things as if they know everything, then the AI just says things as if it knows everything. In the however many billion parameters they use, they can't store all the information of the world. And it seems like, to date, people have still not figured out a reliable way of having an LLM say, sorry, I don't know the answer to that. They will just make up dates and times and answers and stuff like that. It's very annoying.

Steve Sewell [00:17:26]:
Well, let me

Charles Max Wood [00:17:27]:
I just wanna chime in here, because this has always been a problem with AIs. Right? You have a certain probability of not getting the right answer. The difference is that the answers we're looking for now are a fully written out email or code or things like that, right, where you can be mostly okay except for these couple of things. Where in the past, it was generally something like AI vision or something like that. And so if it... Mhmm. ...if it didn't always identify the dog as a dog, people would just kinda, you know... as long as it was generally accurate, it was useful. And now it's problematic, because it's generally accurate, but that's not good enough.

Steve Sewell [00:18:11]:
Exactly. Or especially if you think about it... yeah. A good example: we have, like, one of those apps that lets us know when an animal goes in front of the camera at home. So we think it's our dog, and it turns out it was just a shadow. But it's like, who cares? Whatever, the AI is wrong, who cares? When it's not a who-cares is when it's an essential part of a product. The product has a flow from start to finish, and the LLM is wrong 5% of the time. That's a huge problem, especially when you use LLMs for multiple steps.

Dan Shappir [00:18:39]:
Or if it's wrong a hundred percent of the time, but to a 5% extent.

Steve Sewell [00:18:44]:
Yeah. Exactly. Exactly. That's a problem. Those add up. Yeah. If you think about it, everybody loves the idea of AI agents that could complete multi-step tasks end to end. Well, those little errors compound to become big problems.

Charles Max Wood [00:18:55]:
If you've

Steve Sewell [00:18:55]:
ever used something like auto g p t, it derails. As a big problem. It starts generating a mess of data. Has no clue it's been off the rails for the last hour. It just makes it worse, and and that's a huge issue with with this kind of future we want. But there are solutions to this. So actually, there's 2 solutions that we found to be extremely effective. One has to do with actually that blog post you mentioned, Dan, and one has to do with some, like, micro agent techniques we've been, really investing behind both in an open source project we've we've recently open sourced and, some work we're doing internally in the product.

Steve Sewell [00:19:30]:
So solution one, that works fantastic. Let me give an example. Builder has lots of docs. People don't know the answers to their questions, and it's tedious to try and comb through all the docs to find your answers. No matter how we try and restructure the docs or surface the right information at the right time, it's never good enough. And I know that when I use other people's products too.

Charles Max Wood [00:19:49]:
I was gonna say you're not the only ones.

Steve Sewell [00:19:51]:
I promise. It's everybody's problem. Yes. It's very difficult. And so we're like, okay, let's feed all of the information of our docs into, you know, the context window for an LLM. And that's the big change that's happened recently: the context windows have gotten freaking massive, and that's huge. I've even seen papers on using the large context windows being more effective than fine tuning.

Steve Sewell [00:20:18]:
So rather than fine tuning with hundreds or thousands of examples, just fit 10 examples in the large context, and you'll outperform. But they didn't even mention... or maybe they did and I missed it... the biggest thing...

Charles Max Wood [00:20:28]:
that the things that Obie talked about in that episode too. Beautiful.

Steve Sewell [00:20:32]:
Yeah. And so what's practical... because I assume if we're talking JavaScript devs, we're talking practitioners. They wanna use the AI. The practical benefit of that is you don't need to assemble those thousands of examples. You don't have to make a separate fine-tuned model instance for every use case. You can assemble things on the fly or make changes and experiment at a faster rate. It really opens up a lot of benefits. And so here's what didn't work well. If you go to ChatGPT and ask it a question like, how do I add a user in builder.io via API, it'll tell you.

Steve Sewell [00:21:02]:
Go to builder.io slash api slash users and POST. That API doesn't exist. There's not an API to add users to your account in Builder through APIs. So it will tell you that it exists. That's a huge problem. So then if you try and augment this with, okay, we're gonna take the ChatGPT or Anthropic Claude APIs, and we're gonna supply our API documentation into it. Maybe we'll do some fancy retrieval based on embeddings and semantic search to find the right docs to include in that context and send it. It'll get better, but it still will tell you, to add a user, go to API v1 slash users.

Steve Sewell [00:21:38]:
And so the thing that works extremely well... what the LLMs are really good at, in my experience, is condensing information down. So you tell it very clearly: here's a set of information, and you are only allowed to answer questions using this information. If that information has the answer, condense it down and provide the answer. If it does not, say, I don't have an answer to this; it probably doesn't exist. That works wildly well in our experience. Whether that's here's a lot of code, simplify the code. Whether that's here's a transcript, tell me the key points discussed. Whether that's here's the API documentation, answer questions about the APIs.

Steve Sewell [00:22:11]:
As long as you firmly say you can only use this information and nothing else, it works. I mean, almost a hundred percent, and I mean almost in terms of, like, LLMs still can surprise you from time to time. Use a good one. So, like... I hope people aren't using GPT-3 or 3.5 anymore for most things. There's better options. But if you're using something like Anthropic's Claude 3.5 Sonnet, and you're giving it lots of information, more than it needs, and telling it to reduce and only use that information, it works great. Another example: we have this assistant in our docs that answers your questions, and it works a lot better. Another thing it started doing, though, is making up links.

Steve Sewell [00:22:44]:
Like... oh, I added to the instructions at one point, include links as much as possible, because a lot of what you're doing, or a lot of what we hope AI could help with in navigating docs, is just knowing, for my use case, what docs I need to read. And so I told it, fill the answer with links. Maybe just a way to quickly find the right links is a good example. But it started making up links. So what did I do? I took our site map, essentially, and some additional context on each link, and I fed that into the prompt and said: use links, only these links, nothing else. And then, fantastic. It always links to things that are relevant.
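
A minimal sketch of that grounding pattern, where everything is an assumption for illustration: callLLM stands in for whatever chat completion API you use, and the docs and site map are strings you've assembled yourself.

```ts
// Hypothetical stand-in for your chat completion API (OpenAI, Anthropic, etc.).
declare function callLLM(system: string, user: string): Promise<string>;

// Ground the model: it may only answer from the docs we supply,
// and may only link to URLs from our real site map.
async function answerFromDocs(question: string, docs: string, siteMap: string[]) {
  const system = [
    "You are a docs assistant.",
    "You are only allowed to answer using the documentation below.",
    "If the documentation does not contain the answer, say:",
    '"I don\'t have an answer to this; it probably doesn\'t exist."',
    `When linking, use ONLY these URLs, nothing else: ${siteMap.join(", ")}`,
    "--- DOCUMENTATION ---",
    docs,
  ].join("\n");

  return callLLM(system, question);
}
```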

Steve Sewell [00:23:21]:
The more you do that... the more you tell it to do nothing more than condense information and use nothing more than the information provided, and just add a copious amount of information, because these context windows are massive... the more it can work phenomenally well. So that's a use case that can be great. Another example of that is how our AI design-to-code works. We have old school code and old school models generate fairly accurate code for a design. It's just verbose and ugly. It's just too much. It's one massive component of div soup, and the classes are named, like, div 1, div 2, div 3. If you take that and pass it to an LLM and say, just reorganize this into multiple files and components, well named, rename the classes accordingly, etcetera... again, give it a model like the latest 3.5 Sonnet. GPT-4 is pretty good, but Sonnet's really good.
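
A sketch of what that cleanup pass might look like; again, callLLM is a hypothetical stand-in for your model API, and the prompt wording is illustrative, not Builder's actual prompt.

```ts
declare function callLLM(system: string, user: string): Promise<string>;

// Post-process programmatically generated "div soup" with an LLM pass:
// the hard layout work is already done; the model only cleans and names things.
async function cleanUpGeneratedCode(baselineCode: string) {
  const system = [
    "You are refactoring machine-generated UI code. Do not change behavior or layout.",
    "Split it into well-named components and files.",
    "Rename classes like div1/div2 to meaningful names.",
    "Fix obvious semantic misses (e.g. accessibility attributes).",
  ].join("\n");

  return callLLM(system, baselineCode);
}
```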

Steve Sewell [00:24:06]:
It does a fantastic job. And so, in the past, we spent an enormous amount of time trying to take generated code and make up class names. How the heck are you gonna do that? That's back into that bucket of meaning. We didn't have meaning in any type of code or model before LLMs, really, for this type of use case. But if you can distill down that way, it works fantastically well. And then the last piece... and I'd love to touch on the agent piece, because there's one other technique that's great... the last piece we've learned is the user interface matters a lot too. And there's a user interface pattern that is just almost always wrong, yet we see almost everyone jump to it all the time, including us. Which is: you want to assume that the AI is never perfect. And especially because, when we're talking about meaning and summarization and stuff like that, it will not get everything right the first time. Even if it's accurate, it may not be quite exactly what you wanted.

Steve Sewell [00:25:00]:
So what you don't want is a UI like we used to have, which is, like, if I wanted to get Builder to update my content in some way, it used to be: click a button, then you get a box, then you type in it and hit submit, and the box goes away and it makes your update, and it, like, acts like you're done. Most likely, you're not done. Most likely, it got you closer, but you're not quite there, at least some... Right. ...portion of the time. What we've actually been landing on is that a chat interface is almost always the right interface for an LLM, even if your use case is not chat. So if your use case is imperfect design-to-code, and here's code, you should have a chat interface next to it saying, what would you like to change about the code? Oh, I forgot to mention I'm using Tailwind. We'll update it. Mhmm.

Steve Sewell [00:25:37]:
There's too many components, or there's not enough components. You should be able to constantly iterate, and the chat format lets you maintain that context and never assume it has to be done the first time. Or, what's coming soon for us is importing Figma designs and then, like a Figma prototype, where you have this mock up of, like, clicking this launches this modal and does this thing and updates this data. Well, we're gonna suck that in and make that real. But, you know, Figma is not a spec. It is a suggestion. Right? And so we need to treat it that way. I mean, we shouldn't treat it as a spec. It's a description. So we're gonna treat it that way.
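
The keep-iterating chat shape might look something like this: a running message history, so each correction builds on the last result. The callLLM function and message type here are assumptions, not a specific SDK.

```ts
type Message = { role: "system" | "user" | "assistant"; content: string };
declare function callLLM(messages: Message[]): Promise<string>;

// Chat-style refinement: the output is code, but the interface is a
// conversation, so the user can keep nudging ("I'm using Tailwind", etc.).
async function refineCode(initialCode: string, corrections: string[]) {
  const messages: Message[] = [
    { role: "system", content: "You edit the user's UI code as instructed." },
    { role: "assistant", content: initialCode },
  ];
  let latest = initialCode;
  for (const correction of corrections) {
    messages.push({ role: "user", content: correction });
    latest = await callLLM(messages);
    messages.push({ role: "assistant", content: latest });
  }
  return latest;
}
```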

Steve Sewell [00:26:12]:
We're gonna make some assumptions. And then when, actually, you want clicking this to do that, you should be able to say it and see that happen. So that's one big bucket of learnings. The other big bucket of learnings is if you want to use LLMs on a loop. You know, you could describe an AI agent as just an LLM on a loop of do a thing... Yep. ...analyze the thing, do the next thing, decide when you need to stop.

Charles Max Wood [00:26:35]:
Right. The only thing I would add is that whatever you're telling it gets added to the context window and things like that, so that it knows what you've already done and what you've already told it.

Steve Sewell [00:26:44]:
Correct. And you could also add that it's taking actions at each step. So... Right. ...the prior thing feeds into the next thing, the next context, and it takes another action, and it continues. Right. Those are... I think people firmly understand how we will be able to do even more magnificent, wild things with AI at some point. You know, like I mentioned, we envision a world where... another example is, like, assign a Jira ticket to Builder and have it just implement that thing, and then you take a look at that. Or the Slack example too. Here's our idea, implement it.
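
Reduced to a sketch, that loop is: decide, act, feed the result back into the context, stop when done. Every helper here is a hypothetical placeholder.

```ts
declare function decideNextAction(history: string[]): Promise<{ done: boolean; action: string }>;
declare function executeAction(action: string): Promise<string>;

// An "agent" is just an LLM on a loop: each action's result is appended
// to the context so the next decision builds on what already happened.
async function runAgent(goal: string, maxSteps = 10) {
  const history: string[] = [`Goal: ${goal}`];
  for (let step = 0; step < maxSteps; step++) {
    const { done, action } = await decideNextAction(history);
    if (done) break;
    const result = await executeAction(action);
    history.push(`Action: ${action}`, `Result: ${result}`);
  }
  return history;
}
```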

Steve Sewell [00:27:14]:
Because Builder has an API and could hook right up to your live app, you could have it... if it powers your home page, like it does for lots of customers, like J.Crew or Zapier or whatever, you could just tell it, like, hey, update the home page with this, and it'll just go do it. Maybe, to be safe, it runs it as a 1% test, shows you in a day that the data is good, the A/B test is winning some, you know, metric, and then, great, scale it up, you know, stuff like that. Awesome. Just be my homie that helps me do tedious stuff that I didn't feel like doing. The problem, though, with agents that have to take a sequence of actions is how they derail. Those small errors compound.

Steve Sewell [00:27:49]:
So you need some type of mechanism to bring them back and analyze whether they're off track or not. And so this is a technique we've been using. We call it, like, a micro agent technique, because it's about being specialized. And I think this is a topic that has to do with product development in general, which is don't try and boil the ocean. Like, don't try and build Devin, the world's do-everything software engineer. Then suddenly you're solving everyone's problems simultaneously. It's an impossible task. It's not a good idea. Rather, be an agent for one specific type of thing, starting there and then building up over time through feedback, through iteration, through all that stuff.

Steve Sewell [00:28:24]:
Mhmm. That's how I always believe in building products that work better, for me. So we have this open source project called Micro Agent, where the technique we've realized works really well is, on each step, you can have something that's not AI... we're exploring whether AI can be the check, but I haven't had good results yet... something that's not AI can essentially test if that step was successful, and, if it was not, feed feedback in and then let the LLM run again, so it has the feedback from the last iteration. So, like...

Dan Shappir [00:28:54]:
So, for example, you said that you had the LLM output doc explanations that were link heavy. And you said that you verified the links by limiting it to your site map. But another thing you could do is obviously just look at the links in the response, test them out, and, if you get, I don't know, a 500 or something, then you can feed that back and say, okay, that link is broken. Don't provide that as an answer. Something like that.
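
That verifier is easy to picture in code. A minimal sketch using the built-in fetch in Node 18+; extractLinks is an assumed helper, and the feedback wording is illustrative.

```ts
// Assumed helper that pulls URLs out of the model's answer.
declare function extractLinks(answer: string): string[];

// Non-AI check: actually request each link and report broken ones,
// so the feedback can be fed into the next LLM iteration.
async function findBrokenLinks(answer: string): Promise<string[]> {
  const broken: string[] = [];
  for (const url of extractLinks(answer)) {
    try {
      const res = await fetch(url, { method: "HEAD" });
      if (!res.ok) broken.push(url); // e.g. 404 or 500
    } catch {
      broken.push(url); // network error: treat as broken
    }
  }
  return broken; // feed back: "These links are broken, do not use them: ..."
}
```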

Steve Sewell [00:29:28]:
Spot on. No. That's a perfect example, and that's what we've seen. So I'll give you a couple more use cases for this. So, one: how the Micro Agent project works is it's a CLI, and instead of going to, like, ChatGPT and saying, give me some code that converts markdown to HTML, you instead run the CLI, micro agent, and describe that: convert markdown code to HTML. Instead of just giving you code, where it's your problem if it doesn't work, it generates a test first, and it'll have all these input/output examples of markdown to HTML. And then it'll say, does this look good?

Steve Sewell [00:29:59]:
You can give feedback on the tests, like, this is wrong, or add more or less or whatever, or just say okay. Then it'll write the tests, and then it'll write code. And every time it writes code, it'll run the tests, and any test failures feed back in. It's AI TDD. And it works really, really well, especially for certain use cases that AI generally wasn't good at before. And so you get this sort of guarantee: with AI spitting out code, you no longer have to hope that the code works, because it'll look good and it might work on one example but not others. The tests will ensure it works on all the examples.
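
The shape of that test-first loop might look like the sketch below; the generate and run helpers are hypothetical placeholders, not Micro Agent's actual internals.

```ts
declare function generateTests(description: string): Promise<string>;
declare function generateCode(description: string, tests: string, feedback?: string): Promise<string>;
declare function runTests(code: string, tests: string): Promise<{ passed: boolean; output: string }>;

// AI TDD: write tests first, then loop code generation against them,
// feeding failures back until everything passes (or we give up).
async function microAgentLoop(description: string, maxAttempts = 5) {
  const tests = await generateTests(description); // user can review/edit these
  let feedback: string | undefined;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const code = await generateCode(description, tests, feedback);
    const result = await runTests(code, tests);
    if (result.passed) return code;
    feedback = result.output; // failing test output guides the next attempt
  }
  throw new Error("Tests still failing after max attempts");
}
```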

Steve Sewell [00:30:38]:
And when I run micro agent and have it generate big, complex things for me, I rest easy that, if it passed all the tests, and the tests it generates are pretty thorough, it works. And then, if we find any issue subsequently, we'll update the tests accordingly. So it actually is a much more high-confidence solution. One other example: Figma has this concept called component sets, where... it's how you represent a button with all kinds of variations, you know, like color primary,

Dan Shappir [00:31:03]:
size, or It's a design system sort of thing?

Steve Sewell [00:31:05]:
It's a design system feature. Exactly. It's how you have something similar to React components in Figma. So you can place the button and change the color from primary to secondary, you know, error state true, you know, whatever. It's cool. It's weird as hell how it works in Figma. In Figma, you actually design... so if you have, like, 3 different props with 3 different options each, you have to make 9 designs.

Steve Sewell [00:31:24]:
You have to manually code up or design every possible combination. It's funky, but it's effective. So, to translate that to code, what we do is generate baseline code for every single variant, and then we tell the LLM: consolidate this all down into one piece of code with props, you know, in React. And, to verify the LLM did that correctly, we run it through tests. Because we know what the end state of every combination of props should look like, we take what the LLM provided and test it against every end state. We give it every combination of props and make sure that, essentially, the snapshot is correct to the original spec. And, if it's not, we feed the feedback in. And we've actually found you can play with it. You can either feed this to a slow, high-quality model... like, before 3.5 Sonnet, Anthropic's best was Claude Opus... and, it was not obvious, but I've had a way better experience with Anthropic models in general than OpenAI models.

Steve Sewell [00:32:22]:
That's becoming more popular now and more discussed, but it was a little bit of a hot take in the past, or less common knowledge. But, anyway, you can play with the knobs. You could either say, we're gonna do the big, expensive model, which probably takes fewer iterations. Or you can scale down... like, we actually had good results with Anthropic's Haiku, their smallest model, and it would take 4 or 5 iterations, but that would run faster than the 2 or 3 on Opus, and you're still guaranteed an accurate result at the end. So, if you have that automated check that can feed into the AI, it can work really well. And that's kind of my point again. Hopefully this illustrates good examples of how AI can't be the entirety, the brains, of your product, but you can isolate these different techniques and use them to accomplish things that just would have been kind of impossible previously, so to speak.
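
A sketch of that variant check: renderToStaticMarkup is real React tooling, but the prop-combination list and expected snapshots are an assumed shape, based on the idea that the original per-variant baseline code tells you what every combination should render.

```tsx
import { renderToStaticMarkup } from "react-dom/server";
import * as React from "react";

type Variant = { props: Record<string, string>; expectedHtml: string };

// Verify the LLM-consolidated component against every original variant:
// each prop combination must render the same markup the per-variant
// baseline code produced. Failures get fed back for another LLM attempt.
function verifyConsolidatedComponent(
  Component: React.ComponentType<any>,
  variants: Variant[]
): string[] {
  const failures: string[] = [];
  for (const { props, expectedHtml } of variants) {
    const actual = renderToStaticMarkup(<Component {...props} />);
    if (actual !== expectedHtml) {
      failures.push(`Props ${JSON.stringify(props)} rendered wrong markup`);
    }
  }
  return failures; // an empty array means the consolidation is faithful
}
```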

Dan Shappir [00:33:07]:
Related to what you just said... one of the salient points that you made in the blog post, and it's kind of related to what you just said right now, is that, in all cases, it's not about building your product to be wholly AI-centric with a thin layer around it. It's taking some existing service or product that solves a real problem and then sprinkling in AI to make it better. And so it's the reverse of what the VCs might have liked, but it's much more grounded and down to earth. And, like I said, I'm seeing a very similar thing at Sisense. We build dashboards and stuff for BI. And so, in the past, it was, like, you know, you basically need to either, in the best case scenario, drag and drop till you build your dashboard, or you might, you know, do some coding to build your dashboard. Well, now we're working on, like you said, sort of a chat thing where you can describe the dashboard that you'd like and you get it. But, at the end of the day, it uses all the components and know-how and capabilities that we've already built for constructing dashboards.

Dan Shappir [00:34:39]:
So it's adding AI to existing infrastructure to make it better and to make it more approachable, I might say, and easier to configure, without, you know, specialized knowledge.

Charles Max Wood [00:34:57]:
What you're talking about, Dan... it sounds like it's a different level of interactivity. Right?

Dan Shappir [00:35:04]:
Look. At the end of the day... you know, when Google came out, what was it, 20-something years ago? I said, why aren't all UIs like that? Instead of pull-down menus and stuff, just give me a text box where I can say what I want and the software does it. Well, we're finally kind of getting that, in a way.

Charles Max Wood [00:35:32]:
I think I have a question related to this, because, you know, Dan says we're finally starting to get to this. And it seems like, you know, we've talked about, hey, we had GPT-3, which was... we had GPT-3.5, which was okay. GPT-4 is pretty good. I mean, are we gonna continue to see this kind of thing, where we get more of the text box, or... I mean, the demos kinda made me upset, because, you know, then you'd see people trying to do the demo where they were talking to GPT-4, and it wasn't working for them like the demo worked. But, you know, we're a lot closer. You know? How does this continue to advance?

Steve Sewell [00:36:21]:
Yeah. I mean, the assumption we're making... it's hard to perfectly quantify this, but models will get something like 10, 20% better year over year. They'll get, you know, some percent fewer hallucinations or weird hiccups or weird problems. If you have a 95% success rate, maybe next year you have 96, maybe the next year you have 97. And these things tend to slow down, though they also tend to be little S-curves as well. Like, I think... Mhmm. ...one, not counterargument, but counterpoint to the idea of it slowing down completely is things like... I don't know if you've used Groq. G-r-o... I'm forgetting the spelling. One is the Elon Musk thing, but one is, like, a hardware company.

Steve Sewell [00:37:02]:
The hardware company, I think it's g-r-o-q, is super interesting, because they can run Llama 3 at insane speeds. You type in their box, you just get tons of text quickly, and they were one of the first to realize it at a fully commercial level. Like, you can make hardware optimized for LLMs. And it's not just, like, a small percent better. It's, like, 10 times faster and cheaper. Right. Which is crazy. And so we are not currently using Groq, because we need larger context windows, and Llama 3 and all the others don't support the huge context windows that, like, OpenAI and Anthropic have.

Steve Sewell [00:37:37]:
I'm hoping... I know Meta is still training Llama 3 400B, I think it is. And so, you know, maybe that one has a larger context window. I don't know. That could be huge. But breakthrough innovations like that could accelerate things. One example of that is, like... hey, people love the idea of... you know, as an example, because Builder has an API and SDK, it can dynamically render your React app and components when you drag and drop or use AI to modify it. You can see it all in real time.

Steve Sewell [00:38:04]:
So it feels like Figma, but it's actually your React or Qwik or Vue or whatever app, so that's cool. And then changes can turn to code, or hit publish and they're live to your users. So you can get people who are not even developers making changes, pushing updates, marketing pages, or whatever you want. But what people really love the idea of is, well, why are we hard coding apps at all? Can't we make components and understand things about our users? And when I jump into, you know, jcrew.com, can it just dynamically produce an experience that's one-to-one personalized and fitted for me? Now that's, like, wow, that's cool. No, we're not even close to that. I've watched the code generate slowly enough times to say we're not even remotely there. But then, when things like Groq come out, which is probably the first commercially successful... commercially successful as in, like, I see people using it, I'm using it myself from time to time...

Steve Sewell [00:38:56]:
...breakthrough in LLM performance, that was a big step towards that. You know, you see that paper of, like, what is it, 1.58 bit, yada yada, for LLMs. It's like, hey, we might take another leap forward there as well. I don't think Groq is using that architecture... it does something differently... so that might be a subsequent innovation to add on. Now it's like, okay, if we can do these in real time, maybe that is possible. Only for certain parts of your app.

Steve Sewell [00:39:18]:
It's not, you know... it's not like you deploy an empty repo and suddenly applications build themselves in front of users in real time. But, like, you could start small. Like, we were working with this very large company on a use case where they wanna just be able to type in... a very common use case... they wanna type in a query and fetch the data associated with whatever that is. So, like, I just wanna see this data, and it can pull the data and visualize it for you. And the way they're doing visualization is code generation. But, just like any tool... and this is very common, whether using Anthropic's Claude Artifacts or tldraw's make real... you always get just, like, funky, plain old raw HTML or React. It's not using your APIs, your components, your Tailwind, whatever.

Steve Sewell [00:40:02]:
It's not code that's gonna go to production. It's all throwaway code. But what if you could instead, in real time, either offline or online, assemble what you have, your pieces? The offline use case would be, like, you know, I'm just gonna import a page from Figma, and then I'll hit publish, and then it'll go online when I'm done editing. So that's ultimately an offline use case. The online use case would be: show me these things on demand. So this large company that wants to surface your data wants to generate code effectively with your components. Well, we already have these SDKs where, if they're aware of your components, they can dynamically render out your stuff in real time.

Steve Sewell [00:40:42]:
We're exploring, like, hey, you type your query, and it's aware of just the components you want it to be for this use case: charts, diagrams, pie chart, you know, table, etcetera. It can just dynamically produce UIs based on your specific query in pretty much real time, using Llama 3 on Groq. That actually works pretty well. So that's sort of, like, your first online, on-demand generation. And that makes sense for logged in users, maybe paying a certain amount per month. You know, that could be justified by the cost of serving that. But, if we keep making these innovations on cost and speed, etcetera, you could get to a world where online generation of parts, or larger parts... I mean, let's say you're J.

Steve Sewell [00:41:21]:
Crew, maybe you want to manually merchandise that hero. You wanna promote this new product line, so the hero is this thing. But maybe, down below... whatever. Amazon highly personalizes the products you see. What if you could just throw that at the LLM, and the whole UI is based on what you'd be interested in seeing? Those are kinda interesting. I can't remember if there was a question here or what I was answering, but I wanted to throw that in, because I think it's an interesting direction that we might get to.

Dan Shappir [00:41:44]:
Maybe we'll also get to a world where it's not just... what's their name? The only company actually making money off of AI is... I'm blanking out.

Steve Sewell [00:42:00]:
So... Microsoft? Is Azure powering everything?

Charles Max Wood [00:42:03]:
I was gonna say Microsoft.

Dan Shappir [00:42:04]:
No. Who makes the hardware?

Steve Sewell [00:42:08]:
NVIDIA.

Dan Shappir [00:42:10]:
NVIDIA. NVIDIA is the only company actually making money off of the AI revolution.

Steve Sewell [00:42:16]:
Their stock price, definitely. Yeah.

Dan Shappir [00:42:17]:
Yeah. Yeah. Because you raise money as a startup and then spend all that money paying NVIDIA.

Charles Max Wood [00:42:24]:
You know, this kinda gets into one of the other questions I had, and you mentioned this. The previous question that you were answering was, you know, how does this continue to advance? And I think you pretty well answered that. Beyond that, you kinda got into more the arena of cost and speed, right, as opposed to capability. And, of course, cost and speed kind of play into capability. Right? Because, if it has infinite cost, then I can't provide it to my customer unless they have an infinite bank account. And... Yes. ...similarly, you know, if it's not fast enough, then, again, it lowers the utility. So one of the things, as I've talked to some people who are beginning to adopt AI features into their stuff, is, yeah, up to a certain point it's great, and then it gets expensive.

Charles Max Wood [00:43:22]:
Right? So how do you start to manage some of that?

Steve Sewell [00:43:29]:
That's a great question. That's something we've looked a lot at, and we have a few different findings. First one, I can tell you just from a consumer point of view: I hate the idea of all these different applications trying to charge me another $20 per month per user for their AI features. Hey. Right. You know, part of me is like...

Charles Max Wood [00:43:46]:
You solved my problem. Thank you.

Steve Sewell [00:43:48]:
Yeah.

Charles Max Wood [00:43:49]:
I don't wanna pay more for it.

Steve Sewell [00:43:51]:
Exactly. And it's like... the part that kills me is, and I know no product should ever do this, it just doesn't make sense, but I'm, like, I know you're charging me that amount because the LLM is expensive. Can I just supply everyone my OpenAI or Anthropic key, and they can just bill me based on usage, and you can just make sure the way it works with your product works really well? I don't imagine anybody doing that. Obviously, when we make open source AI projects, and we have several, yes, you just supply your key, and then it works that way.

Steve Sewell [00:44:19]:
But, yeah, those things really add up. And, in a lot of cases, I think the companies are just trying to make sure that their bottom line is taken care of. They know that, if they don't charge $20 a month and you use this heavily, they might be underwater on you, and that's a big problem. And a SaaS company usually wants to have 80% margin. So, if you're gonna cost them... Yeah.

Dan Shappir [00:44:36]:
But we're back to the Uber model. Like, raise a whole lot of money and then basically subsidize your users.

Steve Sewell [00:44:45]:
That's how... yeah... GitHub Copilot started out. They were losing money on a per-user basis, but aren't anymore. At least that's what was reported. And so, yeah, I think we're definitely seeing that across the board. Also, a lot of startups losing money on the training costs and all the other costs as well, you know, VC money. But, you know, you can probably put a value on it. It doesn't even take that much math or mental exercise to put a value on various things. For instance, we just have our AI in our docs for free, and anyone can prompt it and use it any way.

Steve Sewell [00:45:20]:
In a lot of cases, you can, one, just ship it and see what it costs. It doesn't cost a ton. And, two, we actually know... you know, our platform is probably in the bucket of... it's not trying to be like a Vercel, where it's one button and you never do anything, you never have to know anything. It's definitely... there are things to learn, and more power to gain as you learn.

Dan Shappir [00:45:41]:
Also, your model is different. I mean, at the end of the day, your model is a customer, every once in a while, needs the AI in order to translate their Figma designs into code. I assume they don't do that every day. Right. Whereas Vercel is, you're not even a paying customer yet. Here, we'll show you how we take your whatever and use AI to turn it into whatever. Yes. It's a totally different model in terms of the finances.

Steve Sewell [00:46:22]:
Exactly. And that's where you just have to make sure that the value you're providing, and the rate at which the user needs AI to get that value, costs less than what you're charging them. And, in an ideal world... you know, what we've done in a lot of cases, it's kinda back to that point of making sure you do as much as possible without the LLM. It's going to be faster, cheaper, better for 99% of the flow between whatever input and whatever output is needed. And so that's how we cut down on cost dramatically as well. Use our own trained models when we can. If you are in a world where... you know, Charles, you were mentioning the example of, like, you might use multiple LLMs, and we've done that in some cases. Like, there's various ways you can do this, but there's certain use cases where you might want an LLM to plan the work first and then another LLM to execute on the plan.

Steve Sewell [00:47:15]:
That can be a really good one. It can be not expensive if you don't need to feed too much context in multiple times. The output of a plan is usually not a lot of tokens. But it can be... let's say you needed a whole chain of LLMs, or let's say you had a situation where LLM one takes lots of context and provides a plan, and then you have different LLMs executing on each step of the plan, but they each need the large amount of context... Right. ...then it can start adding up, and it's like, you know, can we do that without LLMs, or can we do other things? In the design-to-code flow, there's a lot of different steps. And, for us, for some of the steps that can't be done with code, I mentioned we train our own models. It's hard to underscore how drastically faster, cheaper, and more reliable training your own models is than an LLM, if it's a fitting use case. And the fitting use cases might be more outside of the box than you might think. Like, yes, one of our use cases is image detection.
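
A sketch of that planner-executor split: pay for the big model once to produce the plan, then run a cheaper model per step with only the context that step needs. callLLM and the model names are placeholders, not specific SDK calls.

```ts
declare function callLLM(model: string, prompt: string): Promise<string>;

// Plan once with a large (expensive) model, execute each step with a
// cheaper one. Keeping per-step prompts small is what keeps costs down.
async function planAndExecute(task: string, context: string) {
  const plan = await callLLM(
    "big-model", // placeholder for a large frontier model
    `Given this context:\n${context}\n\nBreak this task into numbered steps:\n${task}`
  );

  const results: string[] = [];
  for (const step of plan.split("\n").filter((line) => line.trim())) {
    // Each step gets only the plan line plus prior results, not the full context.
    const result = await callLLM(
      "small-model", // placeholder for a cheaper, faster model
      `Complete this step:\n${step}\n\nPrior results:\n${results.join("\n")}`
    );
    results.push(result);
  }
  return results;
}
```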

Steve Sewell [00:48:07]:
When you have a Figma design, certain things that are, like, a hundred vectors should actually become one image when they get to a web or native app, and the text around it should stay text and UI. And so that is a good, sufficient use case for, like, an object detection model, which uses a convolutional neural net. Very common. It's hot dog, not hot dog. Instead of identifying a dog, it's identifying an image, from training data, which, in our case, you can generate from the web. Scrape web pages, see what the images are, screenshot it, give it the screenshot plus the bounding boxes where the images were, and, ta-da, you've got infinite training data. You can train your models.
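
A sketch of that training-data generation using Puppeteer (a real library; the output shape here is an assumption): screenshot a page and record where its images sit.

```ts
import puppeteer from "puppeteer";

// Generate object-detection training data from the web: a screenshot of
// the page plus bounding boxes for every <img>, labeled "image" for free.
async function captureTrainingExample(url: string, outPath: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });

  const boxes = await page.$$eval("img", (imgs) =>
    imgs.map((img) => {
      const r = img.getBoundingClientRect();
      return { x: r.x, y: r.y, width: r.width, height: r.height };
    })
  );

  await page.screenshot({ path: outPath });
  await browser.close();
  return { screenshot: outPath, boxes }; // one labeled example
}
```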

Steve Sewell [00:48:39]:
But, when it comes to things like decision trees... I think they're way more interesting than people realize. You can break a lot of problems down to decision trees, where you basically provide examples for a very specific problem. In our case, it can be things like: here's a whole mess of layers in Figma; which of these should be considered grouped together, like, you know, into a flex row or column? Things like that. You can actually generate decision trees for that, and maybe a random forest. Or you can start simple, with conditions in code. Upgrade to a decision tree if you need to, upgrade to a random forest if you need to, upgrade to a neural network if you need to, upgrade to an LLM at that point. And you can play the cost knob as much as you want. And there's definitely cases of products I see sometimes online where I'm like, the economics are just not gonna work on that product. I know you're charging 20 a month now, but that's not gonna last. You have to find a model that will.

Dan Shappir [00:49:32]:
So I have to ask. I mean... when did you found Builder.io?

Steve Sewell [00:49:39]:
Oh, goodness. 2019, I think. End of year 2019, something like that.

Dan Shappir [00:49:43]:
Oh, 5 years ago. More

Steve Sewell [00:49:44]:
or less.

Dan Shappir [00:49:44]:
Yeah. More like 4 or 5 years ago. I assume that 4 or 5 years ago, a lot of the stuff that you're talking about today, you didn't know. Correct?

Steve Sewell [00:49:52]:
Yes.

Dan Shappir [00:49:53]:
So a lot of what we're talking about today are things that you learned during the past 4 years, maybe even 1 or 2 years. How did you go about learning all this stuff?

Steve Sewell [00:50:07]:
Great question. I do have an answer.

Charles Max Wood [00:50:09]:
Internet into his brain model? Oh, sorry.

Steve Sewell [00:50:14]:
No. It's more of an agentic approach. I hate that word, by the way. People keep saying agentic. I don't know if that's a real word. I hate it so much. It means agent-like, as in referring to an approach that's like an agent: agentic.

Steve Sewell [00:50:28]:
It just feels like a VC talk term, but whatever. The more agents, the...

Charles Max Wood [00:50:33]:
Put on some shades and use code words.

Dan Shappir [00:50:34]:
I mean, because AI was this academic thing not just a few years ago. And all of a sudden, it feels like you need to be an expert, and otherwise you're potentially left behind. So it's a whole lot of stuff, right, that you need to learn very quickly.

Steve Sewell [00:50:53]:
Well, in my opinion, a lot of people naturally say, I wanna get good with AI, so I need to learn how AI works. I need to get a book on machine learning. I need to take a course on machine learning, like how the neural networks are trained or how transformers work under the hood, etcetera.

Charles Max Wood [00:51:10]:
I've done that. It is hard.

Steve Sewell [00:51:13]:
It's hard.

Charles Max Wood [00:51:14]:
It's so hard.

Steve Sewell [00:51:15]:
In my opinion, it's like trying to learn physics to answer questions about biology or psychology. It's not usually a good idea. It's not gonna give you that high of an ROI. And I'll give you an example. I majored in cognitive science in college, which is, like, the closest thing to an AI-specific major that existed at Berkeley. And I dropped out after 2 years because it was actually horribly impractical. It was cool information, but it wasn't as practical as I wanted. I just wanted to get hands-on

Charles Max Wood [00:51:43]:
Right.

Steve Sewell [00:51:44]:
And build stuff. And still today, in my opinion, the way you'll learn the most about how to build effective AI products with LLMs, etcetera, is to build AI products with LLMs and run into all the issues. I don't think the academics could tell me all the strengths and weaknesses an LLM will or won't have for our type of customer use case if they haven't actually gone through it all. Now, if you look at the research papers coming out, there actually are different categories of research papers. Some are actually useful. Some are literally sort of the analysis of the usage of these things, like the one I mentioned comparing fine-tuning to in-context examples. That's actually useful. But that's not necessarily big-brain stuff.

Steve Sewell [00:52:21]:
Even if they have some weird calculus equations, you could upload the paper to ChatGPT and ask what you want to ask, and it'll tell you what you're looking for, which is nice. But the way I have learned anything is just rapid iteration. That applies to learning AI, that applies to building a company in general, building a product, and that applies to learning how to market. I didn't know how to market anything. We made a ton of mistakes in the past. We still make mistakes now. Selling is the same. I'll take an example: Builder needed its first customer.

Steve Sewell [00:52:52]:
What did I do? I asked around, like, what is sales like? And people were like, I don't know, you go talk to people and see if they wanna buy. So I just started talking to people, seeing what they wanted to buy. And I made a lot of mistakes, and now I could tell you a thousand things not to do and a thousand things to do instead, just from trial and error. And that's where a lot of people, whether they're learning to program, build more complex software, get users for their software, whatever it is, too frequently fail to just say: I'm gonna be bad at this. I'm gonna do it anyway. I'm gonna do it so many times that I'm gonna learn infinite things not to do, which leaves a smaller and smaller window of what to do.

Steve Sewell [00:53:28]:
And once I have that small window of what works and that long knowledge of what doesn't, at that point you could say I'm actually kinda good at that thing. All it takes is doing, and, just like the agents example, it just means lots of feedback. Doing something and getting no feedback, you're not gonna learn anything. But a big learning of mine is, like, hey, we'll be a 100-person company by the end of the year. I've never run a 100-person company. I think the biggest team I managed before this company was 8 people. So it's like, okay, how do I know what to do? What do I do? I get as much feedback as possible. People tell me what's wrong.

Steve Sewell [00:53:59]:
Tell me what's wrong; we'll fix it. You know, that loop works. The product, too. Get the product in people's hands, let them tell you what's wrong, fix it, try stuff. I think that abstracts really well to so many things.

Dan Shappir [00:54:09]:
Do you have dedicated AI people in your company?

Steve Sewell [00:54:14]:
Yes-ish. We're not that large, but we really do have an AI team. Yeah. We do. About 3, 4, 5 people. Some people dip in and out of it. But even then, that team probably wouldn't self-identify as AI people.

Steve Sewell [00:54:29]:
They would just be, like, people who are working on AI here and getting really good at it quickly by working so hard at it.

Charles Max Wood [00:54:35]:
Yeah. I wanna chime in and just back up half a step, and that is, the way that you described how you learned this stuff, what I find is that's the way the people who really know something have learned it. Right? Yeah. And so if you're feeling like, oh, well, I don't know if I want to learn AI because it looks hard, it's just like, look, you know, I mean, I learned web development by hitting my head on the wall a zillion times. Right? Oh, it's broken. Oh, it's broken again.

Charles Max Wood [00:55:05]:
It's broken again. It doesn't look right now, but it works. Okay, now it doesn't work. You know? And you just do it. And so I just wanna encourage people, if you're looking at this and you're going, boy, it sounds like kind of a slog and kind of a journey: that's just the way it is. Right? I will point out that I have a computer engineering degree. And so I got into writing code, and I still banged my head against the wall a whole bunch of times.

Charles Max Wood [00:55:35]:
I just had a little bit deeper foundation than maybe somebody else. But if I wanted to build the castle, all the head start I had was that little bit deeper foundation. I didn't have any of the walls up. I didn't have the moat dug. None of that stuff. Right? And so, as you're looking at how you break it down, I really wanna encourage people to just get into this. I'm working on putting together a boot camp that goes through a lot of this stuff; we'll do it in October. Right? And that's the same thing.

Charles Max Wood [00:56:04]:
Right? You know, I kinda wanna be there so when you run into the wall, you're not stuck trying to figure out how to get off the wall and do the next step. Right? You kinda get there faster, but you're still gonna run into walls even if you have somebody holding your hand. So, yeah, just be aware that that's kind of the way a lot of this goes. Now I kinda wanna pivot. You were talking about getting left behind, right, on some of this stuff, how it moves ahead quickly, and yeah. One of the questions that I get from people is, okay, do I have to learn it? And then the other question I get... because you're talking about something specific. When we talked to Obi Fernandez on Ruby Rogues, he was talking about AI systems that replace services like copywriting and ad optimization and things like that. Right? You know, aspects of running your business.

Charles Max Wood [00:56:59]:
Right? And so programmers would listen to that and not feel super threatened, right, because they're gonna be in a place where they're going, you know, that's not something I do anyway. But you're talking specifically about, hey, I'm gonna take this Figma design, and I'm gonna have working code at the end. Right? And, yes, you've let us know that there are various stages of effectiveness to this. But is it eventually going to get to where, if I'm writing React or Qwik or something else, my job is going to be, you'd better write a really good prompt for the system so that it'll give you the right code? And is it gonna make it harder for people to get in? You know, is it gonna cut salaries because the AI does a bunch of my work? I mean, how vulnerable are we to this stuff?

Steve Sewell [00:57:51]:
Great question. Let me give you an example. So the short answer is I don't think developers have anything to worry about. I think they only have things to be excited about. And I mean that genuinely. One of the

Charles Max Wood [00:58:02]:
I like that. That's a really good way to put it because that's what I keep trying to tell people.

Steve Sewell [00:58:06]:
Yeah. Think about it this way, too. I'll make a statement and give an example. These tools are gonna give you amazing superpowers. Companies want more people with superpowers, not less. Let me give another example, and this is a misleading example; I'll explain why it's misleading and come back to it. Let's say AI only worked for front end tools.

Steve Sewell [00:58:29]:
Would you rather have more superpowered front end developers, because they have these tools that make them superpowered, or more, let's just say, back end developers that didn't have superpowers? I want more of the superpowered people. Yep. Now, realistically, there are different tools for the front end superpowers and the back end superpowers. You're gonna want both. If you're a developer and you're not playing with AI tools to see how they can help your workflow... and I don't mean you have to be exhaustive. I mean, if you wanna start with the basics, use ChatGPT. I personally recommend Claude instead.

Steve Sewell [00:58:56]:
It costs the same, and it does better, especially with code. And if you're not using GitHub Copilot, or whatever the most analogous thing is for your IDE, I would highly suggest you do those things, because most importantly, they build an intuition of what AI is good at and not. When it's auto-suggesting, there are certain things that I just know the AI is gonna do a good job on every time. And I have that intuition through repetition, just seeing it happen in real time with no effort from me. And I know the things it will not do well on, and those build a mental model. It's like training your brain, your neural network, to know what the AI is good at or not, so when you're building products with it, or when you're trying to find productivity gains for yourself, you have that intuition of where it's gonna work or not. I see people every day with the wrong intuition, like, oh, can AI just do everything for me? It's like, have you used AI tools? No.

Steve Sewell [00:59:43]:
No, it's not gonna do that. That's gonna go badly. So if you think about how it affects jobs, let me give you another example. Builder as a tool has always taken the code and components you have and taken over certain work that you have as a developer that you'll generally find tedious. And if you're thinking calmly and dispassionately, it's the tedious work you don't wanna do, like marketing wanting to move buttons around the homepage, or change the color of this to see what happens, then realizing the test failed, so they undo it. It doesn't feel good to be a developer doing that. You don't want to be a middle person in between, say, marketing and your homepage.

Steve Sewell [01:00:18]:
That sucks. Objectively, it sucks. And so with Builder, sometimes before adopting the product, people have a concern like, wait, is this gonna take my job away? Do I have to be worried about this? I would say only 1 to 5% of people have that concern. I've heard it 0% of the time from real customers, and I keep really close to our customers. Nobody's ever adopted a product that makes their development more efficient and actually worried about their job security afterwards. When businesses can do more, they just want more. Their appetite grows faster.

Dan Shappir [01:00:48]:
It kinda reminds me of... I'm trying to remember which stand-up comedian said that if you're worried about illegal immigrants taking your job, then you've got the wrong job. You don't wanna work in those kinds of jobs anyway, is what I'm saying. So what I'm understanding from you is that it's that superpower to do away with a lot of the repetitive, and let's call it less thoughtful, parts of the job. Is that kind of what you're saying?

Steve Sewell [01:01:38]:
Yeah. I can I can I have to check?

Charles Max Wood [01:01:41]:
Yeah. I was just gonna say, you know, I did a couple of job interviews, and I've been using GitHub Copilot for a while. And one thing that it does for me is, right, occasionally it'll try and fill in the whole class in Ruby or the whole component in JavaScript. But most of the time, it's just filling in the part that's sort of an atomic piece of the thing. And what it does for me that kind of makes my development better is that it gives me enough of the pieces that, from my experience, I can look at it and say, that's what I want, or that's close. Right? And so from there, maybe I hit tab and have it drop it in, and then I go and modify it to be what I need. And sometimes it's 75%, sometimes it's 100%, and sometimes it's, no, that's just not what I want.

Charles Max Wood [01:02:33]:
Right? You don't understand what I'm doing. But, yeah, it totally opens those doors. And the other thing that I wanna jump on here: you mentioned superpowered engineers. People are thinking that these companies are just gonna cut their costs down to the bone and just have bare-bones development with AI-generated code. This is generally not how companies work. Generally, they're gonna spend what they spend to move forward with all the things they think they wanna provide to their customers, and so you're just gonna be able to provide more of that.

Steve Sewell [01:03:09]:
Correct. And I recommend people don't underestimate how much, when capacity explodes, ideas and needs and desires explode. And I think it's not just proportionate; it's above proportionate. Suddenly, when you find you can do magic, you wanna do more than magic, and you're excited, because remember, companies live in a competitive environment. So I'll put the CEO hat on for a minute and say, what am I paranoid about with competitors? I'm paranoid about whether we're doing more, faster, than them, which means I want more superpowered AI devs who are more accelerated by the AI tools than my competitors'. We leave them in the dust. I'm not thinking, oh, we can do more, cut people. I'm thinking, we can do more, get ahead of the competition, maximize the resources we have, even hire more, because now the ROI makes more sense.

Steve Sewell [01:03:57]:
I can hire another developer, pay them a developer salary, and get way more ROI for that money than I did before. So it's an obvious investment, and that's kind of what has happened: as we got more productive, we started hiring more developers. And the way I would suggest thinking about it is, here's an example: has anybody ever hand-coded a PDF? Heck no. You go to Chrome and you say export as PDF, and it generates the PDF for you. In my opinion, Figma designs are similar. If you get to the point where you can generate the code and then work with it from there, you're not gonna go back to hand-doing it.

Steve Sewell [01:04:27]:
It's just a waste of time. So the way you could think about your work is, AI won't solve everything. You have a massive multimillion-line code base, you have a very esoteric bug: you're gonna have to look into that, and you're gonna have to actually dive into it. Agents are so far from handling things like that. But when you're producing and getting ideas out, the drafting and iteration phase can move so much faster. You can just suck in this dashboard, and you see it, but you're like, I wanna move this, I wanna reconnect this.

Steve Sewell [01:04:53]:
I'm gonna change the code for this, I'm gonna change the prompt for this. You become like this orchestrator. And I think a lot of engineers might like the idea of management because they like this idea of parallel execution. You know, you have people working under you, so you can be in charge of a project, you have 10 engineers working on it, and you can theoretically do 10 times as much as if you were the only one working on it.

Steve Sewell [01:05:21]:
Cool. But then you realize people are difficult. You don't spend your time solving cool problems; you spend some of it on personnel issues. This person doesn't like that person. This person is not convinced of the road map, and

Dan Shappir [01:05:30]:
Mhmm.

Steve Sewell [01:05:31]:
All this stuff happens, and people realize, wait, I like building stuff. This isn't building stuff. And they get out of management. The best way I could describe using AI effectively is that it's like all the best parts of management. You have these minions, these tools, generating things for you, but without the downsides. They don't have any kind of emotions about anything. You can work

Dan Shappir [01:05:49]:
through them. So far.

Steve Sewell [01:05:51]:
So far, I don't think that's gonna change because, like, why would it? Why would we program that? Or at least go back to the older AI that doesn't have that problem.

Charles Max Wood [01:05:59]:
GitHub Copilot with a personality.

Steve Sewell [01:06:02]:
I'll be back.

Charles Max Wood [01:06:03]:
Well, you know, the other thing is, you're talking about these iteration cycles that it speeds up. And I think one thing a lot of people don't understand is that those iteration cycles happen now. Right? Yeah. And if you've been on a project long enough, what typically happens is you'll get to a point where either you've accrued so much technical debt that it's impossible to work on, or you'll get a couple of pieces in place, and then all of a sudden everything else gets easier. Right?

Dan Shappir [01:06:34]:
Yeah.

Charles Max Wood [01:06:34]:
And so then the sky is the limit. And what you're saying is, a lot of the foundational pieces and a lot of the foundational thinking, the stuff where you mostly just have to grind through until you're done, that goes away. And so then all of a sudden we have these greater capabilities, these greater opportunities. Instead of the grind taking 2 months, the grind takes 2 weeks, because we get that much further ahead, and then we can modify what we got from the AI, or work with the AI for those 2 weeks, to get where we needed to go. And then we can turn around, and that iteration cycle happens faster.

Steve Sewell [01:07:13]:
Exactly. You can also think of it like, you're no longer the flutist in the band. You're the entire orchestra. You're the conductor. Yeah. You can make all of it happen.

Dan Shappir [01:07:20]:
But here's the thing. I know we're running long on time, and we'll probably finish soon, but there's this one question that I really need to get off my chest, as it were. You've been describing, and I totally agree, the fact that we need to learn to use these AI tools, that we need to learn how to best use these AI tools, and that these AI tools will make us more productive. I totally agree with all of that. But there's still the question of how knowledgeable I need to be about AI. Because there's a difference between being able to, I don't know, drive a car, and being the person who can pop the hood and start fiddling with the stuff in there. So how knowledgeable do I need to be about AI in order to be effective as a developer, or to be desirable as a developer, in the upcoming 1, 2, 5, 10 years, do you think?

Steve Sewell [01:08:31]:
Yeah. It's a great question. A couple quick thoughts on this. The high-level one is, I personally think people just need to know how to use AI to be more productive in their own work. In doing that, you'll actually learn a lot about the underlying pieces, so to speak, and how the technology is suitable for certain use cases versus not. A simple example: you might be writing docs in markdown, and you see how well it autocompletes certain docs. Well, you're learning how Copilot works well or not, or how ChatGPT works well or not.

Steve Sewell [01:09:08]:
And now suddenly, say, you work on some type of writing product, a Google Docs kind of thing, and the company wants you to build some type of AI feature. Well, you already have some baseline knowledge and intuition built around this. So the 2 categories probably are: 1, using AI tools to be a more effective developer. You should always try to be more effective, because if you're not, your peers are. And in a certain sense, you don't want to be way behind the train, where everybody's at a whole new level and you just hopped on. You wanna be using the best tools, with the best ability, to be as good as your peers, or one of the best of your peers.

Steve Sewell [01:09:44]:
On the other side, there's building AI products, and that is shockingly similar. And if you think in 5 years every product will be an AI product... I think that's extreme thinking, but you could think of it differently: more products will have more AI features. And if my company might want me to work on AI, it's nice to know how to work on it. Again, I don't think the way to do that is to go read a bunch of white papers, build your own neural network from scratch, or understand how a transformer works under the hood. Again, I think that's like trying to solve psychology problems with physics. Knowledge of F equals ma is not gonna get you that far.

Steve Sewell [01:10:15]:
Solve psychology problems by learning about psychology and practicing psychology in some form, studying it directly and learning that way. And the beautiful thing about AI is you can solve your own problems. You know, when we talk about the types of work devs don't wanna do, you have this really exciting opportunity. We talked about code debt: could you make an AI workflow to help you with managing code debt, refactoring, cleaning up code, etcetera? That's a problem that you, right now, as an engineer, might say, I hate dealing with this. I can play around with AI to try and solve my own problem. Engineers love over-automating things.

Steve Sewell [01:10:49]:
You know, let's spend a week automating something that could have taken a few hours of grunt work. Everybody's guilty of it. If you're any good, this is probably how you think.
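As a concrete version of that "solve your own code debt" side project, a minimal sketch might look like this. It assumes an OpenAI-style chat-completions endpoint; the model id and prompt wording are placeholders, and any provider (or a local model) would work the same way:

```typescript
// Send one file to an OpenAI-style chat endpoint and ask for refactoring
// suggestions. Requires Node 18+ (global fetch) and an API key in the env.
import { readFile } from "node:fs/promises";

async function suggestRefactors(path: string): Promise<string> {
  const source = await readFile(path, "utf8");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed model id; use whatever your provider offers
      messages: [
        { role: "system", content: "You are a careful refactoring assistant." },
        { role: "user", content: `Suggest specific refactors for:\n\n${source}` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the suggestions as plain text
}
```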

Dan Shappir [01:10:58]:
And you know what? You could probably use an AI tool to help you build your AI tool.

Steve Sewell [01:11:03]:
Exactly. Make your own projects. GitHub Copilot, MicroAgent, ChatGPT, whatever can help you build it, and that's a good way to learn this stuff too. So I always think side projects are awesome. You can learn in your day-to-day using AI tools, and try new ones. And again, I don't think you have to be exhaustive. Everybody keeps saying Cursor is great. I don't care that much.

Steve Sewell [01:11:23]:
I looked at the features again today. It's the same features as GitHub Copilot, with some nuances. So, no, I don't think you need to waste tons of time adopting every tool; there is a learning curve. Just adopt the basics, and use your own time to do side projects sometimes, to use AI to solve your own problems. I think that'll make you equipped to be a very future-proof, very capable developer 1 year, 5 years, 10 years from now.

Dan Shappir [01:11:44]:
Very interesting in this context, I think, and we'll see how far it goes, but Google, for example, is actually building their own micro model into Chrome itself. That will give people an interesting opportunity to play with the technology.

Steve Sewell [01:12:03]:
It's exciting. It's fast too, and that opens up new worlds. This is also why you should get your hands on this stuff and try to solve real problems: you start realizing why these things matter. For us, one of the most common things that comes up is a giant enterprise bank that wants to use our AI features but can't send their code over the wire to a back end. So that's when you start exploring local models. Like, let's make the AI work with Ollama and run it all locally on your machine. Honestly, if we weren't building AI products, I wouldn't know why people care that much about that. Sure.

Steve Sewell [01:12:31]:
Cost is a thing too. And so we start exploring it, and we tend to find that the local models aren't powerful enough yet to solve these use cases without compromises.

Dan Shappir [01:12:39]:
So they can be just too big to download.

Steve Sewell [01:12:44]:
Exactly. That's what I found. The models you need to be effective with our products right now are too big to download and too big to run on a local computer. The running is the hard part; you just run out of RAM. But again, when you're immersed in the ecosystem, the other stuff that's happening makes more sense, like why people care about this even though it doesn't work yet. Window.ai will not replace ChatGPT.

Steve Sewell [01:13:05]:
It's gonna be a much smaller model, and those are a lot dumber. But for certain use cases, that's good enough, and getting your hands on it is the best way to know those types of things.
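For anyone who wants to try the local-model route Steve describes, here is a minimal sketch against Ollama's documented /api/generate endpoint. The model name is whatever you have pulled locally; Chrome's experimental window.ai surface was similar in spirit but still in flux at recording time, so it's left out:

```typescript
// Query a local model through Ollama's HTTP API. Nothing leaves the machine,
// which is the whole point for the enterprise use case described above.
// Assumes Ollama is running locally and a model has been pulled
// (e.g. `ollama pull llama3`).
async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumed model name; use whatever you've pulled
      prompt,
      stream: false,   // single JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // the generated text
}
```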

Charles Max Wood [01:13:13]:
Yep. Alright. Good deal. Well, I'm gonna kinda wrap this up. I recommend you go check out the article that Steve wrote. There's also a video I saw that has the same name. I don't know if you put that out or if it was somebody else.

Steve Sewell [01:13:30]:
It's the same thing. What's in the blog is what's covered in the video, and vice versa.

Charles Max Wood [01:13:35]:
Yeah. I highly recommend that. I mentioned the Obi Fernandez episode on Ruby Rogues. He actually wrote a book on how to use models like this. I think some of the code samples are in Ruby, but for the most part it's kind of language agnostic. It's just, hey, if you understand these APIs and their capabilities, then here's what you can build. And, yeah, I think this really is going to continue to change the way we're working, and so the more you can get into it ahead of the curve, the better off you're gonna be.

Charles Max Wood [01:14:11]:
So thanks for coming, Steve. Thanks for having me.

Steve Sewell [01:14:15]:
This is a ton

Charles Max Wood [01:14:15]:
of fun. We're gonna do our picks, and then we're gonna wrap up. Dan, do you have some picks for us?

Dan Shappir [01:14:21]:
Yeah. It's not exactly a pick. So I've not been speaking about the current situation in Israel and in Gaza in recent months, maybe because it was just too painful to keep thinking about all the time. But this week, actually, a person is coming to our company to talk about the fact that his son is kidnapped in Gaza. People tend to forget that there are still 120 Israelis kidnapped in Gaza, including 1 child and 1 baby. And it's a situation that I had hoped would have long been resolved and still hasn't been, and hopefully it will be. What can I say? It's gonna be very, very difficult to even look this person in the eye. Let's put it this way.

Dan Shappir [01:15:12]:
It's just such a sad situation. Anyway, sorry for bringing in such a bummer, but that's the only thing that I really had that I wanted to mention. So, yeah, hopefully you can pick up the mood.

Charles Max Wood [01:15:31]:
Yeah. Hopefully, I can. But, yeah, it is sad, and I think sometimes we lose track of these things after a certain amount of time, when they're not in the forefront of what we're listening to or watching. So, yeah, keep in mind that these are people that we should be thinking about, praying for, and looking for solutions for, and encourage our leaders, whether it's Congress or the president or whoever, to help figure some of this out. Yeah, we've had some stuff going on in this country too, but I'm not gonna go into it. You know? We

Dan Shappir [01:16:13]:
spoke a little bit about it before the show.

Charles Max Wood [01:16:15]:
Yeah. But be kind to your neighbors. Right? I think a lot of this just comes down to the way we demonize each other, and we don't need to do that. I'm gonna jump in and do a game pick. I'm gonna pick a game that I've picked in the past; I just didn't get together with my buddies this week to play board games. My nephew's here from Illinois, and his parents are doing job interviews in Wyoming, so they dropped him off here in Utah instead of dragging him around Wyoming and having him be bored with them. He can play with my kids.

Charles Max Wood [01:16:57]:
I say play with my kids like he's 5. He's 17. But the game I'm gonna pick is Mysterium. I always do a board game pick, Steve, at the beginning of my picks. Mysterium is a game where you have one person giving clues, and then you have other people trying to figure out what the clues mean. All the players but one are psychics, and the person giving the clues is a ghost. There have been murders in this house, so the ghost hands cards to the psychics, and the psychics try to determine who the person, the place, or the murder weapon is. And there are expansions where, instead of a murder weapon, you do a motive.

Charles Max Wood [01:17:47]:
But it's a pretty fun game. Takes an hour or so to play. Sometimes it's really hard, because you've got all kinds of interesting things on this card; it's just this picture. And so you go through the rounds: you pick the person first, you guess the place next, and then the object. And if all the psychics solve their murders, then the ghost has one final round where they give 3 cards. Depending on how well you did at guessing whether the other psychics were correct, and how early you got yours done, you get to see 1, 2, or 3 of those cards. Those are hints toward one of the murders that the psychics solved, and that's the ghost's murder, the murder of the person who's been giving the clues.

Charles Max Wood [01:18:42]:
And it's kind of a majority-wins thing on that one. It has a BoardGameGeek weight of 1.9. I keep telling people that the average friendly board game that's approachable for people who don't play board games is about a 2, and so this is right in that area. It's kind of a fun social game, and I really enjoy it. The kids were playing it; they helped my 8-year-old play. I don't know if an 8-year-old could pick up on all the nuances without help, but they just helped her play.

Charles Max Wood [01:19:22]:
Anyway, fun game. Like I said, there are expansions for it. It came out in 2015. So I'm gonna pick Mysterium. I was talking to Dan before Steve got on; he asked me how I was, and I was, like, tired, because I had just gone for a run. I'm training for another marathon. So I'm gonna do a couple of shout-outs on that.

Charles Max Wood [01:19:49]:
I have been doing the trainings off of a Training Peaks plan that I bought. So Training Peaks is free, and you can buy training plans that you stick on your calendar; those are just one-time costs. I think I bought this one for 20 or 30 dollars. You just tell it when your race is. Mine's a 16-week program, so I had to figure out when it started and stick it in, and it puts the workouts on the Garmin app on my phone, which syncs to my watch. I have a Garmin Forerunner 235, which is nowhere near the newest model, but it works great. So, anyway, that's what I'm doing for running.

Charles Max Wood [01:20:29]:
And then finally, this week I'm working on getting aiforjavascript.com up. I'm also doing aiforruby.com. You can go and get on the email list, and I'm gonna be emailing out, hey, this is what I'm doing with AI this week, these are the APIs I used. I plan on doing all my examples in both languages, so you'll get the JavaScript ones on the JavaScript list and the Ruby ones on the Ruby list. And I'm also looking at putting together a summit at the end of August or beginning of September. We'll probably have people like Steve.

Charles Max Wood [01:21:04]:
I've been talking to a bunch of the other folks in the Ruby community that I know are doing AI. And, really, I want to be hitting it at this level, right, where it's not, hey, here's how you math your way into models that work. It's, hey, you've got a model that works, or maybe here's how you modify a model that you have to train it a little bit. I want it to be approachable for people who want to add AI features to applications, not how to solve whatever thing by managing a data lake, feeding it into a system that generates the model, and then testing the model and all that stuff. We'll get into some of that at a high level, but mostly we're gonna be talking about, hey...

Charles Max Wood [01:21:48]:
Here are the APIs for Claude or GPT-4 or Midjourney or whatever. You know, I do podcasts, so Whisper. Here's how you use these to get what you need, and then here's maybe how you tie a few of them together to get a more complicated result than you can get from any one of them. So that's what I'm looking at. Anyway, I'll be emailing people on the email list. Right now, I'm just finalizing my system, because I've had to rework bits of the email stuff that I've been doing. So, if you're interested, go to aiforjavascript.com.

Charles Max Wood [01:22:29]:
I'm also gonna be doing weekly calls for JavaScript Geniuses. You can find that at javascriptgeniuses.com, and that's gonna be sort of a back and forth: ask questions, get answers. If you have feedback or ideas for people, you can help out and help people get where they wanna go as well. It's kind of a blend of a mastermind and group coaching. We'll be doing calls for that every week, and you'll be getting some AI stuff in there too. So, anyway, those are my picks and my self-promos. Steve, what are your picks?

Steve Sewell [01:23:04]:
Yeah. I think probably just the obvious ones. Check out Builder, check out Visual Copilot, and, I mentioned it briefly, but MicroAgent is a pretty interesting CLI tool. If you want to categorize AI tools into what's commonly used today, you know, ChatGPT and GitHub Copilot, and then where things are going, agents... I do very much believe in agents in the medium to long term. I don't see many practical use cases for them today, but MicroAgent is the one that I've used. I'm biased here, obviously, but it kinda solves some of the fundamental challenges of agents. I've got a blog post talking about this in more detail, and I think that I

Charles Max Wood [01:23:45]:
just posted the link for the MicroAgent blog post.

Steve Sewell [01:23:50]:
Like many things, I believe in technology growing in layers, incrementally, as opposed to, you know, the v1 being supposed to solve everything for everyone, like a Devin or whatever, a super AI agent that does everything. And so what's interesting about MicroAgent, and why I suggest people try it out, is that it's, I think, a new technique and can lead to a lot of interesting things. So people trying it and giving feedback is really interesting, because I think that's how we solve these problems over time: figure out how to make agents work really well for something. And I think this tool works really well for a certain type of problem, and I have ideas and experiences that suggest what it's really good at. But I'd love to get more feedback from people on what they found success and not success with, and just more input: GitHub issues, pull requests. People have added some cool new features to it. So I think it's an interesting area, both a practical tool and an avenue for research and development in AI and AI agents that you could be part of, which I would value. I want the project to be as community driven as possible.

Steve Sewell [01:24:49]:
So check it out, give feedback, create issues, send pull requests for improvements or fixes or whatever, and let me know what you think.
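For context, the loop behind a test-driven code agent like MicroAgent can be sketched roughly as below. This is a conceptual illustration, not MicroAgent's actual implementation; `generateCode` stands in for any LLM call, and the test command is whatever you'd normally run:

```typescript
// Conceptual test-driven agent loop: generate code, run the tests, feed the
// concrete failure back in, repeat. The narrow, checkable feedback is what
// makes small agents reliable where broad "do everything" agents aren't.
import { execSync } from "node:child_process";
import { writeFile } from "node:fs/promises";

type Generate = (task: string, lastFailure?: string) => Promise<string>;

async function microLoop(
  generateCode: Generate, // stand-in for your LLM call of choice
  task: string,
  file: string,
  testCmd: string,        // e.g. "npm test"
  maxIterations = 10
): Promise<boolean> {
  let failure: string | undefined;
  for (let i = 0; i < maxIterations; i++) {
    await writeFile(file, await generateCode(task, failure));
    try {
      execSync(testCmd, { stdio: "pipe" }); // throws on non-zero exit
      return true; // tests pass, we're done
    } catch (err: any) {
      failure = String(err.stdout ?? err.message); // feed the failure back
    }
  }
  return false; // give up after maxIterations attempts
}
```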

Charles Max Wood [01:24:57]:
Awesome. If people wanna find you on the Internet, where do they find you, Steve?

Steve Sewell [01:25:01]:
I am Steve8708, at Steve8708, on Twitter, YouTube, TikTok, LinkedIn. I don't know, all the places, but those are the ones I'm probably most commonly active on.

Charles Max Wood [01:25:13]:
Awesome. Well, thanks again for coming. This has been awesome.

Steve Sewell [01:25:16]:
Thanks for having me. This is really fun.

Charles Max Wood [01:25:18]:
Alright, folks. We're gonna wrap

Steve Sewell [01:25:19]:
it here.

Charles Max Wood [01:25:20]:
Till next time. Max out.