AJ_O’NEAL: Well, hello, hello, and welcome to the newest episode of JavaScript Jabber. Today on our show, we have with us the illustrious, magnificent, flamborius Kyle Simpson.
KYLE_SIMPSON: That was an amazing intro. I've never been described that way.
AJ_O’NEAL: That's because I didn't use real words, but they almost sound awesome anyway. We also have with us Aimee Knight, who's gonna be popping on and off.
AIMEE_KNIGHT: Hey hey from Nashville, with the broken water heater.
AJ_O’NEAL: And Dan Shappir.
DAN_SHAPPIR: Hey from Tel Aviv where we have awesome weather. You do need to be jealous. It's like 80 degrees and it's just wonderful.
AJ_O’NEAL: I'm your host, AJ O'Neil, and I'm, yo, yo, yo, coming at you live right now. Let's get into it.
Leveling up is important. I spend at least an hour every day learning ways I can improve my business, or take a break and listen to a good book. If you're looking to level up, I recommend you start out with The 12 Week Year as a system to plan out where you wanna end up and how to get the results you want. You can get it free by going to audibletrial.com/code. That's audibletrial.com/code.
AJ_O’NEAL: So Kyle, we hear that you know of a thing called Tabnine.
KYLE_SIMPSON: Yes, I am glad to be on here. It's been a while since the last time I joined, and I'm glad to be back. So thanks for having me. I recently partnered with a company called Codota, who also makes Tabnine. So if you hear Codota or Tabnine, they're basically mostly interchangeable. And I am lead of developer empowerment for them. So I wanted to come on and chat about the idea that tooling is getting smarter and smarter and helping us developers do our jobs better. What Codota is, is a plugin, either the Codota or the Tabnine plugin, depending upon which editor you have, but it plugs into your favorite code editor. And it uses a machine-learning model to make semantic suggestions in your autocomplete box. So rather than just completing based upon a token or based upon a TypeScript definition, the Tabnine and Codota plugin is trying to understand what you are trying to do and make a more intelligent suggestion based upon what it understands. And their goal is to improve developer productivity. They actually have a specific target where they wanna double developer productivity through the usage of this tool. So I have joined to help them get the word out, but also to help find out from real developers what this tool needs to do and how it needs to improve. And in particular, they haven't done a lot in the web and JavaScript space. So they came to me asking me to spearhead their efforts to expand into the web, JavaScript, and CSS space. And that's why I'm here today. So thanks for having me.
DAN_SHAPPIR: So does that mean we don't need TypeScript anymore?
KYLE_SIMPSON: That's a really good question, and I'm glad you asked it. No, that's not what that means at all. So TypeScript is going to answer a certain set of questions that might be asked about your program. For example, if I type an object name and I type dot, and it knows specifically what type signature that value or variable has, then it knows what properties it can suggest for me to complete. So it knows something very type-aware about the operation that I'm doing at that exact moment. That continues to be an important set of tools for those who write TypeScript, where TypeScript is making those kinds of suggestions. Bigger than that, though, the goal of a tool that is using machine learning to understand the semantics of what you're doing is to understand the bigger picture: at least the line of code that you're writing, but perhaps even the several lines of code that you're writing. It wants to be able to understand, for example, that you're about to loop over an array, just to take an old-school example. It wants to see that you're about to loop over an array based upon what you've typed before and what you're starting to type, and then make an intelligent suggestion to complete that. And it could either be completing the whole for loop for you based upon what it thinks you're about to do, or it could be suggesting an alternate way of doing that that's even better. For example, if it thinks that you're going to map over something, it might suggest a map call instead of the for loop. So it's trying to understand what you're doing and make a suggestion based upon that understanding so that, generally speaking, you shouldn't have to type the rest of that line of code. It should know what you're trying to do. Now, obviously, it's machine learning.
It's non-deterministic, so it's not ever going to be perfect. But the way that it works is it understands the semantics, sort of like a template, and it's then going to pull in the local variable names that it sees around that part of the code, say, immediately before it. If it sees variable names that it thinks you're going to be looping over, like in my running example, then it will actually include those. So you can think about the semantic meaning as like a template, and then it pulls in what identifiers it finds in that local scope. And so sometimes it's, like, creepy good. It's like, wow, how did it know that I was trying to do that? And sometimes it's totally wrong. And a lot of times it's kind of in between: it partially finishes out the line, or it finishes out the line mostly, but then you need to make a quick little edit to it, that sort of thing. But that's the goal, to understand a bigger picture. So TypeScript would continue to be an important part of that. For example, the TypeScript definitions could inform what property name it ought to be including in its template. And depending on the editor you're using, they do actually use, literally, the TypeScript language server. So they, at least in theory, could be incorporating type information into the suggestion-making. But that would be a smaller piece of what they're attempting to do. But at the semantic level, hopefully that helps answer the question.
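To make the running example concrete, here's a sketch of the two kinds of completions Kyle describes. This is purely illustrative (the variable names and the exact suggestions are invented, not actual Codota/Tabnine output):

```javascript
// Illustrative sketch only -- not actual Codota/Tabnine output.
// Suppose this local variable is in scope just above your cursor:
const names = ['alice', 'bob', 'carol'];

// The tool might complete a conventional for-loop template,
// pulling in the identifier `names` that it found nearby:
const upperLoop = [];
for (let i = 0; i < names.length; i++) {
  upperLoop.push(names[i].toUpperCase());
}

// ...or it might instead suggest the more idiomatic equivalent:
const upperMapped = names.map((name) => name.toUpperCase());
```

Both forms produce the same result; the point is that a semantic tool can recognize the intent (transform every element) and offer either the template it saw you start, or a better alternative.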
AJ_O’NEAL: So has anybody else used it? Yeah, I'm sure you're using it.
KYLE_SIMPSON: I am definitely using it.
DAN_SHAPPIR: I'm using it and I'm enjoying it.
AIMEE_KNIGHT: I'm going to start using it.
DAN_SHAPPIR: I know, AJ, that you're using it, and you actually created a video, which I guess we'll link to, which has actually become sort of an ad for the company, I guess.
AJ_O’NEAL: Well, yeah. There was actually something that said it was a promoted video, so someone is using it as an ad, and I assume that it's Tabnine. But it's only 30 seconds. It was the first day I used it, and I saw something in their little preview video that looked interesting to me. So basically, think of it like this: imagine a human could look over your shoulder and guess what you're about to do next with, like, a hundred percent certainty, like you just edited one line that had a variable name. Well, I'll give you the exact example that was in the video. In Golang, you can have tags for a variable that are basically like semantic comments. They influence how the names in Go are reflected in a SQL table, or how they're reflected in JSON, or how they're reflected in other formats. So you put in these tags, and I had all the JSON tags and I started adding SQL tags, and I just did one or two of them. And then I just moved my cursor down and hit tab: down, tab, down, tab, down, tab. And it saw what I did and knew that I was going to do it for the next 10 lines, and just made it so that all I had to do was hit down and tab to do it. And then I find with other things too, like if you've written a similar loop somewhere else in your code recently and then you're modifying an array. It uses the Go language server or the TypeScript language server, whatever, so it has some sort of information about what is underneath the cursor. And it retains that even when your code isn't currently fit to compile. So if you just type gobbledygook on the keyboard with, you know, semicolons and brackets without any regard for what you're actually typing, it just kind of ignores the stuff that's garbage. Or it seems like it doesn't get saved.
Like, if your code isn't in a state where it can be parsed, it seems like it doesn't save those things into its memory. But if your code is in a state where it does get parsed, then it will. And so even if your code is not parsable, you can still type stuff out and it'll do the auto-complete, which is an advantage that it has over the others, because the ones that require your code to be parsable will only give you suggestions when your code is in a parsable state. So it's nice to be able to do that. And it's written in Rust, so it's really fast. A lot of these things that I tried in the past, I didn't stick with them simply because they were slow. And it being written in Rust, and it only kind of checking in with the language server rather than running every single thing through the language server, makes it very snappy. So, I mean, I would say if I'm coding for two hours, I might actually only use its completion a handful of times, but when it does, it just feels nice. You know, it's cute, and it saves you a little bit of typing out the boilerplate, boring stuff. And then, you know, like Kyle said, you go back and you just edit something if you need to. But I mean, still, disclaimer: I don't think that it's absolute magic that's going to revolutionize the world in the stage that it's in right now. I think that it's convenient, it's worth trying out, and you probably could adjust the way that you code to match the way the editor makes predictions. Like with me, I'm bouncing back and forth all over the place. A lot of times I'll write the end of a line and then I'll write the beginning of the line, just based on stream of consciousness. And I think it doesn't help in that case, because it's not, like, backwards-predicting or deleting things. It's only additive.
KYLE_SIMPSON: Yeah, yeah.
AJ_O’NEAL: Okay, like, yeah, I mean, I could see how I could do that. And I'm also in Vim, so there are some limitations there that I think VS Code might handle a little better, because with Vim it really seems to only work on one line at a time, whereas I saw in the previews that in VS Code and some of the other editors, it does fill out, like, the whole for loop all at once.
KYLE_SIMPSON: You touched on a number of things there that I wanted to dig into and provide a little bit more behind-the-scenes on. And disclaimer: I've only been partnered with the company now for a couple of months, so I'm still in my firehose phase of learning how it all works under the covers. I'm by no means even remotely well-versed in machine learning and all the deeper things of that. The smart folks there know all of the ins and outs of it, but I'm getting kind of a layman's understanding of how the system works. So some of you may have heard, a few months back, what made a big splash in the media was this new AI natural language model that came out called GPT-3. And it was demoed in the form of a tool where you could type a couple of characters and it basically started generating your whole program for you. And everybody's like, oh my God, you know, that's the future of software development. So GPT-3 is a licensed model that's based on human language, and it has some 175 billion parameters put into it or whatever. The predecessor to GPT-3, GPT-2, had been out for a number of years, and that was also trained on natural human language, using a far smaller set of parameters, but still a pretty solid model, and that one was open source, as far as I understand. And so what Codota and Tabnine did is they took GPT-2 as the base model, and they then further trained that model against essentially as much of the open source code as they could get out in the world, like, you know, from GitHub or whatever, to become more programming-language aware in addition to human-language aware. And that is the starting-point model: when you just download Tabnine or Codota and fire it up in your editor, that's the default model that it has. It's based off natural human language and then further trained against essentially open source code.
So once it finds a project in your code base, which, as far as I can understand, it basically just attaches to the nearest git directory. So wherever your git directory is, that's what it's going to treat as a project, generally. What it starts to do then is ingest bits and pieces of your code from that project. It does not do all of it all at once; it does that as you go along, as you open up more files and use more files. But it is ingesting the pieces of code that you're accessing in your project. And so it is actually creating a local layer model on top of the inherited model that came in from the open world, and that local model is per project. So each time you switch to a new project, you're going to have its own context of that local learning. And all that local learning is entirely local to your machine. By default, none of it is going off of your machine anywhere. That does mean that it's actually running a machine learning algorithm on your system, which is quite resource-intensive. So if you're running on battery power, you might want to select the option to turn off the learning model while you're on battery power; it is going to churn a bit of your CPU and your RAM and your battery power. And so they have a pro license that you can purchase. I think it's $15 a month or something like that. What that does is it actually shifts the machine learning off of your local machine onto a set of GPU-accelerated cloud servers that they run. And so what it's doing is taking even smaller bits of your code, not your whole code base, but smaller bits of your code, kind of anonymizing it, sending it out to these cloud servers, getting sets of suggestions back, and then popping those up to you. And that sounds like it would be slower, but it's actually even faster to do it that way than to do it on your local machine. It's already pretty fast, as AJ said, but it's even faster if you want to do that.
So assuming that you're OK with bits and pieces of anonymized code being sent out to the cloud, they have a very strong privacy policy about how they're only in memory and then discarded, so they're not storing any of that, but they're using those GPU-accelerated cloud servers so that it's not zapping up all of your system resources. The other layer to this is that in addition to the machine learning model, there's a set of rules. And these rules can be configured per language, and eventually even per project. But these rules kind of layer on top of the semantics that the machine learning is trying to figure out and can piece things together. So if it figures out two different kinds of ideas about what it thinks is happening, a rule might be able to come in and say, these two things mean this. There's an example that their CEO has put out, which was in the Java language instead of JavaScript, but basically they were making the equivalent of an Ajax call. There are three or four lines of code that you have to write to do it, and whichever line of code you write, it understands all of the other lines that need to be there, even the lines that might have needed to show up before it. So you basically start writing and it kind of fills in the rest of the template with its suggestion. That's the eventual goal here: to understand more than just the next few tokens or the rest of the line. It wants to understand the whole chunk of code that you're working on. So there are rules on top of the machine learning, and it's learning what you're doing in your project. And the last thing I wanted to say, because I know others have raised their hands with questions: AJ mentioned kind of learning to use the tool. This is actually one of my favorite parts about it. I didn't really expect this, but the tool has started to adjust things a little bit.
It has started to adjust my coding activities, my processes in coding, because I'm usually typing a lot quicker than tools can pop stuff up. Typically, autocompletes are in my way, and I'm like, just leave me alone. But I've started to get to the point where I realize, I know that there's this long line of code that I'm about to write, and it's following a pattern of the previous lines of code; this is probably something that the tool is going to be able to suggest some or all of. And so I will intentionally slow myself down a little bit, wait that extra brief moment for it to pop up the suggestion, and then pick the closest one and then edit. And so I've started to adjust my coding habits and styles to take better advantage of the tool, even to the extent that you can actually write a code comment, not unlike what the GPT-3 demos that came out were doing. You can write code comments that can help those semantics. So I sometimes will write a code comment if I know that it might help what suggestion it's about to give me. So it's interesting that the tool is learning from you and you're learning from the tool. It's an interesting symbiotic relationship.
AJ_O’NEAL: What would a comment that is helpful to the tool look like?
KYLE_SIMPSON: This is not a real example, and obviously it's not deterministic, but imagine, back to my example of writing a for loop: it's not necessarily super obvious from the first line of the for loop that what you're going to do is map over a set of items and convert them. But you might write a code comment that says, map over the array and, you know, uppercase all the strings. You might write a code comment like that, and then as you start to write the for loop, in the future it might be able to say, oh, I think you're doing a map. And actually, the suggestion that it makes is not completing the for loop, but replacing the for loop with the better suggestion, which would have been a map call, for example. That's the goal here: it should be able to have that kind of intelligence.
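A hedged sketch of what that comment-as-hint pattern could look like in JavaScript. The comment text, variable names, and suggested completion are all invented for illustration; Kyle is clear this is not a deterministic, documented behavior:

```javascript
// Hypothetical illustration -- the comment below is the kind of semantic
// hint Kyle describes, not a documented feature with guaranteed behavior.

// map over the array and uppercase all the strings
const labels = ['draft', 'review', 'done'];

// Having seen that comment, the tool could suggest completing the next
// statement directly as a map call, instead of a for loop:
const upperLabels = labels.map((label) => label.toUpperCase());
```

The idea is that the natural-language comment narrows the space of likely completions before you've typed any of the loop itself.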
AJ_O’NEAL: So the comment in this case would be, like, it's machine learning on the comments. And if it sees you do something in one place based on a comment style you use, then the next time you use that comment, it'll start preloading code that's similar to other places you've used that comment, pulling it in as a template.
KYLE_SIMPSON: Yeah, it's got that local model on a per-project basis. So actually, the first time you use it out of the box, it doesn't have a lot of very intelligent suggestions. But after you've used it for a couple of days, or even a week or two, and you've opened up a lot of files in the project and it's ingested more, you'll start to see that, even if you did that for loop in a different file, it's able to correlate and say, oh, it looks like you're probably doing this. It's really good, actually, very quickly, at recognizing patterns in adjacent lines of code. So the previous three or four lines of code might suggest a pattern, like what you were saying where you're adding those SQL tags; it seems to be very good at that. But it's even able to start figuring out those patterns beyond just the initial couple of lines before, even outside of the file. It just doesn't do that right out of the box; it has to learn that over time. But yeah, that local project model will adjust to what you do. So it sees, every time you do a for loop, you write, you know, something kind of like this, so it'll start to suggest for loops like that, even if that's not at all code that it ever saw out in the open source world.
Have you ever wondered if you could be offering a faster, less buggy experience for your customers? I mean, let's face it, the only way you're going to know that is by actually running it on production. So go figure it out, right? You run it on production, but you need something plugged in so that you can find out where those issues are, where it's slowing down, where it's having bugs. You just need something like that there. And Raygun is awesome at this. They just added performance monitoring, which is really slick, and it works like a breeze. I just love it. It's like you get the ray gun and you zap the bugs. Anyway, definitely go check it out. It's going to save you a ton of time, a ton of money, a ton of sanity. I mean, let's face it, grepping through logs is no fun, and having people not able to tell you that it's too slow because they got sidetracked onto Twitter is also not fun. So go check out Raygun. They are definitely going to help you out. There are thousands of customer-focused software companies who use Raygun every day to deliver great experiences for their customers. And if you go to Raygun and use our link, you can get a 14-day free trial. So you can go check that out at javascriptjabber.com/raygun.
AJ_O’NEAL: So here's the question that a lot of people ask, like me, when I talk about this or show this to them: how do I know I'm not just training the robots that are gonna replace me? How do I know this is really for my benefit and I'm not gonna be enslaved by Tabnine?
KYLE_SIMPSON: I'm super glad you asked that. So when Codota approached me, I was, quite honestly, pretty skeptical. There are other tools like this; there are some other competitors out there. I have talked to some of them, I have seen some of them, and I very quickly found in those other conversations or research or observations that there does not seem to be a very clear delineation between whether the tool is augmenting the developer or whether the tool is replacing the developer. And that's a really important thing for me, because I don't want to participate in a future where developers become less important. I wanna participate in a future where developers are more important. I think more important than ever is the humanness of us as developers, where we can apply our empathy to better communicate with our code. And a computer, I don't think, will ever be able to do that as effectively as we as humans can. And so what I want to do is actually participate in getting us to the point where tools are letting us focus on that stuff, and not on as much of the minutia that takes our attention away and distracts us and makes it hard for us to communicate in our code. And so I pretty rigorously vetted them. I basically started with the assumption that they were full of it, that they were really just trying to, you know, zap up VC dollars and replace a bunch of developers. And I said, I don't want a future where the tool is doing the work. And I got two interesting responses from them. First of all, they said, our mission is to augment developers, not replace developers. So if we can believe at all a company's corporate mission statement, then we can believe that they were founded from the perspective of wanting to augment rather than replace. Secondarily, they dove into more of the technical details of why machine learning won't actually be able to replace developers, at least not in any of our lifetimes.
And they assured me that they don't think that the technology will go in that direction, and they're not trying to take the technology in that direction. That's not to say that other companies and other bad actors won't try to do so. But after a pretty extensive set of sessions with them, I was very satisfied that what they want to do is legitimately help make developers better, rather than take away all of the junior developer jobs because the computer can just do that for them.
DAN_SHAPPIR: That's a pretty extreme statement, to say that you don't think that computers or machine learning or any other form of artificial intelligence will be able to take away software development jobs in our lifetime. I mean, look, my take on it is that it might eventually, but by that point, it will probably take over a lot of other jobs. So it's not as if we're at a disadvantage compared to other professions. But that blanket statement, I don't know. In maybe 20 years, you know, it might be able to do some of our jobs for us, at least the boring parts.
KYLE_SIMPSON: So I will take a stand on it, and I'm fine that other people disagree with me, and we can't do anything other than just wait around and run out the clock to see how it plays out, but I do not believe that it will actually be successful as a strategy. That's not to say that there aren't people that want to do so. I know for a fact there are billions of VC dollars going towards the question of, can we completely eliminate these pesky developers who are cranky and difficult and hard to deal with? They want to get rid of us as humans. I am a hundred percent sure that there are people that are trying to do that. I happen to think that it won't actually be successful, and part of the reason why I think that it won't be successful comes from my core philosophy that the whole purpose of code is to communicate human ideas with other humans.
AJ_O’NEAL: Amen
KYLE_SIMPSON: It is not at all unfathomable to believe that computers may develop a more efficient way of communicating with themselves. I mean, they already have binary code, and computers, as they develop more and more AI, will develop more and more efficient ways of communicating with themselves. But the purpose of the textual source code that we write is not for the computer anyway. If we want to go down the road mentally of thinking that tools are going to start generating code, they're not going to be doing the kinds of things that we're talking about, where they're looking at the line of code that you wrote and writing the next four lines of code, because that's absolutely, ridiculously the most inefficient way that they could possibly do so. If the computer wanted to do stuff for us, it would find a much more efficient fashion. It wouldn't have anything to do with generating source code. So that may actually happen. There may actually be lots more layers of the stack, DevOps and monitoring and system admin and all kinds of things, that actually do start happening more automatically through intelligence, and I don't begrudge that that will be true. But the actual process of thinking about a problem that I want to solve, and thinking about how to translate my idea of an algorithm and a solution into human-readable source code so that my future self and other people can see what I was doing, that's not going to go away, in my opinion, any more than we might suggest that we won't ever have people writing poetry or novels anymore, because those are forms of human communication, and there's value in and of itself to that human communication. So I think we will always have a role to play in describing what we want to do. I don't think that the future of all software development is the brute-force machine-learned version, like the, you know, the million monkeys on typewriters way of programming.
I don't think that's the future of all programming. I think there are parts of technology which will go that direction. We will still be writing code. And to the point of our discussion, what I want to help make more of a reality is smarter tools that help us do that job better. And that's what I see Tabnine and Codota trying to do.
AIMEE_KNIGHT: All of the stuff that we've talked about leads to a question that I've been hanging on to for a couple of minutes, and that is the input into the model, specifically. So it sounds like, from what we've talked about right now, it's your own input. But what about somebody who is learning? Could they, in theory, tell it not to learn from them? Like, down the road, what if you didn't want your own input; you wanted somebody else's input, because you're learning or something like that?
KYLE_SIMPSON: Yeah, it's a great question. I told the folks at Codota that my interest in participating with them was to frame this tool not just as a productivity tool, which is kind of their main mission, but also to frame this tool as a learning tool, meaning that developers get better at what they do by seeing what the tool is suggesting, and then learning how to use the tool more effectively, that sort of thing. That's a key part of the reason why I'm involved, to realize that. So there are a couple of things that I want to say on that front. Number one, prior to me joining, the company already had an objective from a monetization perspective: if the tool can learn from open source code, it can also learn from closed source code. And so there's a clear and obvious monetization path where a company, XYZ Corporation, could say, hey, Codota, can you come in and train a model against our internal million lines of code, so that we have a model that is suggesting specifically what we do well in our own code? And we don't really care about the open source model, or maybe we care a little bit, but we actually care much more about our internal model. That's a clear and obvious monetization model. I'm not spilling any beans there; obviously people know that that's where these kinds of tools are headed. So they already had that on their agenda as something that they wanted to do. What I said when I joined was, that's one of those, you know, like that meme where it's step one, do something; step two, who knows; step three, profit. There's a whole bunch in step two before you get to the point where companies are actually doing that. And I suggested to them that a much more effective way of getting there, maybe a baby step, is to actually be able to do that for open source communities.
So for example, writing React code versus writing Vue code, you want different kinds of semantics and different kinds of patterns and idioms to be impressed upon the code. Well, what if we could take code bases from the creators of the React framework, or the creators of the Vue framework, or the creators of the Angular framework? If they curated a set of canonical, known-good, known-best-practice kinds of code, and we trained a model against those things, and then allowed developers to basically, in their settings, just check a bunch of checkboxes and say, I want to turn on the Vue model today because I'm going to be doing some Vue code. Now, all of a sudden, the suggestions are filtered through what they know about Vue code, not just all JavaScript or all open source code, and those suggestions start to lead you down more, hopefully, canonical Vue-like code. If we can do that for open source communities, we can also do that for experts. So we can imagine people like, for example, Dan Abramov; we could have a Dan Abramov model that is like his genius way of thinking about all the React code that he writes. And that could be a subscribable model, so that people in the community could say, I wanna write code like Dan does. And by the way, I haven't talked to Dan at all about this, so he may be surprised if he hears me saying it. I just picked the name out of thin air; he's a well-known guy. So Dan's model could be a thing that we trained on Dan's code, and then that was available to people. So this idea of subscribable open models is what I pitched them as kind of the interim step, and we are in the process of starting to look at that, both from a product perspective and from a strategic perspective. How do we find those communities? How do we find those people to start training those tools?
But that's where we're headed: a company could say, here's the model that we've trained, and we don't want this model to learn from anybody except maybe our lead developers. They could say that. An open source community could do the same thing. We could say, let's take all of the React code or all of the Vue code that's in the documentation site and let's train the model there. And that way the model that you're starting with, at least, is based upon known, canonical, good stuff. So I think the answer to your question, Aimee, about learners is that in the future, the tool is not automatically learning just from what you're doing, but it can also be filtering a lot of its suggestions through these subscribable models. So if you brought on a new developer to a team, you might not have the tool configured to learn from that developer, but you might instead have it configured to suggest from the model that was trained against the lead developer on that team. It's almost as if that lead developer is sitting with you and pairing with you at all times, but actually it's just coming through the form of autocomplete suggestion filtering. So I think that's how this tool evolves from its current, very watered-down mode, where it's like all open source code and then all code in my project; we need to narrow the learning models even further. That's point number one. And point number two: there is already the capability for, as I said, these rule sets. It's not really regular expressions, but it's kind of like a language where you write rules to match semantics together. Those kinds of rules could actually be opened up to be written by the people that are managing this at a company. So you could actually write rules, not all that dissimilar to how you write linter rules.
You could write rules to say, when developers in our company start to do this kind of thing, we want these kinds of suggestions to come out. And that's another way that you could refine what learners are getting, so that they learn not just about code in general, but about that company's or that project's specific code.
DAN_SHAPPIR: So I have a couple of thoughts about the things that you've said, but I want to kind of express them in stages. So stage number one. I guess that also the tool can or maybe even does learn from which of its suggestions you actually pick. So if it's making four suggestions and you tend to pick a certain one, then that probably also can be used as training for the tool.
KYLE_SIMPSON: There is a small degree of that already. I have already strongly emphasized that we need a closed loop here, where the way that I use the tool informs how I want the tool to improve. So here's the way that it currently does that. I keep going back to the for loop example just because it's a convenient one, even if that's not what a lot of people write these days. Let's say that it keeps suggesting the same for loop and you keep not picking it. The more code that's there that's not doing that, the less likely that item is to filter to the top of the list of its suggestions. So if you think about it, the model may have a thousand things it thinks you might be doing. It's only going to present you the top five or ten or whatever. It literally defaults to nine; that's where Tab9 comes from. So it's going to present the top N suggestions by weight. And over time, those unpicked examples will just be weighted less and less. Eventually, I want this tool to stop including them at all, because it's actually learning not only what you're picking, but what you're not picking. Or it could learn that you picked an option but then went back and edited it in a certain way, and it should figure that out and not make that same suggestion next time. So there's a wide range of things that we want the tool to do in the future. It's in very early infancy in terms of what it can do right now, but that's absolutely an important characteristic: it needs to learn not only from the code base itself, which is what it's currently doing, but also from how I use the tool.
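To make the closed loop Kyle describes concrete, here is a toy sketch in JavaScript. This is not Tab9's actual algorithm (which isn't public); it just illustrates the idea of ranking a large candidate pool, presenting only the top N, and down-weighting suggestions the user keeps rejecting until they fall out of the list. All names here are hypothetical.

```javascript
// Toy sketch of weighted top-N suggestion filtering with a
// rejection feedback loop. NOT Tab9's real algorithm; purely
// illustrative of the behavior described in the episode.

const TOP_N = 9; // the default Kyle mentions: "tab nine"

// candidates: [{ text, weight }] where weight is a model score.
// Returns the texts of the top-N candidates by weight.
function rankSuggestions(candidates, topN = TOP_N) {
  return [...candidates]
    .sort((a, b) => b.weight - a.weight)
    .slice(0, topN)
    .map((c) => c.text);
}

// Each time the user skips a suggestion, halve its weight so it
// gradually filters out of the presented top N.
function recordRejection(candidates, rejectedText, penalty = 0.5) {
  for (const c of candidates) {
    if (c.text === rejectedText) c.weight *= penalty;
  }
}
```

With this sketch, a for loop suggestion that starts with the highest weight but gets rejected a few times ends up ranked below the alternatives the user actually picks, which is the "learn from what you're not picking" behavior described above.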
DAN_SHAPPIR: And the next step in my sort of train of thought, and again, probably something that you've been thinking about, is that you can also go beyond merely suggesting stuff to add, and potentially suggest stuff to change or maybe to remove. If you've written something and the model thinks that maybe you've not written what you intended to write, because what you wrote doesn't match other stuff that you've written before, there's a higher likelihood that there's maybe a bug there. It can maybe highlight that as well.
KYLE_SIMPSON: Yeah, I'm glad you bring those up. So a couple of things I'll say. Number one, the interface that we have to the tool right now, Codota as a tool or Tab9 as a tool, is suggestions that are put into your autocomplete pop-up. The tool itself actually already is, and will be over time, much more than simply an autocomplete. That just happens to be the earliest and most convenient footprint for these kinds of intelligent, semantic things to filter their way into your coding experience. Developers are very used to getting autocomplete, IntelliSense kinds of things, so it's a nice, convenient way of piggybacking on their already existing habits. We're already talking in the product roadmap, and I don't speak for the company in terms of when these things happen or that they're definitely going to happen, but there are already discussions in the roadmap about several other modes of this tool having interactions in your code base. Here's an example. A lot of people probably know about this idea that when you start to use some item that you don't already have imported into your namespace, there are some editors and some configurations where it will automatically insert the import statement for that module at the top of your file. Some people love that, some people don't. I'm more in the latter camp. I don't love it when, especially off my screen, the tool starts adding code that I didn't know it put there. Sometimes it kind of annoys me when the tool does more than I want. So some people really like it and some people really don't. One of the ways that I can see this happening is instead of automatically inserting that import statement, in addition to completing what you just did on that line where you used the module, it might additionally prompt you in some way, not necessarily an autocomplete input, but it might prompt you in some way through your developer interface.
It looks like you've used a module that's not in scope. Do you want to import it? And then you could accept that, right? So that's one way that the tool could understand what you're doing and what you need and help you, but not do too much magic behind the scenes that you don't even see happening. Another thing you mentioned is kind of like correcting things. We know about linters. Linters statically analyze code and figure out if you're doing something that's not the right code style. Well, you can imagine a machine-learning-trained model with a rules layer on top of it for linting, where you could say: we not only have the ESLint linter running in our project, but we also have Tab9 configured so that when you do a certain set of things that we consider to be not best practice, after you've done them, it could pop up with a little linter-like warning in the gutter of your code editor saying, XYZ rule says that you probably shouldn't be doing it that way; we like to do it this way. So, going back to our for loop example, it could recognize that you wrote the for loop and it could actually say, hey, that for loop that you just did, we like to do that with map or .forEach or whatever. So there's that sort of mode that we can envision. And yet another mode we can envision is what I would consider to be a replacement mode. So one of the things that I brought to them was: what if you could determine that I was doing something like concatenating arrays together? What if that was a semantic that you could determine through some combination of the machine learning semantics and also rules?
If you could figure out that I was concatenating two arrays together, and as I was getting ready to commit a line of code, or as I typed out or even completed a line of code, what if you could then highlight that line of code and say, would you like to do this this way, and suggest an alternate form of that line of code that's maybe using more modern JavaScript syntax, for example? So that's kind of: take what you've already got in a piece of code and, rather than report an error on it, literally report a suggestion. And that's a little bit more like ESLint with the fix mode turned on, where it's popping up a thing saying, I think you're trying to concatenate two arrays. Would you like to do that with the dot-dot-dot spread operator, for example? So that's yet another mode that I think we'll be exploring in the product roadmap for this. And there are a dozen others as well. So expect that this tool will have a lot of different footprints in your editor in the future, besides just the things that show up in the autocomplete box.
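For concreteness, the modernization rewrite Kyle describes might look like this (a hypothetical example, not an actual Tab9 suggestion):

```javascript
// The kind of rewrite a suggestion tool could offer: replacing an
// older array-concatenation idiom with the ... spread operator.

const first = [1, 2];
const second = [3, 4];

// Older style the tool might detect:
const merged = first.concat(second);

// Modern equivalent it might suggest instead:
const mergedModern = [...first, ...second];
```

Both lines produce the same array; the spread form is what an ESLint-style "fix mode" suggestion would propose as the more modern syntax.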
DAN_SHAPPIR: Not anytime soon, perhaps, but I have to start to ask myself why the insistence on using JavaScript as a sort of the interface between myself expressing intent and the tool trying to implement my intent. I mean, you kind of gave a precursor to that with your example about the comment. I mean, can't I maybe envision this tool enabling some sort of a higher level or higher order programming language where I can still express myself in a way that's very human readable, maybe even more human readable because it's more expressive. And then the tool figuring out the intent and providing the lower level implementation in whatever because I don't really care.
KYLE_SIMPSON: That's an interesting idea. In my mind it almost sounds like macro-level kinds of programming, or metaprogramming. You know, kind of like Emmet, where you can type something in and it autocompletes a snippet of HTML for you, or things like that. I can envision some sort of that, but I think that is orthogonal to the goals of this tool. In other words, I think it's a useful endeavor to consider whether there are more efficient languages that humans can write that translate into the source code that we communicate with. But if the purpose of the source code is to talk with each other, then I wouldn't want to write it out in this nice, beautiful, abstracted thing, whatever that thing is, the successor to JavaScript or whatever, and then all of a sudden have that go away and be replaced with that uglier, older JavaScript, because then I've lost the important key thing, which is that I want to be able to communicate. So I see this as an orthogonal thing that is essentially programming language theory. Should we be designing new, higher-level languages that allow us to communicate better? Yes. Does that mean that a tool should be letting us write in that and then, right before our very eyes, transpiling it down to something else? I'm not sure that I see that being very fruitful.
One of the biggest pain points that I find as I talk to people about software is deployment. It's really interesting to have these conversations with people where it's: I don't want to deal with Docker, I don't want to deal with Kubernetes, I don't want to deal with setting up servers, all of these different things. In a lot of ways, DevOps has gotten a lot easier, and in a lot of ways, DevOps has also embraced a certain culture around applications, the way we build them and the way we deploy them. And I've really felt for a long time that developers need to have conversations with DevOps, or adopt some form of DevOps, so that they can take control of what they're doing and really understand, when things go to production, what's going on, so that they can help debug, fix, and find the issues when things go wrong, and help streamline things and make things better and slicker and easier so that they'll more generally go right. So we started a podcast called Adventures in DevOps. And I pulled in one of the hosts from one of my favorite DevOps shows, Nell Shamrell-Harrington from the Food Fight show. And we got things rolling there. So this is more or less a continuation of the Food Fight show, where we're talking about the things that go into DevOps. So if you're struggling with any of these operational-type things, then definitely check out Adventures in DevOps. You can find it at adventuresindevopspodcast.com.
AJ_O’NEAL: All right. Well, let's go ahead and have Aimee go first, because she is ready and raring to give us some picks.
AIMEE_KNIGHT: Yeah, in case I need to jump off here sooner rather than later. I feel bad because lately I've been picking more infrastructure-type picks and stuff, still doing some JavaScript, but I'm kind of doing two different things in my current role. So this was kind of cool to me as I was researching different Terraform tools. Terraform, if people aren't familiar, is a tool for provisioning infrastructure in Azure, Google, Amazon, and this tool will look at your Terraform file and predict the cost of what you're trying to provision. So I will drop a pick for that, because I thought that was kind of cool.
AJ_O’NEAL: Well, Aimee, we know that you're jumping up in the pay tier when you're trying to solve these kinds of problems.
AIMEE_KNIGHT: No comment.
AJ_O’NEAL: No comment indeed. All right.
AIMEE_KNIGHT: It's fun. I do it because it's fun.
AJ_O’NEAL: Oh, I know. I know. But I'm just saying, like when you get to the point where you're, that's the kind of problem you're looking at. You, I know you're up in the ladder there. So Dan, you want to go next?
DAN_SHAPPIR: Sure, why not? So we've been talking about tools to help you write cleaner, nicer code and automate your coding process. So I actually want to link to an excellent article titled "How to Write Unmaintainable Code," about, you know, job security, ensuring a job for life. They have some awesome suggestions. For example, we've been talking about how difficult it can be to pick appropriate variable names, or names in general. Well, how about using baby names for variables? I think that's an awesome idea. Another idea is to take whatever you thought the variable name should be, throw it into Google Translate, translate it into some random language, and then use that instead. A lot of excellent tips about how to make your code wholly unreadable and wholly unmaintainable, in order to ensure that your company can never ever let you go. And that would be my pick for today.
AJ_O’NEAL: Dan, I'm just worried that maybe some companies would let people go if they did that.
DAN_SHAPPIR: Yeah, and then you try to debug whatever I left behind.
AJ_O’NEAL: I just rewrite it. But anyway Kyle, you want to go next?
KYLE_SIMPSON: My pick is for anybody who has ever worked with the cookies API in the browser, document.cookie, which literally landed in the browser in October of 2000, a long 20 years ago. If you've ever worked with that and struggled, or pulled your hair out over how poorly designed it seems, where you're futzing around with string characters and trying to pull your cookies out: there is a proposal for a modern, asynchronous, Promise-based API for cookies, and it has landed in Chrome. They've implemented it and rolled it out. There's a bit of concern from some of the other browsers; they're not really sure if they want to implement it, but I think it's a really good idea, so we should start playing around with it. I literally happen to be writing some code right now on a little project of mine where I'm having to manage cookies, and I really wish I had a better API for it. So I'm glad to see some progress on that, finally, after 20 years.
DAN_SHAPPIR: I'm kind of amused that I've never thought I'd get a chance to correct you on anything browser related, but it's document.cookie. Singular rather than plural.
KYLE_SIMPSON: Sorry, not document.cookies; it's document.cookie.
DAN_SHAPPIR: Yeah, you write many cookies into document.cookie. That's an amazing API, by the way, where you assign to a string, but instead of replacing the string it actually adds to it. An awesome API. An amazing API. A really interesting API. I actually agree with you that access to cookies should have been asynchronous; essentially almost any JavaScript API should be.
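To see why Kyle calls the old API painful: document.cookie hands you every cookie as one semicolon-delimited string, so reading a single value means string-fiddling like this. This is a minimal sketch of the idea only; real-world code needs more care with URI decoding and edge cases.

```javascript
// Minimal sketch of pulling one value out of the document.cookie
// string format ("a=1; theme=dark; b=2"). Illustrates the pain
// point; real code also has to worry about encoding and quirks.
function getCookie(cookieString, name) {
  for (const pair of cookieString.split('; ')) {
    const eq = pair.indexOf('=');
    if (pair.slice(0, eq) === name) {
      return decodeURIComponent(pair.slice(eq + 1));
    }
  }
  return undefined;
}
```

The proposal Kyle mentions, the Cookie Store API, replaces this string-fiddling with promise-based calls along the lines of `await cookieStore.get('theme')`, which at the time of recording had shipped only in Chrome.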
AJ_O’NEAL: Why would you... no, no, no, you do not. We're talking about, at max, like what, a four-kilobyte string? No, that does not need to be asynchronous. Absolutely not.
KYLE_SIMPSON: Okay. Right. It does, AJ, and let me tell you why. First of all, it needs to be asynchronous because the browser needs to be able to store things not always in memory, so that Chrome is not taking up 600 gigabytes of my RAM or whatever. The browser stores things like local storage in lots of different places, and not all of them are synchronously available, or if they were, it would be really bad for performance, if it, like, writes something to a page file or whatever.
AJ_O’NEAL: Then let the browser optimize that under the hood. Don't bother me with that.
KYLE_SIMPSON: But the other reason why it needs to be that way is that you need asynchrony when you're going to do cross-context communication. These multiprocess browsers are now running tabs and iframes in all these different processes. You can't have synchronous APIs coordinating across multiple processes. Anything where you can make a change in one context and have it be seen in another needs to be an asynchronous API, not a synchronous API.
DAN_SHAPPIR: Just a comment on that, AJ be aware that, for example, the local storage API was, at the time, made synchronous, and people have been using it to store like two megabytes of data. So maybe cookies are, in fact, limited to 4 or 8k or whatever, but people have been using local storage for huge amounts of data and then running into all sorts of performance issues.
AJ_O’NEAL: Yeah, and those performance issues are the punishment for doing it wrong.
KYLE_SIMPSON: The people who designed local storage have all unanimously come out and said, oh man, we definitely should not have made that synchronous. It should have been asynchronous.
AJ_O’NEAL: No, no, people just shouldn't use it the wrong way. And then, oh, anyway, some of that asynchrony can completely go under the hood. Nobody needs to know about it. It doesn't... whatever.
DAN_SHAPPIR: Just do an await and then go there.
AJ_O’NEAL: But if you tell me that they're gonna implement it with iterators, then I'm gonna go ape nuts. Again with the iterators.
DAN_SHAPPIR: Yeah, but you don't need to be aware that it's iterators. If it's iterators and it just works with the spread operator or...
AJ_O’NEAL: No, no, no, no. I'm talking like the query object, where they implement it with iterators and there's no way to console.log it without, like, doing... oh, well, I don't want to talk about it. I don't want to talk about it. My blood.
KYLE_SIMPSON: I can't wait for us to have a future episode of this, where we dig into AJ's problems with asynchronous in the browser.
AJ_O’NEAL: No, no, no, we're not doing that ever. We're not ever doing that. We're going to let that one just lie. Anyway, I'm going to do some picks, bring some positive energy back to the table here. All right, so first and foremost, what should be everyone's top pick is, of course, Three Wolf Moon. I referenced this the other day, but I didn't have the link for it. So now I've got the link for it. Three Wolf Moon, ladies and gentlemen. It is the ultimate in t-shirts. It's got not one, not two, but three wolves howling at the moon. This beats out Wolf Moon. It beats out Two Wolf Moon. This is Three Wolf Moon, and the people in the comments are going to tell you all about it. If you need some comedic relief for today, click on this link. Oh, dang it, I should have used an Amazon affiliate link. I'm going to fix that real quick, just in case you do decide to buy it. And you need to get your Three Wolf Moon comment reading on, because it is going to make your day better. I think this thing is like 10 or 15 years old. I mean, this thing is forever old, but the comments are hilarious and they just age so well. And there's a bunch of other things like this as well, but some of them are no longer politically correct. Like there's one about a pen or a pencil, and there's one about a jug of milk. There's these little hidden Easter egg gems on Amazon with all these great, wonderful comments that just make you laugh out of your nose holes. Anyway, I'm also going to pick WatchExec. In Node, you have Nodemon, but WatchExec is great because it's a single binary, it's cross-platform, and you can use it with any process. It doesn't have to be integrated into Node. So if you've already got something that watches files for changes and then runs a process when the set group of files changes, then, you know, great.
But WatchExec is super intuitive and very easy to use, with good options for specifying which extensions you want to listen to. It's got reasonable defaults, whatever. It's just a great tool. It's made my life better. And I've got a cheat sheet up at webinstall.dev slash watchexec. And then also, I kind of wrapped someone's Go library in a way to make it publishable, because a release hadn't been cut in a couple of years, and I may want to add some additional command line flags. It's called dotenv. If you're not familiar with .env files, this is something you need to go look at today. Never, ever commit .env files to your repository. Don't ever commit something like a config.js either. You have this .env file, you put configuration in it, and it's very simple: it's just key-value pairs, like name equals value, and then you get access to it in process.env. I think that's what it is. Yeah. This is part of the whole twelve-factor app thing. But anyway, the dotenv program is just a cross-platform way to do this. Again, in Node you already have a dotenv package, and in Ruby there's a dotenv package, but sometimes you come across a thing where you have access to environment variables, but you don't have a good way to inject the .env into the environment before the thing runs. And so this dotenv package just helps you do that. You run dotenv, give it the env file or let it assume the default, which is the .env file. It's spelled out d-o-t-e-n-v as the program name, but in the file name it's just the period. Anyway, yeah. And then you give it the thing to run, and it will run it with those environment variables. And I've got the cheat sheet up for that at webinstall.dev slash dotenv. And those are my picks for today. AJ out.
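Since AJ describes the .env format as "just key-value pairs," here is a minimal sketch of what a dotenv loader does. The real dotenv packages for Node and Ruby also handle quoting, comments within values, escaping, and more; this only shows the core idea.

```javascript
// Minimal sketch of dotenv-style parsing: turn KEY=value lines
// into an object you could merge into process.env. The real
// dotenv packages handle quoting, escaping, and other edge cases.
function parseDotenv(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    // Skip blank lines and comment lines.
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // not a KEY=value line
    env[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return env;
}
```

In a Node project you would typically just call `require('dotenv').config()` and then read `process.env.NAME`, and, as AJ says, keep the .env file itself out of version control.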
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C A C H E F L Y dot com to learn more.