Black-Belt Debugging with Chelsea Troy - RUBY 663

In this episode of Ruby Rogues, Chelsea Troy teaches us to hone our debugging skills to a razor-sharp edge. We learn how to actively improve our debugging skills, train our troubleshooting instincts, and apply practical strategies for tackling brain-bending bugs.

Special Guest: Chelsea Troy

Show Notes

In this episode of Ruby Rogues, Chelsea Troy teaches us to hone our debugging skills to a razor-sharp edge. We learn how to actively improve our debugging skills, train our troubleshooting instincts, and apply practical strategies for tackling brain-bending bugs.


Transcript


Hey, everybody, and welcome to another episode of the Ruby Rogues podcast. This week on our panel, we have John Epperson. Hello, everybody. Luke Stutters. Hey.

Hey. I'm Charles Max Wood from devchat.tv, working on stuff over at mostvaluable.dev. So go check that out. We have a special guest this week, and that is Chelsea Troy. Chelsea, how's it going?

Hey. It's going well. How about you? We're doing great. You wanna introduce yourself real quick?

Sure. Yeah. My name is Chelsea Troy. I am a software engineer. By day, I work on a couple of different projects that focus on saving the planet, advancing basic scientific research, or providing resources to underserved communities.

And then in the evenings, I teach a class called mobile software development at the University of Chicago in their master's program in computer science. And, oh, I don't know, I organize a couple of conferences here in Chicago when we're allowed to have conferences, which is, you know, not right now. So I spend a lot of time lately taking walks and going on bike rides because we only get 100 nice days in the city of Chicago. So... Nice. Have you had a heatwave out there like we've had out here?

So it's funny you ask. Last week was relatively warm, but this week, it's not gonna get back to 70 degrees until the weekend. It's been in the sixties, which is really nice for being outside, with the exception of the fact that it's been raining cats and dogs for a lot of that. And people enjoy the temperate weather so much that they just go out in it anyway with their umbrellas and raincoats, but I like a cloudy day without the rain when I can get it. Nice. Yeah.

Well, the topic that I have on our calendar is practical debugging, and it looks like you've given some talks on this. I found a blog post about it on chelseatroy.com. I'm a little curious as we get going, like, what is your approach to debugging? Because it seems like a lot of people just kind of go and tweak the code until it works. Well yeah.

So I guess the way that I debug really does depend on the situation, but one of the things that I have noticed in the way that we teach people to program is that we leave out certain skills when we are educating folks, and that is in part by dint of the resources that we make available and in part by dint of what we think of as programming as opposed to what I find that it really is, at least in my case. So the vast majority of the tutorials online, and even the majority of the courses that you'll see at undergraduate and graduate institutions, Udacity, you name it, are focused on teaching people how to do things. And in the vast majority of the tutorials that are available, the developer has already practiced how to do this several times, so they get it right in the tutorial, and then they go through the tutorial getting it right. They don't run into issues, or they don't run into a lot of issues. They certainly don't run into the number of issues that you would actually run into implementing something for the first time.

And that's for a number of reasons. They want the tutorial to focus on how to do something. They want the tutorial to be a relatively smooth experience for people, which is good, and it absolutely has its place. But one of the things that I think we undervalue in our programming education is all of the skills around that that we end up picking up over time. So a relatively experienced developer might have an easier time debugging than a relatively new one.

But right now, the way that we have it set up, the vast majority of people's intuition and skill around debugging has to come from essentially inductive reasoning from all of the situations that they personally have encountered in their career because we don't have a pedagogy around debugging. We don't have a praxis around debugging. We just sort of, like, figure it out from all of our individual experiences. Now for that same reason, we don't see particularly good translation between debugging on one stack and debugging on another stack. Take an extremely experienced Rubyist or extremely experienced Rails developer, and they'll be absolutely excellent at debugging in Rails.

Put them on a mobile stack, and they're gonna have a lot more trouble than you would expect based on the amount of experience they have in Rails because there isn't a generalized practice for debugging. And so a lot of people's quote, unquote debugging skill comes from "I have seen literally this exact bug before" as opposed to "I have seen bugs like this before and I know what might be causing it" or "I have some intuition about how to narrow down what might be causing it." And so we end up with these extremely specific, extremely personal, experience-based debugging strategies like you're talking about, where people just kinda mess around with things until it works. And the more experience they have with a particular stack, the more they'll be able to hone in on what to mess around with. But then when they switch stacks, they lose a lot of that.

And I think that a lot of that can be solved by taking a more deliberate approach to the way that we debug and the way that we teach debugging, because the vast majority of the time that we spend as programmers isn't usually on cranking through solutions where we already know what we're going to end up with at the end. And in the situations where we are doing that, where we're cranking out a whole bunch of code and we know exactly how it's gonna work, a lot of times developers think of that as rote. They don't really like it. What they wanna do is approach new challenges. And when we're approaching new challenges, we're spending more than half of our development time looking at something that's not working the way that we were expecting it to work and needing to figure out why it's not working the way we expect it to work and how to get it to work.

And because we don't model that, because we don't teach that, folks assume that a, quote, unquote, good developer spends less time doing that than they actually do because we don't see it modeled anywhere. And they, in addition, spend more time doing that than they might need to if they had a strategy for going in and debugging something in a general sense. Does that make sense? Yeah. Makes sense to me.

I'm one of those people that, yeah, I just wanna make it work. Right? And so if I can shortcut even a well-reasoned approach, I'm gonna try it. Yeah.

It's a tough intuition to fight, and I think that there's a reason for that. I think that we think about programming as being single mode when there's actually two modes to it. So the first one we might think of is, like, building mode or productivity mode. This is mostly what we see modeled. This is mostly what we see in tutorials, and this is mostly what we're striving for.

This is also what we reward software engineers for. If we see them spending a lot of time in this, like, cranking building mode, we think of them as a good software engineer. And in that mode, we're focused on getting something working fast. Our focus is on creating something that wasn't there before, and speed is, like, of the essence in this mode. So the problem comes in that that mode is most effective when we understand what our code is doing.

In a case where we're facing a bug or in a case where we're facing an issue in our code, by definition, we don't quite understand what our code is doing. Because if we did understand what our code was doing, it wouldn't be doing the thing that we don't want it to do. So the base assumption that makes the building mode work is no longer true, but we continue to try this building mode. We continue to move as if speed is of the essence, as if the focus is on getting something to happen. And it doesn't end up working particularly well, and it works worse and worse the less we understand exactly what our code is doing because speed relies on making quick judgments.

Right? Speed relies on heuristics. It relies on assumptions. And when our assumptions are correct, then we can move faster. But if we don't understand what our code is doing, there's some assumption that we're making that's incorrect.

And the faster we try to move, the more likely it is that we gloss over that assumption, that we don't take the time to reexamine that assumption. So the faster we try to move, the lower the likelihood that we're gonna be able to end up catching an insidious bug because we're not reexamining those assumptions that we're making. So there's really a second mode that we need to be aware of in programming, and it's something that we need to switch into when we don't understand what our code is doing. We need to be able to switch from that building mode into an investigative mode where we're no longer focused on creating something that once upon a time wasn't. And now instead, we're focused on understanding exactly what our code is doing, but maybe more importantly, understanding our assumptions about what our code is doing because somewhere in those assumptions, something isn't matching up.

That's our opportunity to slow down, to compare what our code is doing to what it is that we think our code should be doing, and narrow down the location in the code where the difference is happening so that we can then fix it and bring our assumptions and the code's function back into line to resolve the issue. You don't have to make me slow down. Go ahead. Yeah. So, alright. Talking about these modes, since we got here, because I think this is super interesting, and it appears that you have thought quite a bit about this because you're super eloquent in talking about it.

Thank you. I appreciate that. So there's a lot of language here, and I think I vaguely understand what's going on. Hopefully, I understand it slightly better than that. How do you teach this to somebody?

Because as I was sitting here listening to you talk, I'm like, okay. Yes. I recognize this. I mean, I've been doing this for almost 15 years. So at some point, hopefully, I've gotten sort of good at this. Right?

And I feel like I have stumbled my way through sort of having these two modes. Right? Or at least I can look back on my past and kind of feel like I recognize that. And as I think about, like, people that I'm mentoring, this is kind of the frame that I'm thinking about right now. This is definitely a stumbling block.

Right? So, for example, I have one person that I mentor right now who is always in build mode, and every week I'm spending all this time and energy trying to get this person to slow down. Right? So, like, are there ways... do you have tips? Maybe you don't.

Maybe you're just like, look, I'm just talking about what we should do, not necessarily how we get there. Right? That's fine. But I'm curious, if you have tips, how do you convey this to somebody? Hey.

Look. Like, here's the value of slowing down. Maybe here's a framework for when you choose to slow down and when you, you know, flip back to build mode. I know there's a lot of weird questions in there. Take what you want out of it.

Yeah. No. It makes sense to me. I think that you're right. So one of the things that we need to be able to identify in practice is when do we need to shift between these two modes?

What signals can we use to determine that we should be in a building mode versus an investigative mode? So there are a couple of different strategies that we can use for debugging. Right? And there's one in particular that tends to show up really, really commonly when we are in build mode because it is effective for build mode. I call it the standard strategy.

And what this is is that when you run into a bug, you try changing the thing that you think is most likely to be causing the bug, and you see if it works or not. If it doesn't work, you try changing the thing you think is second most likely to be causing the bug, and hopefully that works. And if it doesn't work, then you try changing the thing that you think is the third most likely to be causing the bug. This is perhaps the fastest kind of shortcut strategy to get something working when you understand what's happening in your code. Because when you do understand what's happening in your code, the thing that you try first is in fact the most likely thing to be causing the bug, and you get through it relatively quickly.

That's not gonna be the case with insidious bugs. So that's not gonna be the case with a bug where you're making an assumption that's fundamentally inaccurate to what it is that the code is doing, because the things that you think are the most likely to be causing the bug are, like, in line with your assumptions about what the code is doing. So standard strategy, great for build mode. Here's where it falls down: if you don't understand what's happening in your code, you can end up in this kind of vicious cycle with the standard strategy where, for example, you try changing the thing you think is most likely to be causing the bug, and it doesn't work. So you try changing the thing you think is second most likely to be causing the bug, and it still doesn't work.

So you run out of ideas, and then you get pissed. And you go back and you try changing the thing that you tried the first time to see if it works this time around, and you get in this kind of, like, vicious cycle with it. Right? That's the failure mode for the standard strategy. That's where build mode fails us when we don't understand what's going on with our code.

And that, I think, is a really strong signal. When you're going back and trying the thing again that you already tried and saw didn't work, yeah, sometimes that works. Maybe that's what it was. There was some config issue or something like that. If it doesn't work the second time, I think that's your, like, really strong signal that it's time to switch into investigation mode.

Because at that point, clearly, you've made some assumptions. Something somewhere is not matching up. So then, if the standard strategy is not gonna work for us, we have to have strategies that we can use instead. Right? There's gotta be something other than the standard strategy.

You know, instead of focusing on getting the thing working as quickly as possible, we can instead pick a strategy that focuses on something else. So the strategies that I use at that point both focus on... there's two. I'll go over them. But they both shift the focus from "how do I get this thing working" to "how do I identify the place where my assumptions about what's happening do not match up with the code and what it's doing." Right?

So there are a couple. One of them is helpful for speed; in the cases where it's not possible, I use the other one. So the first one, which I can use for speed, is what I call the binary search strategy. Basically, the way that this works is that I identify some point where the code begins executing, right, and identify some point either where the code finishes or at which point I'm positive the bug has happened. So I have these beginning and end points.

Right? Now I try to pick some spot in between those two, kind of in the middle. And at that point, I use a breakpoint, or I use print statements, or any of a number of tactics to get into the code and test that all of the assumptions that I am making at that point are correct. So I'm printing out variables, determining what the flow of the execution is. And suppose I test all of my assumptions at that point in the code, whatever I'm printing, logging, breakpointing, and everything's working as expected.

If everything's working as expected at that point, then I pick a point halfway between there and wherever my code finishes executing or the bug happens. And one of the things that we know from executing binary search in computer programs is that it's pretty fast. You can narrow down, in relatively few steps, from a really wide range of options what could be going on, even if you've got tons and tons of code in the middle. We're talking about, in a really complicated system using the binary search strategy, 5, 6 steps maybe, which sounds like a lot until you consider that you might try, like, 14, 15, 16, 20, 25 different things in a really complex system if you're just trying to move straight to getting the thing working.
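To make the binary search strategy concrete, here's a minimal Ruby sketch. It isn't from the episode; the pipeline stages, values, and the planted bug are all invented for illustration. The idea is to probe roughly the midpoint of the execution path, check your assumptions there, and discard whichever half checks out:

```ruby
# A hypothetical three-stage pipeline with a planted bug.
def parse(raw)
  raw.split(",").map(&:to_f)
end

def apply_discount(items)
  items.map { |price| price * 0.1 } # bug: keeps 10%, should take 10% off
end

def total(items)
  items.sum.round(2)
end

raw = "10.0,20.0,30.0"

# Probe roughly the midpoint of execution and test the assumption there.
items = parse(raw)
puts "midpoint probe: #{items.inspect}"     # [10.0, 20.0, 30.0] -- as assumed
raise "bug is in the first half" unless items == [10.0, 20.0, 30.0]

# First half checks out, so the next probe bisects the second half.
discounted = apply_discount(items)
puts "second probe: #{discounted.inspect}"  # [1.0, 2.0, 3.0] -- NOT the assumed [9.0, 18.0, 27.0]
# The mismatch is now pinned between the two probes: apply_discount.
puts total(discounted)
```

The payoff she describes is logarithmic: each probe halves the remaining range, so even a hundred suspect steps need only about log2(100), roughly 7, probes.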

So I just wanna point out with your standard strategy, I'm a little bit weirded out that you've been watching me code all these years. You know what? The problem is I've been watching me code all these years, and I do that constantly. You know? The, like, maybe there was a ghost in the system the first time.

Let me try it a second time, see if the ghost is gone. I've definitely done it. I think everybody's done it. You know? Yes.

But what you're saying makes sense as far as the binary search strategy. It's the same idea as a binary search tree, and I'm assuming that you got the idea from some structure like that. But, yeah, you know, I mean, we talk about Big O and everybody makes a big deal out of it as an interview question, but here it actually is a measure of speed. Right? Because it's log of n, where n is, you know, the number of lines in the area you've identified as where the bug may be occurring.

Mhmm. So, you know, and the fact that you can pick chunks of the code instead of, you know, specific locations within the code allows you to narrow down. Well, it's either this chunk or that chunk. Mhmm. And so, you know, you might have to log, you know, to narrow it down, instead of... essentially, you're just doing a random selection.

Yeah. You know? It's an informed random selection, but it's still random-ish, and you're hoping that you hit the right thing. Because it's not just that you change it in the right place, but you have to change it the right way too. Right.

Yeah. It's absolutely the case. So, yeah, I love using the binary search strategy for that. I think I started using it for that after I was using git bisect, which essentially allows you to similarly use a binary search strategy to narrow down what commit something went wrong in. And, you know, I sort of, like, knew this cerebrally from having studied computer science, but using git bisect, I realized it takes me a lot fewer steps, let's say, 6-ish steps, to figure out what's going wrong, like, which commit out of a hundred commits something went wrong in.
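For reference, here's roughly what that git bisect workflow looks like when driven by a small Ruby check script. This is a sketch, not from the episode: the script name, the tag, and the test file are invented, though the `git bisect` subcommands themselves are real.

```ruby
# bisect_check.rb -- pairs with `git bisect run` (file names are hypothetical).
#
# From the shell, something like:
#   git bisect start
#   git bisect bad               # the checked-out commit exhibits the bug
#   git bisect good v1.2.0       # a commit known to be bug-free
#   git bisect run ruby bisect_check.rb
#
# git bisect checks out the midpoint commit of the remaining range, runs
# this script, and uses its exit status (0 marks the commit good, 1-124
# bad, 125 skips it) to discard half the commits each round -- about
# log2(N) rounds in total.

passed = system("ruby", "-Itest", "test/checkout_test.rb")
exit(passed ? 0 : 1)
```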

A hundred commits sounds daunting at first, but 7 steps doesn't sound nearly as daunting. And so... Mhmm. This strategy could also be applied there. Now, the downfall of this strategy: it's great for speed, but it works best when you're talking about, like, a single-threaded application or a situation where you don't have multiple processes where things can break off, where the order is deterministic. And so when you're dealing with multiple threads or some other kind of nondeterministic-order type of situation, the binary search strategy can be tough to use because you don't necessarily know, from beginning to end of execution, like, what the order is.

But what you do know is that you start somewhere, and then at some point at the end, something went wrong, and all this stuff that happened in here, whatever order it happened in, something went wrong. So in those cases, I have to back up and use a potentially slower, less flashy, less cool strategy, which is essentially to start at the beginning of execution and, like, at the very beginning of execution, do the same thing: print, log, breakpoint, test my assumptions. And if they're wrong, then maybe that's where my issue is getting caused. And if not, then move a little later in the execution. Basically, follow the code path through, which feels really slow, especially if we start at the very beginning and all our assumptions are right.

We start one step down, all of our assumptions are right. We go two steps down, all of our assumptions are right. Okay. This is getting boring. When do I get to skip ahead?

And then we get to step 3. Oh, wait a minute. I was assuming... I was just skipping to step 8 all of this time, and the problem was in step 3. So suppose that there are, like, 15 steps there. That sounds like a lot. But if your problem is in step 3 or step 4 or step 5, you only have to try as many until you figure out what the actual problem is, which at most is gonna be 15.
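Here's a minimal Ruby sketch of that follow-the-code-path fallback, again invented for illustration rather than taken from the episode. Since this is the strategy you reach for when execution order is nondeterministic, each trace line is tagged with the current thread so interleaved output stays readable:

```ruby
require "logger"

LOGGER = Logger.new($stdout)

# Check one assumption at the current step, in order, instead of
# jumping straight to step 8. (Steps and values are invented.)
def check!(label, expected, actual)
  LOGGER.info("[thread #{Thread.current.object_id}] #{label}: #{actual.inspect}")
  raise "assumption failed at #{label}: expected #{expected.inspect}" unless actual == expected
end

order = { items: [12.5, 7.5], discount: 0.1 }

subtotal = order[:items].sum
check!("step 1: subtotal", 20.0, subtotal)            # passes -- keep walking

discounted = subtotal * order[:discount]              # bug: should be subtotal * (1 - discount)
check!("step 2: discounted total", 18.0, discounted)  # fails here, two steps in
```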

Whereas if you're trying the same thing a million times at step 8, that ends up still being more steps, even though this process feels really slow. Because, again, by definition, we don't know what's wrong, so it makes sense to systematically figure out what's wrong. The other really nice thing about that strategy, or both strategies, really, is that we can feel like we're making progress on a bug even if we haven't fixed it yet. Because psychologically, the way it feels... well, I'm interested in y'all's thoughts on this, actually. But my feeling when I'm fixing a bug using something like the standard strategy is: my first assumption didn't work.

I've gotten nowhere. My second assumption didn't work. I've gotten nowhere. I feel like I've made no progress until I have finished fixing the bug, which really sucks right up until the moment where I finished fixing the bug. Whereas if I have a systematic strategy, I could say, alright.

Well, I haven't fixed the bug yet, but I have narrowed down, like, half of the possibilities for where it could be. And I know that I've narrowed down half of the possibilities because I've quantified the possibilities, and I'm moving through them with some kind of, like, plan as opposed to guessing. And if my guess is wrong, I don't really know how much longer I'm gonna be doing this. You know? There's another downside to that whole standard strategy too, which is if you continue the standard strategy right through the point when you don't know what's wrong anymore.

Right? But you continue just playing guess and check. Now all of a sudden, you probably didn't take good notes as you were doing this. And when you do finally fix it, you don't know why it now works. And so this just becomes a boogeyman that you're, like, scared to go back and touch again, because you're just, like, well, I can't touch any of that stuff. I literally have no idea why it's working now.

I mean, I have done this, and I can't even tell how many times, like, I realized somebody else was doing this because they're, like, shoot. I don't know why it's working. You know? Yeah. I think it's really funny when people talk about how programmers... you know, oh, software engineers or engineers, they're not superstitious or any of that stuff.

Everything's evidence-based to them, and I'm like, no. Definitely not. That's not how this works. Like, the longer you work with software, the less you trust software. There's a reason for that.

And, yeah, you know, I'm coming out the other side, personally. I feel like my code is my friend now. Now other people's code is the enemy. My code, code I've written and no one else has touched, is like a lovely warm blanket. It's like something to come home to.

Uh-huh. So yeah. I do feel like there's a big difference between debugging my code and debugging other people's code. Oh, absolutely. Is that like other people's kids?

So obviously... and I tell you what I feel like the difference is. It's that when I'm writing something, I tend to find the bugs at the time. Yeah? So this probably isn't true, but it feels like it. I feel like I'm finding the bugs when I'm writing it. But when you've... I mean, it happened the other day.

I was asked to come and look at a Rails 5 site and work out why it was being a naughty website. And it's someone else's code, you know. And you've just got no idea what's going on at all, and it just feels like a much more difficult process. Mhmm. And I also get a lot angrier when I'm debugging other people's code.

I'm a lot more forgiving of my own code. Is there, like, an anger management approach to debugging? Oh, man. So it's funny that you mention this, because I'm working on an implementation right now of the Raft distributed consensus algorithm, and I'm the only person who has worked on this code, which means I got to make all my own stylistic choices about it. And, you know, all the messages are, like, cutesy, definitely not what the original Raft team would have had their servers do, and they're very nice to each other.

They say please and thank you and all that stuff. It's very Canadian. But I find that, like, the problems in that code, the idiosyncrasies in that code, are endearing to me because it's mine. You know? The same way that my friends' idiosyncrasies are, like, endearing. Whereas, like, if somebody else were to try to run this code, they would probably be like, this is ridiculous. Like, why does it... no. I don't like this at all.

Like, why does it this is no. I don't like this at all. But it's fun to me, and they did this. I'm not gonna cite this correctly. I'll go back and find it, and we can put it in the show notes.

But they did this study about people's opinions of their own origami. Are you all familiar with this study? So, essentially, they taught some people how to do origami, people who were not particularly skilled at origami to begin with. And what they found was this really interesting effect where the objectively worse the finished product was, the more the creator loved it, because it was an indication of how much they had struggled with the origami before they finally got something sort of working. And that struggle played a role in their fondness for the finished product more than the objective judgment of the finished product, and I think that probably happens with code, too.

What does that say about my code? What's the message here? I don't like the way this is going. Well, it's that you have context on, like, all of the problems that you've already solved in that code. Right?

And other people, they don't have that. They just come in, and they just assume everything that's working was always working, and they focus on the things that don't. And I think, you know, we struggle with that. And of course, when you're debugging your own code, you have a little bit more intuition for what was happening, and you don't have to back-translate from the code to the original intuition the way that you would with somebody else's code. And when we go in and we maintain somebody else's code... I do this a lot in my job. I find that many of the projects that I take on are relatively complex, relatively undocumented, relatively untested code bases where I'm going in and I'm adding documentation.

I'm adding tests. In some cases, I'm going in and the original developer is gone and nobody knows how it works, and I have to figure it out. And in that case, we're not just talking about debugging. We're not even at debugging yet. We're starting at, like, forensic software analysis, effectively.

We have to be able to go in and, like, CSI this code and figure out from clues what's going on in there, and figure out, you know, based on these indicators in the code, I think originally this piece of it was supposed to do this. And that's a whole separate skill set from debugging, which is also, you know... forensic software analysis is a fun thing to get good at too. But once again, like debugging, if we're not promoting that to a first-class skill set, if we're not recognizing it and teaching it as a first-class skill set, it just feels to people like either wasted time or like an indication that they're not good at their jobs. And it's not that they're not good at their jobs. It's that this is a skill that takes time, and the fact that you need to take time to do this thing doesn't mean you're bad.

It means that this is one of the things we have to take time to do when we're doing this job. The same way that a cowboy... you know, it's not just about riding. You also have to be able to saddle up your horse. You have to be able to feed your horse. You have to have cattle sense or whatever it is they have.

You have to be able to cook your beans in a can on the campfire, you know? You gotta be able to do all this stuff that's not just riding your horse. Right? And it's not wasted time to do all those things. It's stuff you gotta do to be, you know, a cattle person.

And I think it's the same with debugging and forensic software analysis. We've got these skills that we need that we don't factor into our mental model of what it means to be a good programmer, and we really should. Can I ask you about reproduction? Sure. Because... What's your question?

Most... When a man and a woman love each other very much... I'm so glad you picked that up. The hardest bit I have, especially in Ruby, because there's so much Rails floating around, is reproducing the expletive expletive bug. Because a lot of the stuff I do is kind of ecommerce sites, and you get people phoning up saying your website just done that. And then the client phones me up, and they say, like, once or twice a day, the website does this. Mhmm.

Alright? And for most people, it's fine. But it's those edge cases where you're hitting a bug, which can't be easily reproduced. And often, once you have reproduction on that bug, solving it is just like the easiest thing ever. It's just getting it to do it again.

So are there any tips for getting it to do it again? That's a good question, because that can be a really tough one. You know? It can be hard to replicate the exact environment that it happened in, the exact state of the database that it happened in, the exact, like, whatever the configuration is that's going on. We know with bugs that are tough to reproduce that if we're not able to get it to reproduce, there's some aspect of the configuration, outside of the environment that we're currently modeling, that is different from what the person who saw the bug originally had.

And I think there's kind of two parts to this. The first one is recognizing which aspects of the environment are missing from our understanding. And this one is tough, but it's the one between the two that I think software engineers have a little more success with than the other one. Because the other one, and this is an ongoing struggle, is figuring out how to get the details of the entire environment to be able to replicate it 100%. Because when somebody phones in and they say, like, my app just did this, it's not necessarily true that that person has access to the details of the entire environment where the problem is cropping up, or that they're even, like, aware of all of the environmental variables that might be causing the issue.

And so in those cases, it can be really, really tough, and sometimes the solution ends up being, alright. So it happens once or twice a day. We take all of the issues of this type, and we put them in a bucket. And suppose that it happens, so we get 2 of them one day, 2 of them the next day, and 2 of them the next day. So at what point does this bucket have enough data points in it that we can go through and systematically compare it to all of the times when this works correctly that aren't in this bucket?

What environment variables do we think could be different between those two situations? And so, unfortunately, I think sometimes we gotta, like, wait and figure out from this collection of individual instances what might be different about the individual instances from the way that, like, normal circumstances operate. So there may be some valuable transfer here from techniques that medical researchers use for diagnosing and understanding rare conditions, where you just don't have that many people who have it happen. But in order to get solid research on a condition, you have to sort of have a minimum sample size. And so what they'll do is they'll try to create a record of everybody who has this condition and hopefully get the sample size up to, you know, 20, 50, a hundred, something where they can start using some aggregate analysis techniques to figure out what the difference might be.

But I think sometimes with rare bugs, like I said, it's tough, and if it's not happening all the time, it can be difficult. But sometimes it's not just that it's tough in the first place; we're making it tougher on ourselves by not taking that systematic step either. We don't have a record of all the times this rare bug happened so that we can start to implement some aggregate analysis when it's happened 20 times, 50 times, a hundred times. And we can tell customers in the meantime, like, you know, we know this is a really thorny issue. We're not really sure why it's happening.

We're in the process of data collection right now to see if we can figure out why it's happening. And on the back end, be collecting that somewhere so that we know when we get to some certain number where we think we can try to do some aggregate analysis on this, and then we'll set aside time. We'll put a ticket in the system: when we get 20 examples of this, we'll go back and take a look at this in the aggregate. As opposed to, each individual time it's happening, attempting to, like, debug it based on that single instance without looking at any of the instances in the past.
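Here's a minimal Ruby sketch of that bucket-and-wait idea, invented for illustration: the JSON-file store, the fingerprint string, and the context fields are all hypothetical (a real app might use a database table or an error tracker's grouping feature instead):

```ruby
require "json"
require "time"

BUCKET_FILE = "rare_bug_bucket.json"
REVIEW_THRESHOLD = 20 # enough instances to attempt aggregate analysis

# Record one occurrence of a hard-to-reproduce bug with as much of its
# environment as we can capture, so instances can be compared later.
def record_rare_bug(fingerprint, context)
  bucket = File.exist?(BUCKET_FILE) ? JSON.parse(File.read(BUCKET_FILE)) : []
  bucket << context.merge("fingerprint" => fingerprint,
                          "seen_at"     => Time.now.utc.iso8601)
  File.write(BUCKET_FILE, JSON.pretty_generate(bucket))

  count = bucket.count { |entry| entry["fingerprint"] == fingerprint }
  warn "#{fingerprint} has #{count} instances -- review in aggregate" if count >= REVIEW_THRESHOLD
end

# Example: called from a rescue block around the flaky code path.
record_rare_bug("checkout-timeout",
                "user_agent" => "Mobile Safari",
                "cart_size"  => 17,
                "region"     => "eu-west")
```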

This is a little bit unrelated, so I promise we'll come back to the actual subject of debugging. But we see a similar tendency in organizational dynamics on software teams where, like, if somebody's demonstrating a pattern of behavior of, like, mild microaggressions, or, like, one little bitty thing that's not really worth addressing on its own, sometimes those situations can be insidious, because in any given situation, it's, like, not worth bringing up. But over time, this person demonstrates a pattern of doing this, and we don't have any, like, aggregate strategies for saying, you know, there are these teeny microaggressions, and any given one of them might be a mistake, but it's happening, like, regularly over the course of years, and so it's a pattern that we need to address. I found that it can be really helpful in organizations as well to somehow, even if we're not addressing those things when they happen, keep some kind of record of them and then address the pattern when it's clear that it's a pattern.

Because at that point, it's no longer about, like, whatever the latest incident is, which wasn't that big a deal. It's about the fact that the individual incidents, though not a big deal individually, aggregate to form a pattern that is, you know, ever so slightly dragging on our team. Ever so slightly increasing our turnover, our churn on who leaves the company. And it's a cost that over time we can't afford, but it's tough to address at individual instances. So I think there is a parallel there.

But I know we're not talking about organizational dynamics. We're talking about bugs, so I'm happy to come back to the code side of things. Some people have bugs in their personality, and they need to be fixed. Oh, yeah. Absolutely.

One place where we have a solution for this. Right? So, a clock. Your clock gets off by a little bit, like, every moment. Right?

Yeah. But you don't go to your clock every two minutes to go fix it by, like, the tiny milliseconds that it's off. You do it, like, once or twice a year, maybe, or something like that. Right? So this is sort of the same thing.

Right? You set up a system that you can check in on or whatever it is. Right? But you have to design a system around it. Like, you can't let it go, because otherwise your clock doesn't work anymore, but you also can't check it every moment.

You have to come up with some system that works for you. I feel like it's kinda like the timeboxing thing. Like, everybody that I mentor, I teach to timebox. Mhmm. And they're, like, well, how long do I timebox for?

And I'm just, like, that's actually a personal thing. You just kinda figure it out for you. Like, I have my own timeboxes that work for me. I tend to break my days down into, like, half days, more or less, just because that works really well for me. Like, you know, I eat lunch.

This is like a natural break. And because I'm, like, ADHD and I get super hyperfocused on something, I always come up for air at the half day. So I can guarantee that I can at least timebox for that. You know? Mhmm.

It's all sort of the same thing. But my point is, like, you have to design the system, getting back to the subject or whatever. And I don't feel like you're saying something different. I feel like it's gonna be a personal thing kind of thing here. Yeah.

I would say so. I think so. This conversation is making me wonder whether Sentry and similar error logging platforms have a way for you to, like, automatically put certain types of bugs in buckets and then, like, alert you when the bucket has a certain number of issues in it or something. I know that some of them let you bucket. Right?

So if I have, like, the same kind of bug happen again, it, like, buckets it together. It says, hey. Here's an instance. Here's, you know, a list of 10 instances where it happened. Mhmm.

I don't know if there's an alert. I mean, I've used, like, 3 or 4 of them, and they usually give you an option to, like, give you a daily briefing or something like that. But... Right. Like, I know that Sentry has a button that allows you to just mass-ignore a certain type of error. So I wonder if there's the opposite.

You know? Like, if there are enough of these, tell me. We should discuss mass-ignoring errors, because, like, leaving the beeping signal on all the time is also a problem. Right? It's like the electrical tape over the check engine light.

I don't need to know about this. That means when you really do have a problem, you have no idea. That's absolutely true. So this is something that I've run into a fair amount with end-to-end tests on mobile applications. And part of this is that sometimes the end-to-end frameworks are, like, a little flaky at their core. I get you.

That's absolutely true. But the signal-to-noise ratio is, like, low enough that sometimes developers start completely ignoring their end-to-end tests to the point that they don't even look at why they're failing. And at some point, the reason it's failing, like, sneakily changes, but people don't notice because they just see it fail, and they're like, oh, that test always fails. And then a few months later: something was wrong in the app for months. Oh, we didn't realize this was wrong.

Why didn't we realize it was wrong? Or in the best case, somebody goes in and they actually take a look at the end to end test, and they're like, wait a second. Wait a second. It's actually pointing out an issue. This isn't just flakiness.

It's, like, actually a problem. Oh my gosh. So Andrew Mason, who actually used to be on the show, so I chat with him every week, and he was just discussing this exact or similar style of problem last week. Right? Like, he was trying to deal with a code base where, like, it was just failing.

Sorry. I'm totally about to state an opinion here, so it could be controversial. But he was, like, well, you know, which test do I delete? And I was, like, dude, if your test suite is failing, it's providing zero value right now. I was like, start commenting stuff out until it's green.

Because a test suite provides zero value until it's green. And you were talking about, like... I mean, I've been at places before where they let these tests go, like, all the time, where they just rerun their test suite, like, 4 times till it passes. Right? But that flaky test is telling you something sometimes. Anyway, so my point is, like, I'm a big believer in deleting, like, broken flashing lights.

Alright. I'm done. How do you feel about coming back to flaky tests on some kind of regular interval, similar to, like, coming back to rare bugs on a regular interval, and, like, timeboxing attempting to fix them? I think that's the same thing, but I'm not really a big believer in... okay. So I work pretty hard on self-discipline for myself.

Right? And, you know, I have friends, and I trust their self-discipline to a point. But as far as, like, trusting a general random developer that I don't know to, like, come back to a thing at a regular interval, I have almost zero trust for that. So my answer is I just delete it instead, because I don't trust the other guy, I guess. Kinda like Luke hates the other guy's code.

Oh, man. It happens more as you get older. Oh, really? The trust goes down? I used to believe in other people's code. Then I started working on Rails.

Oh, man. So a friend of mine, hello, Wayne, he also keeps a software engineering blog, and in one of his posts, he's talking about Uncle Bob and Uncle Bob's approach to software resilience. And in talking about that approach, he points out that parts of the approach rely almost entirely on telling developers to be more disciplined, and that doesn't work. Like, discipline as the solution for making something not happen is never gonna make that thing not happen, because you can't just get an entire population to all exercise discipline to the threshold that you would like them to. Code fewer bugs, guys. Just be better programmers.

What is your problem? Then you don't have to debug. So Uncle Bob was saying that we should be more disciplined, and you're saying that's not possible. Well, what I'm saying is not that it's not possible to be more disciplined, but rather that it's not possible to command a bunch of people to be disciplined and then guarantee that that's going to work. Right.

Can't use it to validate your personal problems, unfortunately. That sounds... that sounds like... do you wanna share it, John? No. I actually was just saying that, like, you can't use the excuse of, like, oh, I can't be more disciplined because they said it on Ruby Rogues. Oh, man.

Oh, if people always took everything that we say on here as ironclad advice, there are a few statements from earlier that I'd need to reject. But that's just the sign of a good podcast, that you wish you'd never done it. Oh, man. Is it? I will say that for myself, part of the reason I use automated testing is precisely because I do not trust my own discipline.

Because, you know, there are certain circumstances where... now, let me go ahead and say that I think unit TDD is very valuable in certain circumstances. I also don't see unit TDD as a panacea for software verification. I think there are other methods that we can use in addition, and I happen to also think some of those other methods are better suited to certain problems that unit TDD doesn't address. However, one nice thing about unit test driven development is that if I write the test first, then I code for the API that I want as a reader of the code, which is good, because code gets read many multiples of the number of times it gets written. And so I can be lazy at the point where I'm writing the test and write for the API that I want.

Then when I'm writing the actual code, I'm held accountable for an API that's relatively easy to read, as opposed to starting with an API that's relatively easy to write, which ends up being harder to read, which causes more strife over the course of the life of the code base. And that works precisely because I do not trust myself to go the extra mile and make the easy-to-read API without the accountability step of the test in front. And I imagine that my code would be harder to read if I weren't using that, where I can, to hold myself accountable to that API. So it's a perfect example of using a system precisely because I don't trust my own discipline. I cannot rely on discipline, even in the one-person system of myself, to make things work the way that I ultimately, like, theoretically want them to work, because in the short term, it's harder for me.
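As a concrete illustration of that write-the-test-first accountability, here's a minimal Minitest sketch. It's not from the episode; the Cart class and its API are invented for the example:

```ruby
require "minitest/autorun"

# Written first: the call sites I *want* to read later, before any
# implementation exists to tempt me toward an easier-to-write API.
class CartTest < Minitest::Test
  def test_total_applies_a_discount
    cart = Cart.new
    cart.add("widget", price: 10.0, quantity: 2)
    assert_in_delta 18.0, cart.total(discount: 0.1)
  end
end

# Written second, held accountable to the API the test pinned down.
class Cart
  def initialize
    @lines = []
  end

  def add(name, price:, quantity: 1)
    @lines << { name: name, price: price, quantity: quantity }
  end

  def total(discount: 0.0)
    subtotal = @lines.sum { |line| line[:price] * line[:quantity] }
    subtotal * (1 - discount)
  end
end
```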

Yeah. Well, the other reason that I do a lot of that is just that I mean, going back to the assumptions on how things work, I'm not gonna remember that next week. I mean, honestly, people are like, well, if I come back to it in 6 months, I'm like, if I come back to it in 6 days. Right? Mhmm.

And so, you know, by having some of these disciplines... just speaking to the larger idea here. Right? Yeah. If I can encode my assumptions, if I can make sure that the things that I care about are things that I am checking on, that's where it makes sense. Mhmm.

What's the adage? The more you cuss about a line of code and how inscrutable it is, the more likely it becomes that you wrote it. That is so true. Git blame. Oh, crap.

Yep. Yep. And then the other half of the time, it's, well, look, John did it. And then you go and you look at that commit, and he ran the linter and it changed the indentation. He didn't actually put the code on that line anyway.

I've got a theory that linters are making bugs harder to find. Oh. Because back in the day, you used to be able to do a side-channel attack on most code bases. If you found, like, different indentation and nonstandard formatting and stuff, I'd just kinda zero in on that. You go, ah, this person can't be bothered to put a space before a curly brace.

So chances are they're a total loser... and this is where the bug is. I'm like, you know, if it looks all over the place, then I'm like, ah, there'll be bugs here. And I just found this is really helpful for assessing code quality. Can people be bothered to indent properly?

But now, with the linters and the RuboCop and everything, everyone's code looks the same. So you don't have that kind of meta attack, that side-channel attack, to spot the dodgy bits of a new code base. Am I off on one there, or is this a real thing? Now you have to read the code during code review, is what you're saying? So your linter is eliminating some of your priors for where your issues in the code might be.

I think the linter is solving the easy bugs. Like, you've misspelled a variable name. And... Yeah. You know, you've got RuboCop picking stuff up, like, this variable is never ever used, and that's it. But by removing those low-hanging bugs, what you have instead is a kind of faceless wall of perfectly statically analyzed code.

And it just seems like the bugs are getting harder. That's it. Bugs are getting harder, and linters are to blame. That's exactly what we wanted it to do. We wanted it to take away all the easy bugs so that we can work on the hard ones.

So now you're complaining that they're all hard. There's no easy ones for you to point out. Yeah. Some people just can't be satisfied. Okay.

So getting back on topic, I actually wanted to revisit a thing, and I swear I'm not trying to set you up for failure. I just felt like we should probably address this. But you talked about your three strategies earlier and how the sort of default one for us is the standard strategy. Try out the place that I think has the bug, then go to the next most likely place, and so on. Right?

Until I eventually find it. At some point, you sort of have to, like, bail out of that and be like, I'm not getting anywhere. I should try a different strategy. Do you have a sort of rule of thumb, even if it's not perfect, for, like, when you start to bail out? I don't think that we actually called that out.

Yeah. Totally. When I am trying the same thing multiple times, even though I already saw it didn't work, that's usually when. Like, if I haven't done it by then, that's the time. So you wait until that point of anger? For me.

Yeah. Exactly. When you've got, like, the orange face emoji as your actual face. That's fair. I don't know.

When do you usually bail out? How do you know when to bail out? I don't. But I'm big into timeboxing. So if it's just something where I start out in the first place saying, I literally have no sense of where this thing is...

Right? Then I'm just like, well, I'll give myself, like, 30 minutes maybe. Sometimes I'll give myself, like, 5 or 10 minutes. Right? So, like, play guess and check, and then I, like, jump into something else. Mhmm.

But if it's something that... like, I feel like usually that process happens for me when I think I know where something is, and then I start down the rabbit hole, like, with high confidence that I know exactly where this problem is, and then I just keep discovering that I don't know where this problem is. In that case, it usually happens that, you know, I either need to, like, get up for a bio break or to go eat or something, and then I'm like, I've been doing this for a long time. I should probably, like, stop. Yep. Yeah.

I run into similar. And another thing that tends to happen to me is that I will be just banging my head against a problem with no luck for some extended period of time and then convince myself somehow to put it down and walk away. And in the time when I've walked away, it's, like, I guess, on a background process somewhere. And then it occurs to me, oh, I haven't tried whatever this other thing. Maybe I should do that instead.

And it happens so frequently that now I get into these mental battles with myself where I can't figure something out, and I wanna keep working on it because, like, the more it doesn't work, the stronger my resolve grows to get it to work. And then I have to convince myself to walk away, because, for some reason, the more things don't work, the more determined I become that the next thing is gonna work, which doesn't match up with the data at all. I've gotten better about it, but it used to be a real problem. What about caffeine and alcohol? Oh, man.

So I do like my coffee in the mornings. I don't know if it makes me a better programmer. I have noticed that if I try to, like, mess with the system after about 5:30 PM, I'm probably just gonna end up breaking it in some kinda way where I have to come back in the morning and start from, like, behind where I would have started if I'd just stopped at 5:30, because I have to fix whatever I broke after 5:30. Like, the best thing that's gonna happen if I commit after 5:30 is I'm gonna have the system back at where it was when I started working, which is an issue. Interesting.

I find that sometimes I need both. Caffeine and alcohol? The caffeine to motivate you to find a bug, because it's always, like, you know, the big ones, always the ones you don't wanna look at. Mhmm. Right?

So you get the caffeine to rev you up. And then eventually, you reach the point of failure where you can't find the bug, and then the alcohol lowers your inhibitions, and then you start just trying crazier stuff. I like that idea as an inhibition inhibitor. I mean... Interesting. I wonder.

That's how it gets depicted in pop culture a lot too. Right? What is it about these brain studies where, you know, people can spot spelling errors easier when they're tired, or they kind of impinge their brain and suddenly they can kinda spot, like, the word "the" appearing twice in a sentence more easily? You heard of that kind of stuff? I haven't heard about it, but I believe you.

I'm sure this isn't something I've made up. But they kind of do something to people's brains: either they make them very tired or they give them something. And then spotting single-character spelling mistakes is easier, because their brain is no longer functioning on that higher level. That kind of reading is no longer kind of speed reading. It's kinda doing one step at a time, and then certain activities become easier.

So, this is an idea I really take to heart in my 4 AM banging-the-head-against-the-wall bug hunting binges. I wonder if that speaks as well, in part, to why pair programming works: you can have one person focused on the overall strategy and code flow, and then the other person's focused on, like, is this word in there twice? Does this match the API as it currently is? Are we using this variable? You could sort of employ two different levels of thought at the same time that way.

I don't have an answer one way or the other. I was just gonna say the way that I always thought pair programming worked is you had somebody writing and then somebody reading it, and the person reading it was like, what the heck am I reading? And I always felt like that sort of is how I saw it working, but I'm sure there's many things at play. It just works. I just trust it.

So I did have a question. I think this is probably my last one for the day. So this is... you've kind of touched on this before. You said that you're sort of interested in creating language around debugging, pedagogy, as you said earlier, things like that. It seems like you are sort of interested in this space. And I mean, I take your point.

Like, I get it. Like, you're right. Like, I literally have no words to describe my process. Like, today, I have some new words and some new systems, and that's really cool. Are there things that are, like, missing from here?

Like, are there things that we need to do to make this work? I was thinking as you went along that this seems like a societal shift. You mentioned earlier... actually, I might be answering my first question and opening a new one. So you said earlier, hey, we actually, like, basically reward people that don't debug well, by just rewarding people that are in build mode all the time. So are there, like, problems across the board that we should be addressing?

How do we get to a better place? It's actually a good question. It is a good question. I think that there are a couple of different pieces to it, and one of them might be about the incentive structure that we use for measuring our... I don't know if value is the word... or measuring our work as programmers, or maybe as anyone, and finding ways to model that so that it doesn't necessarily feel like a waste of time. And there's a psychological component to that, and there's an actual organizational incentive component to that.

And then the other part would be learning more about bugs. So I'll address both of those. The first one being the organizational incentive part: how do we go about, you know, providing accolades and career-oriented rewards to someone who is able to figure out and resolve this, like, thorny issue that nobody else has managed to figure out, or that has existed for a long time? How do we promote that to the level of, like, getting this feature out on time? Because in essence, they do the same thing for the goals of the software.

We want working software, which means the feature has to be there, but it also means that the feature has to be working. And we wanna make sure that we are recognizing and rewarding both of those things. And maybe that's at the organizational incentive level. It's something for engineering managers and directors of engineering to think about. On the psychological level... so, I'm a big fan of... and this is my mom.

Growing up, my mom really loved mystery books, and she kind of passed that on to me. She even wrote some mystery novels. She's very, very into the mystery. And I had various plans for what I wanted to be when I grew up: a detective, a spy, various types of things like that. I really did.

It's a whole story. I'm not a spy. I already know somebody was about to ask me that. So, no, I'm not. It seems like...

Exactly what a spy would say. That's immediately what people say when I tell them I'm not a spy. I'll let you come to your own conclusions about whether I'm a spy or not. But yeah. One of the things that has kind of helped me psychologically make peace with long and onerous debugging processes is to imagine that I'm a detective in those scenarios, and this is a case.

And, you know, which cases do detectives get rewarded for solving? The really hard-to-crack cases. So regardless of whether or not there's any actual organizational incentive for me to go after this bug, I get to picture myself as, like, Detective Chloe Decker or whoever, and think of this as solving a case, which is helpful for me as an analogy that makes me feel like, you know, I'm getting something valuable done. I'm getting something cool done. This isn't like a yak-shaving type of task where, you know, it just has to get done, and that's the only reason we're doing it.

There's something for me to learn here. There's a skill for me to develop here. I'm gonna be a better programmer at the end of this because I've resolved this bug. So that's the organizational incentive and psychological side of it. And then there's coming up with better ways to understand and categorize bugs.

Because bugs are something that we've largely approached by an individual guess-and-check method, there's not a lot of good systematic research on bugs: where they come from, how they happen, what they look like. There are papers about, like, does static type checking reduce bugs? Does this reduce bugs? Does that reduce bugs? I'll go ahead and say the denouement of that is that a lot of that research indicates that the two things that, quote, unquote, reduce bugs are code review and developers getting adequate sleep, which is an interesting result.

But the thing is that as you look into these studies, and I'll just say this, looking into computer science studies can sometimes be a depressing endeavor, because as we look into them, we realize that, like, the sample size isn't anywhere near big enough to indicate a statistically significant difference a lot of the time, or there are multiple comparisons going on. The statistical rigor doesn't tend to be very good. And in particular, in the case of debugging studies or studies of bugs, we don't have a good handle on what constitutes a bug, and so what happens is researchers come up with proxies to indicate what a bug is in a code base.

And it'll be things like, and this isn't a fake example, and I understand why they did it. I get that it's really hard to figure out a way to say what is a bug. And you need, once again, a lot of samples to be able to do any research. So how do you find a lot of samples in a case where you're not really sure what a sample is? You try to come up with something that represents a sample.

And so they'll say, we looked at these 5 code bases, and we decided that any commit that changes two lines represents a commit where there was a bug. Like, what kind of proxy is that? I don't think that's a particularly accurate proxy necessarily. There might be bugs where you have to change a whole ton of lines of code, or there might be a situation where, like, I don't know, we changed deployment hosting providers, and that's one line. And it wasn't a bug.

We just, like, changed it. You know? It's an embedded assumption. We're assuming that a two-line change equals a bug.

Okay. What's the accuracy of that assumption? What are the false positives on that? What are the false negatives on that? So we really just don't have solid research on bugs in that way because in order to do that, it's really, really tough to go back and retroactively figure out where the bugs were.
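
To make concrete how crude that kind of proxy is, here's a minimal Ruby sketch of roughly what it amounts to; the two-line threshold is the studies' assumption as described above, not a recommendation, and the script just shells out to git in the current repository.

```ruby
# Rough sketch of the "two changed lines = bug fix" proxy described above.
# Walks `git log --numstat`, totals line churn per commit, and flags
# commits whose total churn is exactly two lines.
log = `git log --numstat --format='commit:%H'`

churn = Hash.new(0)
sha = nil

log.each_line do |line|
  if line.start_with?("commit:")
    sha = line.chomp.delete_prefix("commit:")
  elsif line =~ /\A(\d+)\t(\d+)\t/
    # numstat rows look like "<added>\t<deleted>\t<path>"; binary files show "-"
    churn[sha] += Regexp.last_match(1).to_i + Regexp.last_match(2).to_i
  end
end

flagged = churn.select { |_sha, lines| lines == 2 }
puts "#{flagged.size} of #{churn.size} commits match the proxy"
# A one-line hosting-provider change or a fifty-line bug fix both defeat it.
```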

In my opinion, or in my perspective at the moment, to get a good read on bugs, you have to have developers logging, at the time of resolving a bug, what the bug is. How much time did you spend on this bug? What did the problem end up being? What are all of the things that you tried in the process? I think that a really solid, really insightful, illuminating study on bugs would require that kind of data collection, and that's a really, really tough thing to do.

But, I mean, I'd love to do that at some point. I would like to get a cadre of developers together who are committed to logging our bugs and how we resolve them and figuring out what patterns we can find in that. What took a really long time? What didn't take a really long time? What prior experience really helped me out here?

Where am I translating skills from one code base to another code base, and stuff like that? The data collection process would be super intensive, but I think it's something that hasn't been done and something that I would love to do and/or see done. But I think that's where additional terminology would come from. Yeah. Cool.
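
As a sketch of what that kind of at-resolution logging might look like, here's one possible shape; the field names are guesses at the questions listed above, not an established schema.

```ruby
require "json"
require "time"

# One developer-authored record per resolved bug, written down at resolution
# time rather than reconstructed later. All field names are illustrative.
BugLogEntry = Struct.new(
  :summary,            # what the bug turned out to be
  :time_spent_minutes, # how much time was spent on it
  :root_cause,         # what the problem ended up being
  :attempts,           # everything tried along the way, in order
  :prior_experience,   # skills or knowledge that transferred from elsewhere
  :resolved_at,
  keyword_init: true
)

entry = BugLogEntry.new(
  summary: "intermittent 500s on checkout",
  time_spent_minutes: 190,
  root_cause: "stale cache key left over from a model rename",
  attempts: ["reproduced locally", "bisected recent commits", "diffed cache keys"],
  prior_experience: "similar cache-invalidation bug on an earlier project",
  resolved_at: Time.now.iso8601
)

# Append as JSON Lines so entries from a whole cadre of developers aggregate easily.
File.open("bug_log.jsonl", "a") { |f| f.puts JSON.generate(entry.to_h) }
```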

Well, I've gotta push us toward picks because I've got a hard stop in about 15 minutes. This has been really enlightening, and, yeah, if people want to participate in the conversation going forward, how do they get a hold of you? Oh, man. So my name is Chelsea Troy. My site is chelseatroy.com.

My email is chelsea at chelseatroy.com. My Twitter is heychelseatroy. I keep it consistent on the name as much as I can, but, yeah, I mean, I'd love to chat with people. Those are probably the places where you would find me the most. I do have some blog posts about debugging already on the site.

I'm happy to provide a link to the category or what have you for the show notes if that's helpful. But, yeah, I'd love to talk to people about this kind of thing. I'm on a bunch of Slacks too. It's possible I'm on a Slack that you're in; my handle is Chelsea Troy because that's what it is on all Slacks.

So yeah. Awesome. Yeah. If we can get links to those in the chat, then we'll put them in the show notes. Alright.

Well, let's go ahead and do some picks. Luke, do you wanna start us off with picks? I've got a pick. I've been working on a giant CCTV system that backs up about 140 gigabytes of video data a day, which is quite a lot. It's all based off a Mongo database, and the Mongo database likes to corrupt itself on a regular basis.

We found that bug, but we wanted to move some data off the server onto a kind of off-site backup. We had the idea of sticking it in S3 or Google Cloud object storage. And my colleague used rclone, which I had not really heard of, but it was so fantastically easy to get a database blob in. We're talking this was only 100 meg, but any size works; it's quite robust. It gets data into Google Cloud storage and supports all kinds of different backends.

It was so easy to use and set up. My pick for this week is rclone.org. If you wanna get data into the cloud from a development environment or a server, it is really, really good. It's fantastically easy. Awesome.
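
For anyone who wants to try the same workflow, here's a minimal sketch of an off-site backup step driving rclone from Ruby; the remote name and paths are hypothetical, and you'd define the remote first with `rclone config`.

```ruby
require "open3"

# Hypothetical local dump directory and rclone remote:bucket; substitute your own.
source      = "/var/backups/mongo"
destination = "gcs-offsite:cctv-backups"

# `rclone copy` only uploads files that are missing or changed, so reruns are cheap.
_stdout, stderr, status = Open3.capture3("rclone", "copy", source, destination)

if status.success?
  puts "Backup copied to #{destination}"
else
  warn "rclone failed: #{stderr}"
end
```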

John, do you have some picks for us? I have two this week. I think this is probably more well known in the gamer community or whatever, because I think that most of us in the developer community are using laptops. But I have my laptop and also a desktop machine, so I use a mouse. And I got one of these gigantic mouse pads that, like, goes under your mouse and under your keyboard and just takes up everything.

And I don't even know how to describe, like, how much better it is than having a little, you know, few-inches-wide square for a mouse. It's just a completely different experience. Like, I don't run my mouse off the edge of my mouse pad constantly. It's great. I highly recommend it, and you can get them fairly inexpensively too.

I mean, they have plenty of expensive ones, but you can get inexpensive ones as well. I got a Razer one, and then I have this Corsair one that I got for, like, $5 because it was on sale. So, yeah, I totally recommend getting, like, a gigantic freaking mouse pad. They're awesome.

I second that. I've got a massive one, and it's also wonderfully absorbent of anything you spill. Alright. Useful. So there's that.

And then the other thing that I was thinking about this week: I was just reminded because I got dragged into it by a friend. I've been a member of this Discord server for a really long time, and it's apparently become popular. I'm really into mentoring and stuff, and a friend was like, hey, you should come join the server; it's just people learning how to code, you can answer their questions occasionally, blah blah blah. That'd be really cool.

So I've been in the server for, like, I don't know, a few years at this point. It's called The Coding Den. It appears to me on the surface to be, like, filled with mostly college kids and stuff. But if you're into mentoring, you know, they could probably use some more mature people to do some of that. There are some of those people there.

So if you're, like, into that, I'm just throwing this out there. I found out the other day that it's apparently, like, one of the most populous Discord servers on Discord too.

For whatever reason. But, yeah, there's a whole bunch of people on there asking questions and helping other people. That's what I got. Nice. I don't know if I'm more mature.

I'm more seasoned, anyway. Seasoning's valuable. There we go. Right? I like spicy food.

Yeah. There we go. I've got a few picks that I'm gonna push out there. Lately, I have been listening to The Wheel of Time books on Audible, and they're terrific. And I don't remember who the narrators are, but they're also pretty terrific.

There are two of them, a man and a woman, depending on which point of view you're getting the story from. Right? If you're getting it from a male character, then it's the male narrator, and if it's a female character, it's the female narrator. They're really well done.

I really, really enjoy those, so I'm gonna pick those on Audible. And then, yeah, I've been onboarding with a company. I'm not gonna announce where I'm working, but things kinda slowed down with the podcast network to the point where, in order to pay the bills, you know, I had to go find some other work. Anyway, the onboarding process has been excruciating, but they did send, like, a box full of hardware with a laptop and stuff.

And they had, like, this really inexpensive Logitech wireless mouse, and I really like it. So I'm gonna pick that. I'll find it on Amazon and put a link in the show notes. And, yeah, those are my picks.

And then also, of course, most valuable dot dev. We're gonna be doing monthly Q&A calls, probably get an expert or one of my cohosts from one of the shows on to talk about stuff and teach us stuff. And then I'm putting together a summit in December, and I'm gonna have a whole bunch of experts essentially get on. The idea is, you know, if you woke up tomorrow and nobody knew who you were, and you're kind of a mid-level developer who was confident at their job and didn't really know what to learn next, what would you do to become the most valuable developer on your team?

So it's just gonna be a series of interviews, but that's what I'm looking at doing for that summit. And if you wanna just get the videos as they come out, that'll be free; they'll be free for a day or so. And then if you want, you know, longer access, and access to a Slack channel and scheduled Q&As and stuff like that with them in Slack, then you can buy a ticket. So that's how I'm looking at doing that.

That way, the folks who are in areas where they can't afford it, or if you're between jobs, I mean, you can just get a free ticket and show up and participate. So, yeah, that's what I'm looking at doing there. Nice. Chelsea, do you have some things you wanna shout out about?

Yeah. Absolutely. I got some picks. So the first one is a book that I've been reading recently. It's called The New Education by Cathy Davidson, and it is chiefly about the American higher education system: where it came from, why it is the way it is, and what we might potentially change about it.

Now, I'll issue a reservation on this book, particularly with regard to tech. Some of the tech claims that it makes are not exactly accurate, but with regard to the teaching portion specifically, there are a lot of case studies and examples in this book of teachers who've done really interesting things in their classrooms to engage the students and help them participate in meaningful projects that make positive changes in the world, in areas the students care about, while they're in the class. It's really cool. I took a whole bunch of notes from it. I'm gonna be doing some stuff with the exercises from that book, I think, as experiments in my mobile software development class this fall.

So I'm pretty excited about that. Again, it's called The New Education. And I guess the other thing, maybe this is too cliche for the Ruby Rogues podcast, but at RubyConf 2020 this year, remote edition, I am going to be co-leading a workshop about software maintenance and, like, risk mitigation mechanisms that I'm pretty excited about. The case study that we're doing is a three-device system where some software all collaborates together to help a parrot emergency room keep track of all of its parrot patients. So parrots' lives are gonna be on the line, and mitigating risk in the software is gonna be in your hands as participants.

I'm really excited about it, and I hope that you will join me there in November. Awesome. Looking forward to it. And thanks for coming, Chelsea. This was really fun.

Yeah. Absolutely. I had a great time. Thanks for having me. Yeah.

Thanks for coming. Alright, folks. We'll wrap this one up. Until next time, Max out.