Learning, Testing, and Mentorship: Building Autonomy and Confidence in Python Development - ML 167

Show Notes

Today, Ben and Michael dive into a compelling discussion on the intricate dance between challenges, feedback, mentorship, and growth in the field of software development. In this episode, Michael shares their journey of overcoming the pains of independent problem-solving before receiving effective guidance. As we explore their experiences with Ben, they uncover the vital importance of openness to feedback and the profound value of peer review in refining solutions.

They delve into technical aspects, including Python's Pytest framework for unit tests and the delicate balance between complexity and simplicity in testing for maintainability and readability. Additionally, they touch on Michael's hands-on learning curve, tackling unfamiliar concepts such as RAG, embeddings, LLMs, and Git development, all while managing significant time constraints and social commitments.

Moreover, Ben shares his mentorship philosophy, likening it to military training—pushing mentees to their limits without prior warning to foster resilience and self-improvement. They also discuss the importance of documentation, bug bashes, and the fine art of balancing integration and unit tests to ensure robust and thorough software.

Join them as they explore the journey from initial struggle to increased autonomy and confidence, using real-world examples of testing gaps, code complexities, and the powerful impact of daily feedback. Whether you're a seasoned developer or just starting your tech career, this episode is packed with valuable insights to enhance your learning and development process. So, stay tuned and dive right in!

Transcript

Michael Berk [00:00:05]:
Welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Berk, and I do data engineering and machine learning at Databricks. And I'm joined by my cohost,

Ben Wilson [00:00:14]:
Ben Wilson. I work on quarterly plans at Databricks.

Michael Berk [00:00:19]:
Today, we are not joined by a guest. We are joined by myself and Ben. And what we're gonna talk about is a really formative experience for me, and we're gonna walk through basically how Ben approached teaching me and sort of mentoring me through a very challenging project. And this project was building the LlamaIndex flavor for MLflow. So if you're not familiar, MLflow manages machine learning model life cycles, so model versioning, experiment tracking, and it does some fancy gen AI stuff like tracing. And in MLflow, there are things called flavors, which are essentially wrappers around common projects. So scikit-learn, TensorFlow, Keras, PyTorch, you name it. And I just built the flavor for LlamaIndex with some help from another engineer on the MLflow team.

Michael Berk [00:01:11]:
And this project was a big learning experience because I wanted to learn basically how software is done, specifically at Databricks. And I'm not gonna lie. It was hell. There were, like, some really challenging times. There were some very long Saturday nights. Maybe not so much Saturday nights, but there were some long Sunday nights, long Friday nights. And I ended up learning a ton. And we were just gonna go through and sort of reflect on what things I found valuable in the learning process and then also discuss Ben's approach to mentoring and teaching.

Michael Berk [00:01:46]:
And, yeah, hopefully this will give some value. It'll give a little window into how the Databricks engineering culture works. It might not be the best fit for you or your organization, but at least it'll be a very honest look at how one organization does teaching. So, Ben, over to you. What questions do you have?

Ben Wilson [00:02:08]:
Yeah. I mean, before we started recording, the question that came up, we were like, what are we gonna talk about today? I just asked. I was like, well, what did you learn doing this process? Because, I mean, it just released this past Monday. If you're an MLflow user and you're really curious about Gen AI applications, it's super awesome. The library's, like, really slick. And, you know, what Michael and Yuki built is awesome. I remember bug bashing it about 3 weeks ago. Or the first one was, like, a month ago, and then I did, like, a bug bash 3 weeks ago.

Ben Wilson [00:02:42]:
I was like, man, there's there's not a lot of issues with this at all. So, like, really having to dig in and and really try to find something to give you feedback on. I was like, this just works. This is awesome. But, yeah, you said that you, like, wrote down a a list of things that that you learned. Let's go through the list, man.

Michael Berk [00:03:02]:
Okay. So I have historically been a big journaler and less so in the, like, recent year or 2, but I still wanted to reflect on this experience because putting words onto paper really helps my brain stop thinking about it, and it also helps me conceptualize and sort of harden the the learnings that I had. So, I think going through this list in one shot might be a bit verbose. But at a high level, I came in, being able to write Python code. I was pretty okay at implementations. I would I would typically do 1 or 2 refactors for anything more than, like, 50 lines. Like, I couldn't just bang it out and have it be flawless. Now I actually sorta can, which is cool.

Michael Berk [00:03:48]:
And the thing that I was really missing is the software engineering components. So for instance, there's a module in Python called inspect. It allows you to look at an import path, a signature, that type of thing. I learned about serialization and deserialization. Like, what does that mean when you take an in-memory thing and write it to something on disk? So, like, convert an object to JSON or an object to a pickle file. I learned about Git development. Like, PRs, I definitely was comfortable with that. But what's a feature branch? How do you stack PRs? How do you review? What's the tone? Arguably, the most valuable thing I learned was test coverage intuition.
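
For readers following along, here's a minimal, illustrative sketch of the two ideas Michael mentions: using Python's inspect module to look at a callable's signature and origin, and serializing an in-memory object to JSON or pickle and back. The greet function and config dict are made-up examples, not code from the MLflow flavor.

```python
import inspect
import json
import pickle


def greet(name: str, punctuation: str = "!") -> str:
    return f"Hello, {name}{punctuation}"


# inspect lets you examine a callable at runtime: its signature and the module it lives in.
print(inspect.signature(greet))           # (name: str, punctuation: str = '!') -> str
print(inspect.getmodule(greet).__name__)  # e.g. '__main__' when run as a script

# Serialization: turning an in-memory object into text/bytes on disk.
config = {"model": "demo", "temperature": 0.7}

with open("config.json", "w") as f:
    json.dump(config, f)    # human-readable JSON

with open("config.pkl", "wb") as f:
    pickle.dump(config, f)  # Python-specific binary pickle

# Deserialization: loading it back into memory.
with open("config.json") as f:
    restored = json.load(f)
assert restored == config
```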

Michael Berk [00:04:30]:
Like, what do you think needs to be added to your test suite so that you are guarding against edge cases that you might not be super sure of? But, likewise, how do you do that without literally testing all of Python and all of LlamaIndex? And then more importantly, how do you, like, handle things that are external dependencies? So if you have something that does an API call to an external service, how do you mock that? How do you patch that? On the technical side, all of pytest's patching, mocks, fixtures, conftest, parametrization. I learned about Spark UDFs. That was a weird one.
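
As a rough illustration of the mocking and patching Michael is describing, here's a small pytest sketch. The WeatherClient class and fahrenheit function are hypothetical stand-ins for "something that does an API call to an external service", not anything from the actual flavor code.

```python
# test_weather.py
from unittest import mock

import pytest


class WeatherClient:
    def fetch_temperature(self, city: str) -> float:
        # Stand-in for a real HTTP call; should never run inside a unit test.
        raise RuntimeError("network call attempted during a unit test")


def fahrenheit(client: WeatherClient, city: str) -> float:
    return client.fetch_temperature(city) * 9 / 5 + 32


@pytest.mark.parametrize(
    ("celsius", "expected"),
    [(0.0, 32.0), (100.0, 212.0), (-40.0, -40.0)],
)
def test_fahrenheit_conversion(celsius, expected):
    # Patch the external dependency so the test exercises only our conversion logic.
    with mock.patch.object(WeatherClient, "fetch_temperature", return_value=celsius):
        assert fahrenheit(WeatherClient(), "Paris") == expected
```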

Ben Wilson [00:05:07]:
And

Michael Berk [00:05:09]:
I guess those are the core, like, base Python things and and software things.

Ben Wilson [00:05:14]:
Yeah. That's the interesting thing to kinda bring up about the difference, because, you know, you currently work in a role, and I used to work in a role, and then at previous companies, I was doing this stuff, where you're using these libraries to solve some business problem. Like, hey, I need to move data from this place to this place and do all this crazy stuff to it, join all these tables, and then I'm gonna take something with that data and build a model out of it, and then I'm gonna do my validation and try to tune it. And when you're cranking that stuff out, even if you're adopting best engineering practices with that development workflow, at the end of the day, the only thing that you really care about is, like, the results. So a lot of times, you're doing an integration test to say, like, hey. Before we release this to production, let's make sure that the predictions look legit, that there's not crazy stuff happening.

Ben Wilson [00:06:12]:
So the testing mentality for that is way different than if you're building framework code, like, you know, pure software engineering code. And when you're doing applied development, you're not concerned so much about doing stuff like type validation, and you don't have to use those parts of the language that much. The frameworks that you're using handle that for you, as they should. So I'm curious to hear your perspective on what was it like learning those things? Was it an epiphany of, like, I didn't know that this is part of the core language, or I didn't know that, like, somebody had thought this up to build all of this? Or was it more like, I know what I need to do to, like, solve this problem, so I'm gonna go and, like, write my own implementation of it. And then later on, like, retroactively, you find out by talking to one of us, like, hey, dude, this already exists. You can, like, use this library that exists within the core language.

Michael Berk [00:07:16]:
Yeah. Yeah. So what Ben potentially is referring to is I built the inspect module, and then he was like, go look at the inspect

Ben Wilson [00:07:23]:
module. Fair.

Michael Berk [00:07:25]:
Fair. I built, like, one method from the inspect module. What was it like? The first thing that came to mind was, like, realizing the magic of the layering of software. And this might be sort of a philosophical aspect, but I remember one of the first times that I got an appreciation for math. And I'm actually, like, comically bad at school, but I think the intuition behind a lot of the subjects in school is really fascinating. So for instance, math is built off of 1 equals 1. And if you stop and really ponder that, that's insane. It's led to all these incredibly complex and abstract theories that are really valuable.

Michael Berk [00:08:16]:
So, likewise, with software, as you said before, I was a practitioner that wanted data to go from point A to point B fast and efficiently. And now it's like I have a greater appreciation that there were building blocks that allowed for those libraries to come to fruition. I don't have a one-equals-one level of appreciation, but it was really cool to see, alright, if I have pandas ETL, I can go and look at the pandas source code and say, alright, it's these low level transformations. I now know how imports work. I know how the testing around pandas works, so that I could see, alright, that this function does what it's supposed to do. I have a greater appreciation for how it was built. So there's, like, sort of a beauty to that, and, not to go too deep, but it's almost like a transcendence to that.

Michael Berk [00:09:08]:
Like, it's it's I really feel like I'm standing on the shoulders of giants, and I really don't matter at all.

Ben Wilson [00:09:16]:
It's not that. When you're using those frameworks, it is a humbling experience when you realize, like, wow, a lot of really smart people spent a lot of years working on this stuff, and they built off of work that existed before that in some other language or some other framework. And it all compounds on itself. That's standing on the shoulders of giants. That's how everything in science in general works, in the world. And in applied science, which is, you know, engineering. But by knowing those references, are you like, okay, I can go into the pandas source code, and I can see how this transformation works, or, like, how does it know how to manipulate this one column of data in this row construct that this thing is an abstraction over? And you start realizing, like, okay.

Ben Wilson [00:10:12]:
I see how that works, but then you go into the test suite because now you're thinking about that. I'm sure, like, how do they test this? Oh, I just wanna see and get, you know, some ideas maybe. You look at their test suites, and you're like, alright. Did you get that realization while you're doing this stuff? Because you mentioned, like, hey, not testing everything in LlamaIndex, not testing everything in, like, core Python. Is that where you got that realization of, like, yeah, somebody else is already testing this. I don't need to test this.

Michael Berk [00:10:46]:
For sure. Yeah. It's it's really challenging to fully grok what is reliable and what is not. And I'm still pretty bad at it, but this project was really valuable to help me understand where to draw the lines for what I can take as truth and what I can take as stable and what I need to guard against.

Ben Wilson [00:11:09]:
Yeah. And it's something, in my experience, that just gets more intuitive the more you do it. Yeah. And with peer feedback and with, like, self reflection, you start realizing where that boundary needs to live with respect to what you have implemented and what you need to test, so that you're not encroaching on testing something that is completely irrelevant to actually test. And I've worked with teams before in the past, like, customers, where you go into the team and you're like, alright, this is a test-happy team, which is cool. Like, they wanna make a reliable product here. And you look at the tests, and you're like, alright.

Ben Wilson [00:11:57]:
90% of everything you're testing is, like, core Spark. You're validating whether Spark can, like, join tables. You don't need to test that. There's a whole open source community that is testing that on every PR that's filed on that repo. And every time it's released, before release, there's a whole suite of integration tests that are running to make sure that it functions correctly. That doesn't mean that the system is perfect. Whatever tool you're using, you know, humans are writing this code, and we make mistakes. We create bugs.

Ben Wilson [00:12:35]:
We create regressions, you know, because we don't have perfect test coverage. It's a myth to even have something like that. So there's always gonna be something that is gonna come up that's gonna break your stuff that you're not actively testing, but your tests should be written in a way where that becomes apparent from the test failure. And I know you've been in a lot of our stand ups. You've seen a lot of the stuff that we talk about with, like, maintenance where, hey, this test suite failed, and then you look and somebody's got a PR fixing it within 30 minutes after stand up is done. Because the tests are written in such a way that diagnosing that is relatively simple, because we understand that mechanism of, like, okay, I can modify this one test real quick locally to debug it, or just extract this one portion of the test, run it in a local REPL, and see, okay, with this version of this library that just got installed, does one equal 1? Oh, no.

Ben Wilson [00:13:42]:
It doesn't. Okay. We need to, like, pin this version or, like, file an issue in their repo saying, like, hey, I think you just released some broken code. But if you're not testing, then you're not gonna catch that. And if you're over testing, you're still gonna catch that, but you're gonna have an additional test that's running that is gonna fail maybe once every 5 years. And, like, with that underlying library, if you're doing, like, hey, I wanna make sure that I can join these 2 tables in pandas and make sure that, like, the join is correct, and I'm testing that in MLflow.

Ben Wilson [00:14:22]:
We don't have tests like that because we expect that the team that's managing pandas is testing that, and they do. And I've never seen something like that break. So, yeah, you gain, like, that intuition over time, I think. Yeah.

Michael Berk [00:14:42]:
And to rapid fire some tangible takeaways, because I usually hate it when people are like, oh, I'm so good at this now, but I'm not gonna tell you how it works. So for testing specifically, some of the things that I learned were, first, intuition actually is really important. It's hard to put a bunch of rules to it. That said, here are a bunch of rules. Don't test open source projects. Instead, choose the libraries that you're dependent on very carefully. The classic computer science edge cases of, like, is the parameter empty? Is the parameter None? Is the parameter the right type? Those can really bog you down with complexity, so instead use context-aware testing. So for instance, knowing where this utility fits in and knowing the constraints of what parameters are going into this utility, testing those is typically a lot higher ROI instead of testing everything under the sun that could go into this utility.

Michael Berk [00:15:44]:
Integration tests are really valuable, and I think I got the highest ROI results from those. Unit tests are great to catch errors early, but, typically an integration test, if it runs end to end, something is generally pretty right. And now you obviously can't test every possible combination of, inputs to your integration test. So that's where sort of a very good coverage of a specific component that's a unit test, that's where it's valuable. But, integration tests, I actually started with writing them and then sort of would reverse engineer some of the unit tests. And that was super valuable just to see if I was on the right track.

Ben Wilson [00:16:29]:
Mhmm. Yeah. One caveat to what you just said. Your intuition about that testing is a direct byproduct of the feature that you're working on. So if I had given you a task, which you might get in the next couple of months, like, hey, we need this, like, abstract utility that does this thing so we can refactor the code base and make it work or something. At that point, you're not writing any integration tests because you can't. Like, what are you gonna integrate with in order to test that full end to end? It's a utility that's used in 800 different places in the code base.

Ben Wilson [00:17:09]:
So at that point, you are doing that sort of traditional, you know, unit testing approach, where you might not wanna put some sort of safety guard at every point of use right before you use this function to be like, hey, verify that this is null, and if it's null, then throw an exception right here. You wouldn't put that before using the utility. You put it in the utility if you know that that's gonna break the usage of the utility. And then in your unit test, you're verifying that it throws with, like, all the different ways in the language that you can express falsiness, but also doesn't throw for, like, a Boolean false or something. So there's some nuance to, like, how you would adapt different testing strategies, but what you were building was basically integration within another library. So that naturally follows.
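
Here's a toy sketch of the pattern Ben describes: the guard lives inside the utility rather than at every call site, and a parametrized unit test checks that it rejects missing or empty values without rejecting a legitimate Boolean False. The normalize_tag utility is hypothetical.

```python
import pytest


def normalize_tag(value):
    # The guard lives inside the utility, not at every point of use.
    if value is None or value == "" or value == [] or value == {}:
        raise ValueError("tag value is missing or empty")
    return str(value).strip().lower()


@pytest.mark.parametrize("bad", [None, "", [], {}])
def test_normalize_tag_rejects_missing_values(bad):
    # All of these "missing" falsy shapes should raise.
    with pytest.raises(ValueError):
        normalize_tag(bad)


def test_normalize_tag_accepts_boolean_false():
    # False is falsy but still a real value, so it should not raise.
    assert normalize_tag(False) == "false"
```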

Ben Wilson [00:18:08]:
And by the way, every flavor that I built in MLflow, I do the exact same thing. I actually write a unit like, I write a full integration test before I write any code. And that integration test will blow up because it's using APIs that don't exist. So I'll write that first, and then I'll go and be like, okay. I know these APIs need to be built. I'm gonna write those first. I'm like, hey. I need to log a model.

Ben Wilson [00:18:35]:
Okay. Here's my skeleton for that. I need to save a model. Here's my skeleton. I need to load a pyfunc version of that model. Here's, like, a very simple skeleton for that. And then you work towards making that test pass.
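
A rough sketch of that write-the-integration-test-first workflow might look like the following. mlflow.my_flavor and build_toy_model are hypothetical placeholders; the point is that the test defines the log/load skeleton before any of it exists, so it fails until those APIs are implemented.

```python
import mlflow


def build_toy_model():
    # Stand-in for constructing a real framework object (an index, chain, pipeline, etc.).
    return {"answer": "hello"}


def test_my_flavor_end_to_end():
    with mlflow.start_run():
        # These flavor APIs don't exist yet; the test is written first and blows up
        # until log_model / save_model and a pyfunc wrapper are implemented.
        model_info = mlflow.my_flavor.log_model(build_toy_model(), artifact_path="model")

    # Load the logged model back through the generic pyfunc interface and exercise it.
    loaded = mlflow.pyfunc.load_model(model_info.model_uri)
    prediction = loaded.predict("What does MLflow tracing do?")
    assert prediction
```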

Michael Berk [00:18:51]:
Heard. Yeah. That makes a lot of sense. It's a good caveat. And then there are a few other things, but those, I think, are some core takeaways. I guess one final thing that's worth calling out is tests really inform the granularity of your functions. And so when I first built the serialization and deserialization... so you have an in-memory LlamaIndex object. LlamaIndex, I don't think we alluded to this actually, but it's very similar to LangChain.

Michael Berk [00:19:18]:
It allows you to create vector stores, RAGs, LLMs, agents, those types of things. And when you look to take everything that is in the context of your Python code and put it onto a file so that it can be loaded in a new environment, that is a seemingly simple problem. But when I was writing that, I basically had it too high level, and then I was like, wait, I can't test this. Then I made it too low level, and it was like, I'm testing stuff that's basically core Python. And then I ended up settling on this middle ground where there's basically 3 layers of steps that I think a lot more accurately depicted what serialization and deserialization should look like in this context. So, yeah. It was really a useful learning experience about, like, what a function should look like.

Michael Berk [00:20:15]:
Like, how much should be in there? How discrete should it be? Because it can be too discrete, but it can also be too big and then hard to test.
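
Purely as an illustration of that "not too high level, not too low level" middle ground (and not the actual flavor code), a save path might be split into a few mid-level steps that can each be unit tested on their own:

```python
import json
from pathlib import Path


def extract_state(obj) -> dict:
    # Layer 1: pull the serializable state out of the in-memory object.
    return {"class": type(obj).__name__, "params": getattr(obj, "params", {})}


def write_state(state: dict, path: Path) -> None:
    # Layer 2: persist the state to disk; easy to unit test against a temp directory.
    path.write_text(json.dumps(state))


def save(obj, directory: Path) -> Path:
    # Layer 3: the high-level entry point users call, composed of the layers above.
    directory.mkdir(parents=True, exist_ok=True)
    target = directory / "state.json"
    write_state(extract_state(obj), target)
    return target
```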

Ben Wilson [00:20:22]:
Or impossible to test without... or impossible. Just, like, a sheer integration test. And that's the trade off with, like, code complexity and test complexity, or test run time complexity, where you're like, well, in order to make this more testable, I can write more tests and write, like, a lower level implementation, but what do I gain from that? Do I want, like, 10x the amount of code in my implementation and then 50x the number of tests? All of those take time. So you're now putting that burden on everybody who's gonna contribute to that code base going forward, because they have to wait for those tests to finish or pass. Right? And is it really the best use of resources? And that's really the question. And what do you get out of that? Is there a massive benefit to knowing, like, hey, I'm testing all of these different permutations of this low level API that I built in order to, like, handle serialization of all of these different types?

Ben Wilson [00:21:29]:
It's like, not really. But when you do the high level, it's like, yeah, the code is way easier, because, you know? But you might be exposing yourself to situations where a user or the maintainer of that tool that you're integrating with just makes a design change or is choosing to use this in a way that you didn't think they were going to. And now your implementation doesn't even cover that. So you basically release something that's not usable for this particular use case or will break easily.

Michael Berk [00:22:05]:
Yeah. An example is the first feature request we got yesterday was for supporting external vector stores. And I was well aware that this was a blind spot. I brought it up to Ben actually, like, a few hours before the feature was requested. And then I also had called it out, like, a month or 2 back in the design as well. And we just didn't have a note in the documentation that it wasn't supported. So the guy filed a bug saying, hey, this isn't working.

Michael Berk [00:22:30]:
What the hell? And then Yuki politely apologized, saying, oh, we don't support that right now. And he was like, well, freaking leave a note in the docs. Like, what the hell? I just wasted, like, 3 hours. And so, yeah, there are definitely gonna be blind spots, and you can't guard against them. But that's...

Ben Wilson [00:22:52]:
That was intentional, though.

Michael Berk [00:22:54]:
Like Yeah. So a 100%.

Ben Wilson [00:22:55]:
We just intentionally designed it not to support that at first, but it'll be coming.

Michael Berk [00:23:02]:
Yeah. It's essential. Release.

Ben Wilson [00:23:04]:
Yeah. So, yeah, having, like, insight into all the, like, possible ways that somebody could use what you've built, how do you feel about that experience? Like, do do you think you've been able to put yourself in the shoes of users a little bit more effectively? Even though it's only been a week for getting feedback, but throughout the the bug bash process, that's one of the reasons why we did it is so that you could be exposed to, like, what does it feel like when somebody tells me that what I built is broken?

Michael Berk [00:23:40]:
Yeah. I have no qualms with that other than I'm, like, a little bit frustrated that I didn't do it right. But I think the bug bash went well. I, overall, was expecting everybody to find a bunch of issues and also hit the ground running a lot faster than I thought. But it made me realize that this was a pretty context specific type of project. And if you're not aware of LlamaIndex and, like, the deep nuances of the language and the package, you'll just go to the quick start. You'll try the basic stuff. Like, can I log a model? Can I load the model? Can I do inference on the loaded model? And, yeah, and then also just, like, when stuff breaks, they scratch their head like any other person, even though they literally have built MLflow.

Michael Berk [00:24:28]:
It was just cool to see that, like, simple warnings and simple error messages are the best. Like, this isn't working. This is why. This is how you could fix it. Done. That's so freaking essential. And then also having the, like, the the logical path be intuitive, that is also essential. So it definitely gave me a lot of perspective onto that, and it actually has changed how I've been writing customer code the past couple months.

Ben Wilson [00:24:55]:
Yeah. That's a 100% why we do those bug bashes, is what you explained. You get people in there who have no context. They're simulating a user who's seeing this for the first time, and they have to figure out, like, what do I do? So if you start a bug bash that's scheduled for an hour and a half and nobody reports anything in the first 30 minutes, that means your docs suck. There is no quick start. There is no, like, how do I even use this thing? They don't have a tutorial or an example that is the hello world for this. But if you get to a point where everybody gets through the quick start in the first 5 minutes because they just copied what's in the docs, and then once they start trying to do something different from that, like, start modifying it, and you just see issue after issue after issue being reported, then your design probably sucks, which we didn't see with this implementation because it worked. And I was like, yeah, this is cool.

Ben Wilson [00:26:06]:
But you can be in reviews like that where professional engineers who have experience building stuff, they start trying to do things that are kind of intuitive for them, and it lets you know if the context that you're forcing upon people is so different from what they're used to. And they can tell you, like, hey, your APIs are nonintuitive, or they're, like, so foreign to what everything else is with this library. Maybe this needs to be rethought or improved a little bit. Or it can be, you know, the true happy path, which is, hey, I did the tutorial. I did the quick start. I did, like, an advanced use case.

Ben Wilson [00:26:55]:
Everything just kinda works, and then people are diving deep into, like, nitpicky stuff. Like, hey, this warning's really annoying, or this shows up every time that I run this, this is super distracting. And that's what I consider to be, like, a successful bug bash, when you get to the point where people are just picking out, like, annoying things. Yeah. Or somebody's doing something like, hey, I still have, like, 45 minutes to kill.

Ben Wilson [00:27:22]:
I'm gonna try to do, like, a trace on this and see why this one step takes so long, and here's a flame graph of, like, what I think is a problem in your code. Then you're like, sweet. That's some valuable feedback. I'm gonna go fix that.

Michael Berk [00:27:38]:
Yeah. That's basically where we got to. Everybody got the quick start up and running other than this one weird, like, we think potentially LlamaIndex-related bug, but not sure. And then, yeah, it just became nitpicks of, like, there were a couple, like, false errors that when we tried to reproduce, it was something else. But a lot of it was like, this warning is annoying, there's, like, 7 of them when there should be 1, or, like, this scroll bar in the UI isn't working, and stuff like that. So, yep. Yeah.

Michael Berk [00:28:09]:
I think it was successful.

Ben Wilson [00:28:12]:
So what did you think about the whole process, like, timeline wise? When were you first given this project? What did you get as context?

Michael Berk [00:28:22]:
Alright. Yeah, we could get into this. So this was probably the hardest I've consistently worked, because I was working my full time job plus doing this, plus trying to, like, live a, like, busy New York social life and, like, have friends and such. And so I started this, I think, in January or something, maybe a little earlier. Do you remember?

Ben Wilson [00:28:53]:
Yeah. It was somewhere around the 1st of the year.

Michael Berk [00:28:55]:
Yeah. I started doing design, and Ben had already put together an initial design doc. I learned everything in that 50 page design doc and then basically rewrote it, which was an interesting decision. I could have probably not done that, but I ended up actually grokking pretty much 100% of the concepts. And then I also did about 50 pages of critical user journey examples. So, basically, going through the LlamaIndex docs. And to put it into context, when I was given this project, I didn't know what RAG was, retrieval augmented generation. Like, I had heard of it.

Michael Berk [00:29:33]:
I had no idea what the concept was. I didn't really know what an embedding was. I didn't know what an LLM actually was, although I used ChatGPT. I had built 1 or 2 design docs before, but, basically, every single thing that I was given was new, including, like, Git development, like, all the things that I had discussed, and the course of how software is made. So it was a pretty freaking painful process. But one of the, like, intangibles that I think is absolutely invaluable that I learned is basically how to carve out time and how to create your, quote, unquote, replacements so that you do have the ability to work on the, like, 10x type of stuff. And I'm still very bad at this. Like, any software engineer on the team would do this, like, probably, I don't know how much faster, but significantly faster.

Michael Berk [00:30:30]:
And so 10x isn't in terms of, like, value to the company, but 10x is in terms of value to me. Like, what am I learning? How am I growing? And it was really, like, cool to be forced into a position where, alright, you can work 80 hour weeks and hate your life, or you can figure out how to delegate a lot of your existing work, and then also focus your time on what's important. And yeah. So I guess that's a very roundabout answer, but, started with a bunch of design, learned how to delegate a bit better, and then actually was able to carve out a bit more time. Got roped into a hell project, so my life became not fun for, like, 2 months. And then, yeah, after the design generally got approved, I started working on the implementation. Again, I spun my wheels a ton, didn't know how to do any of this. And after, like, building the wrong thing, like, 7 times, there was sort of an inflection point where I partnered up with a software engineer on the team, and he started giving me daily feedback, and then everything was quite smooth.

Michael Berk [00:31:41]:
And, yeah, that was the process, I guess.

Ben Wilson [00:31:44]:
So what prompted that pairing up?

Michael Berk [00:31:50]:
Yeah. So Ben and I were talking about this before. He was telling me that he was, like, mentally breaking me down like a military, like, boot camp instructor so that I was more open to feedback. And then once the inflection point hit, he paired me up. Is that the angle? And I did ask for the help, to be clear. Or, like, I don't know if I asked for it, but I remember there was an inflection point for me where I realized, dude, I spend, like, 6 hours going in the wrong direction for, like, you to give feedback once a week saying that's the wrong direction. And I wasn't mad at all, actually. It was just, like, I was like, this is not going to be tenable to solve this properly.

Michael Berk [00:32:35]:
And so I brought that up, and you were like, okay. Then talk to this guy.

Ben Wilson [00:32:40]:
Yeah. So I could've... and this is just, like, my style of this and how I've mentored people. And for, like, data science work years ago, I would do the same sort of thing. When you have somebody who's not overly confident in their own skills, but they're good at what they have been doing, and they wanna move into something different, there's a way to approach that. There's a whole bunch of different teaching philosophies and styles. The one I found to be most effective for people that have a very similar personality to the one you and I have, which is we're generally stubborn. We like to figure things out.

Ben Wilson [00:33:21]:
We don't like to ask for help too often. We would rather, like, brute force our way through a problem. And without positive or negative feedback at a regular cadence, whether we're asking for it or not, we can go off the rails pretty far, because we're just trying to do what we think is right. And that's very much, I think, from my personal experience and working with tons of data scientists, kind of the way that a lot of them work. That's how I worked. Usually, it's just you on your own trying to solve this problem. It's very rare that you're gonna have a team where there's, like, an experienced person there to guide all of the people on that team and make sure they don't go off the rails too much. It's more like, hey.

Ben Wilson [00:34:08]:
You you know this domain. You're you're deep in the weeds. Go figure it out. With software, it it doesn't really, like, work that way, or it doesn't effectively work that way. So in order to break that habit, I just use the same technique that I had in boot camp, and I, you know, understood from military training, which is if you want somebody to to really remember something and adapt to the way that other people have figured out optimal ways of doing things. You can tell people. You can explain it to them, sit down, have a couple hours of chats. Like, hey.

Ben Wilson [00:34:49]:
This is how you wanna approach this, and here's a whole bunch of things to think about. But most people don't respond to that very well. I don't. I've never really interacted with somebody who's going to really absorb all of that. You're gonna get bits and pieces. They're gonna, you know, try to figure out how to adapt their current thinking with those additional pieces of information. And with that inherent bias that you have about how you know how to build a solution to something, you're not really gonna change direction that much. The way that you can change somebody's outlook on something is by making them suffer a bit, making them get to a point where, with the process that they're doing, they themselves figure out that this isn't optimal, or, hey.

Ben Wilson [00:35:41]:
I don't know these concepts well enough, or I don't understand how this works well enough. And the the terrible thing to do as somebody who's, like, mentoring somebody is to just dunk on the person or tell them, like, they don't understand. Go read a book. You know, all of that sort of negative feedback is just super toxic, and I hate hearing people do stuff like that. It's not constructive. You're not building somebody up and making them, you know, better at what they do or what they wanna do. The more the more useful thing to do is to figure out where the person is like, where that inflection point is gonna be right before they're gonna get frustrated and wanna quit doing whatever they're doing. And everybody responds differently.

Ben Wilson [00:36:28]:
I've worked with people where that inflection point happens, like, days into them working on something. And I've seen people do, like, months working on something before you start noticing, like, yeah. I'm giving you feedback that you're you're going the wrong direction here, and I'm trying to give you kinda hints on that or, you know, let you know that, like, I'm not gonna give you the answer because that doesn't do anything for you. I'm not gonna do it for you. That really doesn't do anything for you except just make you feel terrible. You wanna make somebody grow and get better. They need to realize that they need to rewire their brain to be open to these new ways of doing things if they if this is something they really wanna do. And if it's something that you detect this might not be for them, then have a candid conversation.

Ben Wilson [00:37:19]:
Be like, hey, we'll help you with the project. We'll get it done for you, basically. Thanks for all the work you've done up until this point, but you'll never hear from us again sort of thing, because this clearly isn't what you want. But with you, I could tell, because we talk, you know, multiple times a week. I could start seeing, like, you were like, hey, could you take another look at that PR? Like, sure.

Ben Wilson [00:37:45]:
Yeah. And then I'd leave, like, one paragraph comment. I'm like, hey. I don't think we wanna go this way. And, like, yeah. Let's not do this. Let's think about a simpler approach here. But I wasn't giving you code blocks of, like, this is how to do this.

Ben Wilson [00:38:01]:
I could have.

Michael Berk [00:38:03]:
Okay. I... hold on. I have, like, 50 questions. How do you know that the breaking down is required?

Ben Wilson [00:38:15]:
Because I've tried it without that, where somebody's still all pumped up and energized, and they they they're like, yeah. I'm doing this great job, and I'm getting this feedback right away. That can work. It's just not as efficient for me, because what you're gonna be doing is perpetuating that belief that somebody has that they know the right way to do something. So they're not gonna listen in the same way in the future, or they're not gonna approach their next project with as much learning as they got from that first big project. So it's all about efficiency in my opinion.

Michael Berk [00:38:53]:
What timeline are you optimizing for, and what goals are you trying to achieve with this breaking down?

Ben Wilson [00:39:00]:
The only goal is to make that person be better at what they're doing if they want that.

Michael Berk [00:39:08]:
Cool. And just to be clear, I agree that it should be as open ended as what you just said.

Ben Wilson [00:39:13]:
But yeah. So you don't do something like that when you assign a mentee, and we don't even do that to new hires at Databricks, like, people in engineering. You don't assign, like, some massive project that somebody has no context on how any of this stuff works and be like, hey, you got 3 weeks to get this done. Design and implementation, full testing, and then release in 3 weeks. That's so far out of scope for that level of experience. You would give that to, like, a senior engineer or, you know, like, an L6 or something.

Ben Wilson [00:39:46]:
Be like, hey. You got this? And they'll be like, another one. Okay. Got it. Will do. And they'll just go off and build it because this they've done it a 100 times before. So it's it's so common to them. They know they've learned all of those processes.

Ben Wilson [00:40:01]:
The hard thing for somebody coming new into that environment from the outside is that they didn't see those people that they're now kind of idolizing. It's like, well, this person is so good at what they do, and they're, like, so smart, and they understand all these concepts. You didn't know them 7 years ago when they were in your shoes and what they dealt with on that first project. They might have had, like, a good mentor who was, like, really helping them along and making sure that they learned these concepts that first time, or they might have had a terrible boss who was just, like, sink or swim, man. Figure it out. And they had to learn... they had to feel all of that stuff internally and break themselves down and feel despair, and then, you know, get it working eventually with, you know, probably a massive stress and health hit to themselves, for being under that kind of pressure. Yeah. I don't know. You can do that, or you can handhold somebody for a year and make sure that, you know, you as a mentor are just there for them at any given minute, and they're not really... you're not gonna learn that much at that pace.

Michael Berk [00:41:21]:
Okay. So there's a spectrum of hand holding forever to get this done in a week. Handholding forever is the easiest path, learn the least amount. Get this done in a week is the hardest. Learn the most amount probably. And the sweet spot is sort of in the middle. So

Ben Wilson [00:41:41]:
it's this is a seesaw relationship, though. So when you're talking about the easiest path versus the hardest path, that's speaking from your perspective. But the hand holding is the most amount of work and the most amount of, like, just time sunk into something for a mentor because they're helping this person, you know, 4 times a day, answering all their questions about every little, like, minute detail. You know, like, writing a function, you know, like, I don't know if this is right. I'm gonna code snippet this and send it to my mentor. Is this is this how you would do this? And they're like, yeah. It looks good. Write a test for it.

Ben Wilson [00:42:22]:
What does the test look like? You know, that constant feedback type thing, I've never really worked anywhere where somebody's willing to do something like that. Because if you're a mentor, you're usually fairly senior, and you've got a bunch of stuff to do during your workday that is probably more important than hand holding somebody. But that's the easiest thing for the person learning because they're getting answers, like, all the time. They're getting unblocked, like, continuously. And, you know, it'll be a fun experience for them. You'll learn, but you're you're not really learning. You're not figuring out how to figure it out yourself.

Michael Berk [00:43:08]:
Right. Okay. So there's

Ben Wilson [00:43:11]:
the opposite spectrum is ultra hard mode for the person who's learning this concept, And then the person who's, like, the mentor, they do nothing other than just, you know, review bomb after, like, several weeks of abomination being written, and they just drop a a like, hey. This is way off base. This sucks, or you suck. Why'd you even do this? That's I don't think that's productive. Right.

Michael Berk [00:43:40]:
What is the timeline that you are optimizing for, for this knowledge? Is it, like, lifetime, or is it the next 2 years? Or, like, when should the fruits of the labor... yeah. When should the fruits of the labor start paying off?

Ben Wilson [00:43:57]:
I think that's dependent on on the person and what their their own personal motivation is or what their personality is and stuff. But somebody who's who, like, really wants to get into learning something. This isn't just about software. Right? It's learning something. And they're they're clearly willing to put in the time and energy to it. Once they have really learned their own limitations, which sort of reprograms your subconscious to a certain degree, you now really know what you don't know. So you've you've gone past that trough of disillusionment, and you're now, like, coming up to, like, okay. I know that I'm that I'm deficient in all of these areas, and I know what to work on.

Ben Wilson [00:44:46]:
But you start knowing really what you should be working on, not just, I'ma get good at everything. I'm gonna be the world's best at whatever this is. Anybody who's really good at something, they don't think like that. You're like, I've spent some time learning how to learn this thing. So when I need to learn the next thing that's adjacent to this, I know the most efficient way to go through that, because I did this super painful thing, or my mentor put me through this super painful exercise to teach me some core concepts about how I need to approach this. And that's what I was trying to do, which is, okay, Michael just dropped 2,500 lines of code in this PR, and I did tell him to split this up. But let's see if he gets the concept of, like, smaller commits with review and then not doing stuff like reimplementing core language functionality and, you know, really grokking serialization.

Ben Wilson [00:45:50]:
Like, what is a simple way to do this versus the super complex, arguably correct way to do it? You know, a couple of the ones that you pushed out there, at first, I was like, that's definitely... that would work. But nobody wants to maintain that, because it's reimplementing something that already exists, or there's, like, a way easier way to do this. And that's what we try to push for, like, simplicity. But you would not have learned that if I had just given you the code snippet. Like, hey, man, this is how you serialize to JSON. Alright, this is how you deserialize back into an object. Agreed.

Ben Wilson [00:46:29]:
But then there's a certain point where when you hit that inflection point, you absolutely have to if you're using this sort of process for mentoring. You have to get that person once they're once they've been broken, effectively. That's a bad term, but, like, once they're ready to, like, really get something out there, and start getting really good at this, at whatever they're doing, you have to give them the means of working with an expert who's gonna have the time to, like, provide that that consistent feedback in a short period of time, that that kinda works for them. Because what you do at that point is you set a deadline. And if you're just going through and you're like, okay. This person, like, they broke a bit. And now I'm just gonna set them a deadline and and put even more pressure on them now that they've learned a a couple of these concepts from this feedback and they've done self reflection. They're like, oh, this is how I need to solve these problems.

Ben Wilson [00:47:34]:
I've learned all this cool stuff. That deadline is is just gonna put pressure on that person if you don't give them any any resources. So as a mentor, if you have time, you can be that resource. Unfortunately, I I did not have that time, but we had somebody on the team who finished all their their quarterly work early, and I was like, this is a match made in heaven. Just gonna be like, hey. Sat down, talked with him for a little bit, and was like, here's what you should be going for. Like, let's make sure that that this needs to deliver by this date. Here's our deadline, but make sure this guy, like, really understands all this stuff and sees, like, how to to approach these problems in a different way.

Ben Wilson [00:48:20]:
And he got super excited. He was like, yeah, I'm really excited to work with him, and this is gonna be fun. And yeah. Yeah. It did release.

Michael Berk [00:48:29]:
It did release.

Ben Wilson [00:48:30]:
And, also... But more importantly, way more important than the feature, you learned all that shit. Like, you're never gonna forget some of the stuff that you learned because you had to go through that process of being like, okay, I went off the rails this many ways. And part of that is not just the technical competency of understanding all of these concepts. It's also soft skills too. It's understanding, like, okay, how do I short circuit my propensity to go off the rails by getting peer feedback early and, like, talking through a design with somebody, or talking through, like, that implementation detail, and then responding to comments on that and, like, fixing it or adjusting it to this other way of going. Because, eventually, you know, 6 months from now, a year from now, you'll be working on a feature, and you're gonna get that peer feedback on that PR.

Ben Wilson [00:49:34]:
Your PR will hopefully be, like, less than 500 lines of code, and then somebody can just look at it. And your next step after where you are right now, which right now, if if the person that you're working with wrote a comment on a PR requesting you to change something or just asked a question, your propensity right now would be just do what he asked. Right? Like, he knows more than I do. I'm just gonna do that. A 100%. Right. But a year from now, if you can prove that that comment is irrelevant to what you're doing, if you look at the PRs that we do with one another, it's not just one person saying, hey. Fix this, and you just go and fix it.

Ben Wilson [00:50:18]:
Sometimes it is that, because you just overlooked something or didn't think about it. But a lot of times, there's discussions that happen back and forth. Right? Like, hey, are we sure we wanna do this? Translated: hey, I think this is not the right way to do this. Please fix this.

Ben Wilson [00:50:35]:
Here's a code snippet that I think you should use instead. And then the next comment is, I don't agree. Here's why, and here's an example of, like, a test that I wrote that validates this. And then the other person's like, you're totally right. Yeah. Keep your first implementation. Like, that's way better, or change it to this instead. And then the final comment is, sweet.

Ben Wilson [00:50:56]:
Thanks. Awesome. This is so much better. Yeah. It's getting that back and forth, and the only thing that builds that is just time on keyboard.

Michael Berk [00:51:06]:
Yeah. Yeah. I have actually had a few really good back and forths in the LlamaIndex implementation. But the key thing was being comfortable enough with the material to have a discussion. Yep. And a lot of times, it's like, at least some concepts go above my head. So for those, I'm just like, alright, I don't really have the time to learn every nuance of what you know so that I can see if you're right, especially if you're a trusted expert in that area.

Michael Berk [00:51:36]:
But, yeah, there were plenty of times when I was I didn't know more necessarily, but I had done my homework and thought that it was wrong and brought up my point. Sometimes it got overruled. Sometimes it didn't. So, yeah, definitely that back and forth was valuable.

Ben Wilson [00:51:52]:
Yeah. And you start to self calibrate when working in a team like that with the feedback of, hey, some people's knee jerk reaction to any comment is to argue or just say, like, hey, I have this other viewpoint that I think is a bit more valid. And then that other person's like, no, here's the evidence. Like, don't do that. You'll start to self calibrate to be able to read a comment and realize instantly, like, yep.

Ben Wilson [00:52:21]:
They're totally right. I missed that. Or Yeah. Nope. They're totally wrong. I I need to actually fight this. And you're kinda spoiled in our team where the number of comments is it's usually 90% valid, 10% maybe that are up for debate. But I've definitely worked with other teams before where you go through a PR process, and it's, like, the inverse of that.

Ben Wilson [00:52:49]:
It's like it just becomes this weird flame war, and you're like, is there an adult in the room here? Like, what's going on? Who who's who's actually the deciding factor in, like, merging this thing? Mhmm. Interesting.

Michael Berk [00:53:07]:
Yeah. Yeah. Just the dust is sort of settling, so I don't have a bunch of, like, really insightful zingers, but not that I ever do. But it it was a a really cool process because, like, the breaking down aspect probably was necessary, even though it kinda sucked. It was also kinda cool, though. Like, I'm a bit sadistic when it comes to those types of things. Like, I like a real challenge. And then one thing that was really eye opening when I partnered up with Yuki, is getting that daily feedback was like a breath of fresh air.

Michael Berk [00:53:49]:
Like, I really cherished every sentence that came back from him. Not only is he, like, very good and smart, but now, like, a direction that he gave meant a lot more because I had all of these theories that had already populated in my mind, and I could just axe, like, two thirds of them. And then the remaining third of the theories that were still active, I could quickly implement, because I had already had to go through the pain of thinking that I needed to create all the possible solutions, test out each one, learn what was the best, and do that all on my own. Once he could, like, narrow the solution space down to this set, I was like, sweet. I can bang out 5 iterations, pick the best one, PR it. So, yeah, I definitely have to think on it more, but the breaking down is a cool concept.

Ben Wilson [00:54:41]:
Yeah. It just makes you more open to getting that feedback, really. Yeah. It's preparing you for the process of peer review, and you learn, like, a bunch of core things that are very useful, like tools in your tool belt, while working on stuff like this. Like, you mentioned a bunch of technical things about the Python language. Like, oh, I didn't know about parameterization of unit tests. We're using the pytest framework. Cool to know.

Ben Wilson [00:55:15]:
Like, definitely super valuable. It won't be something that it'll be it'll be knowledge, like, a year from now that you'll just have intuitive access to. You'll know when to do that, when not to do that. And the more that you write, the more feedback that you get about, like, yeah, you could parameterize this test, but there's a certain point at which, like, you have logic control flow within your unit test that acts on the parameterized values in a super convoluted way. Like, if this condition and this condition and this it's, like, just split the test up. Like, we don't need to be fancy and create, like, parameterization for stuff like that. Just split it up. Yeah.

Ben Wilson [00:56:07]:
Because there is something to be said about, like, simplicity and, like, readability. And, really, with using those tools, if you overuse them, if you abuse them, you could theoretically, you know, write an integration suite where it's just one test with this monster freaking parameterization setup that you have, where you're, you know, not testing certain things if this condition is this way. And when you look at something like that, like an abomination of a test like that, if you're just writing it and somebody else is maintaining it, you won't even bat an eye. You'd be like, yeah.

Michael Berk [00:56:45]:
I did

Ben Wilson [00:56:45]:
I wrote something cool and complex. However, if you get if you're the one getting tagged for, hey. This test failed. We need to figure out how to fix it. And you open that up and you see somebody else did that, you'll just your first thing that's gonna pop in your head is, like, who did this? Like, why is this so complicated? And in order to fix this, I have to rewrite the test, like, the entire test from scratch, and I'm not gonna be using this complex logic chaining here. I'll just create 18 tests, and, like, 4 of them might be parameterized. So, yeah, you get experience by just doing it. Yeah.

Ben Wilson [00:57:31]:
Exactly.
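
To make the parametrization point concrete, here's a toy contrast along the lines of what Ben describes above: one over-parameterized test that hides control flow in its body, versus the same coverage split into small, readable tests. The to_int helper is made up for the example.

```python
import pytest


def to_int(value, default=None):
    try:
        return int(value)
    except (TypeError, ValueError):
        return default


# Harder to maintain: branching inside the test body on the parameterized values.
@pytest.mark.parametrize(
    ("value", "default", "expected"),
    [("3", None, 3), ("oops", 0, 0), (None, None, None)],
)
def test_to_int_convoluted(value, default, expected):
    if value is None and default is None:
        assert to_int(value, default) is None
    else:
        assert to_int(value, default) == expected


# Easier to read and debug: one behavior per test.
def test_to_int_parses_numeric_strings():
    assert to_int("3") == 3


def test_to_int_falls_back_to_default_on_garbage():
    assert to_int("oops", default=0) == 0
```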

Michael Berk [00:57:33]:
Okay. Cool. I think I have a general summary in my head, and I'm ready to summarize. But any other things you want us to talk about before we

Ben Wilson [00:57:43]:
close? I think that's for next time. We'll do some additional discussion once it bakes a little bit more in your brain.

Michael Berk [00:57:52]:
Yeah.

Ben Wilson [00:57:53]:
Because I know you're gonna self reflect over a lot of the process.

Michael Berk [00:57:56]:
I need to use all this shit. Like, I'm gonna go and, like, break down my colleagues until they're ready to cry and then say, this is how you write a pytest. No. I think I need to put some of it into action, because there's some things... I'm very bad at taking someone else's word as fact. I need to understand it and make it my own concepts. Even if it's word for word what they said, I need to arrive at that conclusion. And so I need to, I think, use some of these concepts in action before I can actually have good thoughts about them and good intuition about them. Mhmm.

Michael Berk [00:58:33]:
But okay. Cool. So the purpose of this podcast, at least what we set up from the beginning and hopefully achieved, was to demonstrate what it looks like to have a very large learning opportunity and then reflect on that. We talked about the process of, alright, initially throwing someone in the deep end, watching them flounder around until they're like, alright, I need help. But throughout that process, they learned how to tread water. They sort of learned what it feels like to have water go up their nose, maybe not breathe for a few minutes, well, seconds. I don't know about minutes.

Michael Berk [00:59:08]:
And, through this process, they become a lot more open to learning. And then once you actually throw them the life raft, then they are typically a lot more able to swim on their own, and they need less guidance. And, also, the lifeguard, in this case, the mentor, doesn't have to do that much work. You're just, like, generally checking in. Alright. Have they drowned yet? No. Have they drowned yet? You know? No. As soon as they're drowning, go help them.

Michael Berk [00:59:33]:
So I still wanna think about that concept to see if the drowning is necessary, but I think it is. And I think I personally benefited a lot from it, even though drowning isn't fun. And, yeah, all the other tangible tips aside, that's sort of, like, the core underlying principle. So, yeah, that's about it. Anything else?

Ben Wilson [01:00:00]:
No. I just want to encourage people who are interested in going through that process of, like, approaching mentorship or menteeship to think about, you know, intentionally doing stuff like that. And you don't tell the person beforehand. It doesn't work if you do that.

Michael Berk [01:00:21]:
That was gonna be the first thing I tried. I was gonna tell the person. Why can't you tell

Ben Wilson [01:00:25]:
them? It won't work.

Michael Berk [01:00:28]:
Why? I

Ben Wilson [01:00:28]:
was like, hey. I'm gonna screw with you for, you know, a month or 2 months, and I'm gonna, like, intentionally try to break you. Most people are like, why are you such an a-hole? I don't wanna do this. You know? No. Nobody signs up for that. You know, what do they do in the military when, well, I can tell you what they do in the military. Yeah. I have no idea.

Ben Wilson [01:00:53]:
You've seen the commercials for, like, recruitment into the military. They show a bunch of people storming up a beach, you know, carrying, like, assault rifles. They're like, yeah, go, go, marines. Or they show a bunch of, like, crazy cool warships, you know, doing donuts in the ocean and firing missiles and guns and stuff.

Michael Berk [01:01:15]:
That's appealing to a base instinct.

Ben Wilson [01:01:17]:
I mean, you're appealing to, like, this is the cool part, you know, you're gonna be in a (Mhmm) fraternity of, like, soldiers and sailors, and you're gonna do all this stuff and tour the world and get to see all this cool stuff. In reality, it's not that at all. It's a lot of hard work, a lot of boring time spent just waiting, a lot of intense instruction, just an unpleasant living experience in general. So if those commercials were accurate, like, here's what it's really like in the military, nobody's gonna sign up. They're gonna be like, this sucks. Like, why would I sign up to do that? This is terrible. So what you do is you trick them, basically, into this ideal of, like, hey.

Ben Wilson [01:02:11]:
You're gonna do this thing, and you're telling them some of the things that are part of the end goal. And it's the same with mentorship. You're teasing somebody, appealing to what they wanna get out of something, and they will get that if they follow the process. But your first day of boot camp, you get off the bus at the recruit training center, and you have these people just screaming in your face and telling you how stupid you are and, like, why can't you follow simple instructions? Do you know how to stand up straight? Why do your clothes look like that? Like, why does your hair look like that? They're just messing with you, and that starts that breaking-down process. At first, you're just afraid, like, what the heck is going on? And then they keep you up for 36 hours straight, just waiting, like, standing in line while they're processing people in. They're doing all of this intentionally. It's not the most efficient way of doing the things that they're doing, but it's the most efficient way to break somebody down so that they start listening, like, really listening. So you have to get to that psychological break point of, like, I don't know what I just got myself into. I don't wanna be here.

Ben Wilson [01:03:28]:
And then within 2 weeks, you're just, like, you're following orders. You're doing what you're told. You're getting things done in the way that they need to get done, which is building up that knowledge base and that experience so they can be built on later. So that's that whole, like, indoctrination process of, hey, there's an efficient way of doing this, and it's not focused on what you're learning. It's focused on the learner themselves.

Michael Berk [01:03:57]:
Heard. I feel like I have a bunch of caveats that I wanna test out. So we'll report back on that.

Ben Wilson [01:04:05]:
Doesn't work for everybody. That's a big thing. Like, not everybody responds to that, but that process is also a way of weeding people out. So if you came to me, like, 6 weeks into the project, and you're like, I'm not getting any feedback, I don't wanna do this anymore, I'd be like, alright. Cool. No hard feelings. Like, no harm, no foul.

Ben Wilson [01:04:29]:
This isn't for everybody. Same thing happens in boot camp too. Like, there's a washout process. You wanna identify people who don't wanna be there because they can go and do something else, and, you know, there's no judgment on them. It's just like, hey, this isn't for you. It's fine.

Michael Berk [01:04:48]:
But, like, is there not another way to achieve this openness and, like, quick growth mindset without the breaking down process, or is that the only way?

Ben Wilson [01:05:00]:
Of course, there's other ways to do it. But

Michael Berk [01:05:03]:
But this is the best way?

Ben Wilson [01:05:06]:
It's one that's worked for me many, many times, and it's one that worked on me. I wasn't exactly the most receptive person coming out of high school, like, yeah, I'm ready to be a warrior. I wasn't there for that. I was there for education and experience. But by the time I got done with, you know, all of my schooling and stuff, I was able to do things there's no way I would have been able to do if I hadn't started with that day-one walk off the bus, to be able to, like, know my own limitations. Like, oh, I can actually do this because I did that. Or, hey, they taught me this concept through subconscious means, so I'm just not, like, afraid of failing when trying something. Because I was taught not verbally; they don't sit you down and be like, we're gonna do this for you, and you're gonna become this person, or you're gonna learn how to do these things.

Ben Wilson [01:06:09]:
There's none of that. It's all experience based, and you never forget it.

Michael Berk [01:06:18]:
Alright. I believe it. Cool. Well, we're over time. Yeah. Until next time. It's been Michael Burke and my cohost Ben Wilson. And have a good day, everyone.

Michael Berk [01:06:32]:
We'll catch you next time.