How to Get Sh*t Done - ML 121
In today's episode, Michael and Ben break down some surefire methods to be successful. If you follow these tips, you are guaranteed to co-found the next Google. Some topics include time boxing exciting work, tips for growing documentation, pitching to diverse crowds, and much more!
Transcript
Michael:
Welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Berk, and I'm joined by my cohost.
Ben:
Wilson.
Michael:
And we both do data science and machine learning and data engineering, and all of the above, at Databricks. And today we have an interesting topic that I have been thinking about quite a bit. A while back, I was listening to a podcast with Lex Fridman and Sam Altman. Lex Fridman does long-form podcasts. He's in the AI space, and I think he's a researcher at MIT. Um, he sort of took after Joe Rogan in doing these two- or three-hour podcast episodes with famous people, really picking their brains and learning about how they think and how they operate. And Sam Altman is the current CEO of OpenAI. He also was, I think, a founder of Y Combinator, or at least ran a startup school within Y Combinator, and he has a really impressive background and has been doing a very good job at OpenAI. And so they had a very insightful and interesting conversation. In this podcast, they talked about this blog that Sam wrote called "How to Be Successful." And so Ben and I are going to go through the bullet points, see what we agree with, see what we disagree with, and, uh, break it down for a data science and machine learning audience. Does that sound all right with you, Ben?
Ben:
Mm-hmm.
Michael:
Cool. So, uh, if you're curious to read it yourself, it was posted in 2019. You can just Google "Sam Altman how to be successful." There are something like 11 points, and he starts off really strong. Uh, I thought this was a very insightful point that I frankly hadn't heard before. So the first one is "compound yourself," and you're like, all right, roll your eyes: if you get 1% better every day for an entire year, you're 37-point-something times better. That's super exciting. But that's not what he means here. What he means is that over time, your returns on the same amount of effort should grow. So every day you put in, let's say, 10 units of effort. And because you are growing and expanding your base, your current skill set, your current network, your current set of knowledge, those same 10 units of effort should take you a lot further 10 years from now than they take you at this very moment. And if you're thinking about it from a math perspective, there's a slope. Think of a curve whose second derivative is constant but whose first derivative keeps growing. The second derivative corresponds to the amount of work you put in every day, and the first derivative corresponds to your growth and your compounding over time. So that I thought was really insightful, and I hadn't heard it phrased in that manner. Ben, what are your thoughts?
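Michael's "37-point-something" figure is easy to check for yourself. A quick sketch of just the arithmetic from the conversation (not anything from Altman's essay), comparing compounding against linear improvement:

```python
# Getting 1% better every day, compounded daily over a year,
# versus adding a flat 1% of the starting level each day.
compound = 1.01 ** 365    # roughly 37.8x the starting level
linear = 1 + 0.01 * 365   # only 4.65x the starting level

print(f"compounding: {compound:.1f}x, linear: {linear:.2f}x")
```

The gap between those two numbers is the whole point of "compound yourself": the same daily effort buys more and more when each day builds on an expanding base.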
Ben:
I think it's something that people don't consciously think of. The people who are considered by others as, wow, this person's amazing, they've done all these crazy things, and who maybe modestly say of themselves, yeah, I'm doing all right, for them it's just part of their normal day-to-day activities and how they approach getting better at things. It's like a philosophy. And then there are people who don't seem to do that. I've always referred to it as the rise to mediocrity. Everybody knows them. There's somebody that's been at a company 20 years and kind of stagnated at their level. They've never really moved up to take on bigger challenges, bigger responsibilities. They might be really good at, or might be the person who knows the most about, some system at the company. Or, if we're talking about data science work, they're like, oh, you know, Bob, that's our time series guy, or, you know, Janet, she's a reinforcement learning person. She knows so much about that. She knows these libraries and how to tune them and how to configure them. But you see those people who are really good at that one thing and are also kind of stagnated. Maybe they're a senior data scientist or a principal data scientist, but they've been at that level for 12 years or something. The reason that they're not moving on is because management kind of sees them as, hey, this person's content in this role. And there's some people who love to do that. They just love to get good at something and just own that thing. But those people aren't going to eventually be leading that entire department. They're not going to have an appetite to go off to a startup later on as
an early founding member and put a risk out there of saying, hey, I know this skill a lot, but I also know that I'm going to have to learn 400 other things in the first three months that I've never seen before and just figure stuff out. Because that linear growth process, it's sort of a comfort trap for people, I think, from what I've witnessed. Hey, I know this thing; in order to get incrementally better at it, I can improve my efficiency, I can learn more bits about this thing that I understand a lot about. And you can go into that rabbit hole and go really deep. Along the way you're probably going to learn all sorts of useless stuff that has no bearing whatsoever on your ability to do your own core job, but it's also not going to prepare you to take risks and do the next thing you need to do to challenge yourself.
Michael:
I think a lot of people struggle with prioritizing what to learn and what to work on. With the internet and all of the information at hand, there's so much cool shit out there, and it's really hard to know, before learning it, whether it will be valuable or not. And I don't know if you, listener, subscribe to Steve Jobs and his isms, but one of his isms is basically: learn whatever you're interested in and then hope that the dots will connect in the future. That's one perspective. But you can also go and ask more senior people, or other people who might know, and say, hey, is this valuable? Should I spend, I don't know, a day learning how to do this thing, or should I go spend it learning a new thing or a different thing? Should I sort of generalize or should I specialize? So Ben, how do you think about deciding whether to generalize or specialize, and more specifically, what you should be learning and working on?
Ben:
So I can only speak from my own perspective, which is: the best way to learn new things that are probably going to be relevant is to volunteer. And what I mean by volunteer is, there's a new project coming up and it's so far outside your wheelhouse that most people, when they see something like that, are like, I don't know anything about this and I wouldn't even know where to begin. But you're the person that volunteers, and is very honest upfront, saying, hey, I have no idea how to do this, but I'm willing to give it a shot and I'm not going to quit until I figure this out. Provided you're at a company that allows that, which, if it's a good company, they will. And if it's a super risky project, they'll find somebody else who maybe knows a ton about that area to mentor you throughout the process, like, hey, this person's gonna check your code or check your implementation or whatever it is, and make sure that you're on the right track. A bad company would just say, no, you don't know anything about that; you're going to keep doing what you're doing. That would be a situation where your company and management is actively holding you back, and it's time to update your resume and get out of there. That's the best bit of advice I can give to anybody in a situation like that. I've worked at a company like that before. It sucks. And I got out as quickly as I could. Um, but most good companies, the vast majority of them, will allow you to do that. It's scary, but the act of volunteering and working on something forces you to do all that research and really understand the stuff, and you also have that safety net of peers around you who can look at what you're doing and give feedback. And if you're open and honest upfront, you don't get in your own way when you get stuck. Now, it shouldn't be,
Michael:
Right.
Ben:
you know, oh, in the first 15 minutes of looking at the project: I don't know what this stuff is, I'm going to ask for help. That doesn't reflect favorably on you. It's all about, and it's another point in this list that we're going to talk about, putting in effort and working. Give it a lot of work and see if you can brute-force your way through it to understand it. Build something. See if it breaks. And then adapt. Get feedback. Each of those actions means putting yourself out there in a way our own egos sometimes prevent us as human beings from doing: risking being seen as incompetent, or an idiot, or a failure. Like, why do people not want to take on complex projects? It's usually because they don't want other people to be like, this person's a dummy and we need
Michael:
Yeah.
Ben:
to get rid of them. Nobody in management thinks that. And it, I've never encountered anybody who's like, I can't wait to assign this to this person so I can fire them because they're too stupid. Nobody does that. No companies operate that way.
Michael:
Yeah.
Ben:
It's more like, Hey, that's great that this person volunteered. Yeah, this is maybe risky, but I'm going to let them try this and if they succeed and this is a massive success, awesome. Like I'm going to put them up for promotion. If they fail, then it's more like, Hey, the team came together and built this together. So it's a win-win in my opinion, but you have to take those moon shots of saying like, this might be beyond my capability, but I'm going to try to force myself to figure it out. And that's how you get that exponential leap over and over and over again. And so long as you're providing a vector of directional movement that aligns to your own passions, I think you'll be successful in compounding your skills to move towards what you actually want to do.
Michael:
Yeah, that actually segues perfectly into the second point. Ben, do you mind teeing that up?
Ben:
Have almost too much self-belief. And in my perspective, there's a way to have a belief in yourself that's healthy without your ego dominating. Belief with humility is an excellent thing, and it's the mark of all truly successful people in whatever field they're involved in. If you're not getting in your own way, you realize, hey, I might not know this now, I might feel stupid right now, but that's okay. That just means I have all this opportunity to learn cool new things. So think of it positively, and tell your ego to kind of take a back seat and just shut the heck up: I got this. Have that belief to know that I will figure this out, and I will gain these skills, and this will align with what I want to do. Or it could be, I don't know if I want to do this, I want to try it out. And you could do the project and be like, "That sucked. I hate this. I never want to do this again." Now you know. You've experienced it and you know whether it's what you want to do. But you have to have that belief in yourself in order to take that risk.
Michael:
Well, all right, I'm gonna double-click into this. So these might be sort of interrelated points; they might be sub-definitions of confidence. So there might be two different components. Um, but as I was thinking through this definition, I think I'm very good at separating my ego from the result of the project. I've failed a lot in my life, um, and I'm pretty good at failing and being like, well, that didn't work, see you later. The different kind of confidence that I think he's referring to is having the confidence that you will complete something successfully. And those both are sort of related and required for you to volunteer for something. It's really helpful to be able to say, hey, even if this doesn't work out, I still have worth as a human being. And also: I have confidence that I can complete this task. And those are sort of two different things. So Ben, how do you think about that difference?
Ben:
So I think about what you just said on two separate temporal scales. You have to have the confidence, if you're going to be tackling, like, delivering a project, that you kind of know you're going to finish it. Um, and you gain that just by experience, right? It's like, hey, I've done things that seem adjacent to this complexity in the past, so I kind of know what the scope would be, or somebody has a good feel for how long this should take. So you can measure whether you're capable of figuring something out; you should know how long it takes you to do research. What you can't do is something like, hey, I'm working in the data science department, I built all these models, I have all this great stuff running in production, and now I'll take on a project to build, like, the backend serving layer for a generic model framework. If you've never done anything like that before, and you just say, hey, they scoped this to be a two-month project, um, you know, it's four total sprints, 20 sprint points, I think I can do this, but you don't have anybody helping you, then you've just set yourself up for failure. So it's about things that you kind of know whether you could do in an allotted amount of time. But then, on the other time scale, there are all of the micro-steps that happen throughout the day or throughout a sprint, where you're working on implementing something and it's just blowing up. How quickly can you go from nothing's working to, okay, I'm at a state now where this thing is doing what I want it to do, I'm getting the outputs
Michael:
All right.
Ben:
that I want. And then move on to the next phase. Like how can you bucket up your work in such a way that you know you can make incremental progress and not get discouraged by things being on fire? That's super critical.
Michael:
That's true. Yeah. So, one other aspect of this list, and so far it sounds like a very bullshit, startup-y, believe-in-yourself type of blog, but go read it; there are some actually counterculture and interesting perspectives. One of the interesting perspectives in this bullet is the difference between confidence as a leader and confidence as an employee. So far we've been talking about confidence as an employee, but here Sam Altman is talking about setting the tone for your team, providing a vision, and believing in that vision. The example he cites is Elon Musk at SpaceX. He sort of held that up as the threshold, the example of what confidence should look like, because in 2010 or whatever, they were talking about going to the Moon, or Mars, excuse me. And Elon Musk was so confident. He was like, oh, we will be there. We just need to plan for it. It might take five years, it might take 20 years, but it will happen. So Ben, as sort of a project lead now, how do you think about setting the tone with confidence versus being very calculated and transparent about the risks that are apparent in the environment?
Ben:
I think the only way to effectively manage an organization, whether that's a team, a department, or a company as a whole, boils down to one aspect. If you want to set the culture for technical people, like engineers in an organization, data scientists, you have to have faith in them. And what I mean by faith is: you trust them, you listen to them, you work with them. That sort of transparency and conversation, seeing these human beings as peers, says, hey, here's what we're thinking. We've got this idea. Maybe it came from one of you, but we believe in it and we know that you can deliver on this. Tell us how long you think it's going to take, how many resources you need, and we'll be there to support you as you build this thing. If you want to make a fantastic culture at a company, that's really it for technical people: leadership that's basically like, hey, we think all of you are awesome and you're going to succeed with this. Here's an aggressive timeline. You might think you're not going to be able to do it, but we think you are. And then also to have understanding when things don't go the way that you planned, which is pretty much all the time. There are always things that need to be sacrificed or things that just don't quite work right. Never attack the people. You basically say, hey, we understand that these are the things that were out of your control and held this back, and you just listen to them. Listen to why this didn't work. Why would you punish people for working really hard to hit a deadline? That's stupid. Although that does happen all the time. I've seen it with my own eyes in a boardroom meeting, somebody firing half of a team because they failed to hit a deadline on a project that was unrealistic to begin with. It's like, the only person that should be walking out of here is management. What is going on? Have your people's backs.
Michael:
What should the success rate be for your direct reports' projects?
Ben:
Success rate? I mean, if you're talking about agile, it's all about deliverables. Are you getting your stuff done?
Michael:
Yeah, so let's say what should the on time completion rate be?
Ben:
That's highly variable. It depends on what you're working on and the scope of your team. You know, if you're a data science team that's doing a cutting-edge, research-focused thing, sort of a skunkworks project, the output of a team like that is just going to be reports and analysis, and maybe useful models or useful architectures that they've come up with. But if you're talking about a data science team that's focused on, you know, detecting user behavior, or forecasting something, or classifying bodies of text, or extracting encoded vectors from documents, those are all proven things. There's no research, really. I mean, there's research on an individual level; the team might have to do research, but that stuff's been in production for years, or decades in the case of some things. So as a lead or a manager, you should know whether this is a solved problem or not, and that adapts and changes how you evaluate your team. So if you have a team that's working on forecasting problems, and the data is relatively clean, and you know that this shouldn't really take that long, you give them the benefit of the doubt: one sprint. Still not delivered the second sprint, still no progress? Then it's time to start, you know, having some conversations with people and saying,
Michael:
Right.
Ben:
share your code with me. Let's walk through this. Where are you getting stuck? Why does this suck so bad? I'll work with you. But if you're doing stuff that's risky, then it should be communicated upfront to that team. It's like, hey, we just want to explore this. Is this possible? Can we make this work with what we collect in data? And if we can't, just write up a report saying why we can't. But if we can, then let's look at your prototype. Let's see how it works.
Michael:
Cool. So moving on to topic three: learn to think independently. Now, that's another eye-roll headline, but I think it's actually pretty insightful. And here's a little anecdote. I used to work at Tubi, Fox's video streaming service, think Hulu, and I worked on experimentation. We did a meta-analysis of all of the experiment returns, and we tried to see which experiments corresponded to the very high-lift results. We were trying to find correlations and see if there were any similarities, like, oh, maybe UI is really important, or maybe content is really important, or whatever it may be. And the first thing that hit me: let's say we have a hundred experiments and we plot all of them on a chart. Something like 60% of them were positive. The ones that were negative were really, really negative, impressively so, like 10x all the other experiments' lifts. Um, and the ones that were positive all the way to the right had a very outsized effect. There were like 10 experiments that corresponded to 90% of the results. And there's this law of sort of right-tailed returns. It's like, whatever it is, 1% of the people own 90% of the wealth, 1% of the companies own 90% of the net worth, whatever it may be. Those were money examples, but you can take any example. People, I think, optimize decision-making for a uniform distribution of returns. They don't take into account that a very small portion of the things they try will lead to the vast majority of the returns. So when you're thinking about deciding whether to launch a product, whether to invest in a company, whether to, I don't know, whatever else you do in your life, you have to know that there's a very small set of things that are significantly better than all the others. With that in mind, volume is really important, and testing, getting signal really quickly, and then getting out is also a very useful skill.
So I think that's a very underutilized concept in startups. Well, it's sort of ingrained in the startup community, but maybe in your day-to-day data science job, when you're thinking about a project, prototyping is really fricking valuable, because then you can know: all right, will this thing maybe lead to good results or will it not? So Ben, how do you think about incorporating that sort of dipping-a-toe-in mindset when you're an employee, as a data scientist or a machine learning engineer?
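Michael's Tubi numbers aren't reproduced here, but the right-tailed shape he describes is easy to illustrate. A small sketch, assuming a lognormal lift distribution purely for demonstration (the distribution parameters and seed are made up, not the actual experiment data):

```python
import random

# Simulate 100 experiment "lifts" from a heavy-tailed (lognormal)
# distribution and measure what share of the total lift comes from
# the top 10 experiments. Parameters are illustrative only.
random.seed(7)
lifts = sorted((random.lognormvariate(0, 2) for _ in range(100)), reverse=True)

top10_share = sum(lifts[:10]) / sum(lifts)
print(f"top 10 of 100 experiments: {top10_share:.0%} of total lift")
```

With a heavy-tailed draw like this, a handful of experiments routinely account for most of the total, which is the argument for high experiment volume and quick kill decisions.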
Ben:
I mean, one of the other aspects that Sam Altman talks about in that block of text is that sort of thinking outside the box, and yeah, that translates to a prototype. You're doing a spike, effectively. Hey, I've got four hours to kill, I've got this idea, I'm wondering if I can take this data and do something with it. And people that are pretty senior, that have been around for a long time, that kind of know how all this stuff works, they might do that three or four times a week if they have the time. If they don't have the time, they're still probably going to do it once a week, where they have this idea of like, man, I hate how this works, or, I bet people would really like this thing. Well, you can talk about your idea, but it's not as compelling as showing. And if you can show a prototype, a result of something, hey, I've got this idea that solves this problem, or I think it will solve this problem, here it is, what does everybody think? That's how a lot of new initiatives get started in companies: somebody doing that. And I've worked at places where people are like, well, it always seems like all the ideas are top-down from management. It's like, yeah, but the managers didn't build that prototype; the managers are busy managing, uh, they're in meetings all the time and doing one-on-ones with people. It was some principal or lead or, you know, some IC that built a prototype in an afternoon and presented it to executive leadership, saying, I think there's something here, can we devote some time to do this? And then management is going to be the one telling the team, hey, see if you can make this work. So you can do that even as a junior person. You can build something really quickly. And the way that I see that implode for some people is they want to boil the ocean on a prototype.
They're like, hey, I've got to make sure that this code is, you know, I've got to put it into modules, I've got to have classes built, I've got to do it so that if somebody sees it, they're going to be like, wow, this is really great, wonderfully done. If you saw any of the prototype code that senior people write, you'd be like, what is this script? This is such garbage. It should be a GSD script, 100%. And for those who didn't hear it a couple of episodes ago, that's an acronym: get-shit-done script. Um, nobody's, you know, validating whether your code is living up to the purity standards of what production greatness should be. Uh, you want to look at the output of what you wrote, and that's it. That's all anybody cares about. Now, if it gets picked up as a project, use it as a reference to make sure that you're implementing the same sort of logic. But don't use that code in your implementation. You should just be moving fast.
Michael:
Right. So what are your opinions on the fourth bullet? Do you think it's worth skipping or talking about?
Ben:
I think it's super important.
Michael:
Oh, okay. Interesting.
Ben:
It's important, man. Believe it or not. So, get good at sales. This doesn't mean get good at working as a salesperson for your company. Um, if you're in data science or engineering, you shouldn't even be thinking about that. What you should be thinking about is product: am I working on something that my customers want and need? Your customers could be external, or they could be internal, you know, other people at the company, other engineering teams, the data engineering team, or something. But whatever you're working on, you can be proactive or reactive. Reactive is: your team just sits there waiting to be assigned work. Your team's not really going to succeed, and you individually aren't ever going to really succeed, because that's not playing the entrepreneurial game. If you really want to be successful, you have to think about that. Like, how does my team shine? How do I shine by coming up with ideas and implementing new things? Now, the part of sales that is so critical is: when you come up with that idea, and you build that prototype, and you have something working that solves a new, novel problem that nobody thought about, how do you convince people? A lot of companies out there, our own included, are like, hey, let the data decide. And that is true to a certain degree, but we're all still human beings, and we interact with one another through the powers of speech and face-to-face communication, whether that be remote or in person. You can be the best AI researcher out there. You could be able to solve any problem with the fanciest algorithms, and everything that you produce within an IDE is just pure gold. But if you can't tell anybody how your idea works or why it's important, if you can't sell it to somebody, to say, this NumPy array that's coming out of my script, this is why this is going to be valuable for our company...
If you can't articulate that and make people believe, nobody cares. It could be the best solution ever. It could make your company a billion dollars, but it will go nowhere if you can't convince people. So learn communication skills, learn how to talk through a vision, and be able to do it on multiple different levels. There's the elevator pitch, which would be for upper management and executives: hey, you've got 90 seconds and no technical jargon allowed. And sometimes it's good to practice that, you know, talk to a mirror, pitch a crazy complex idea. Can you get it in under 90 seconds? And can you make it so that your grandma or grandpa can understand it? If you can, that's an elevator pitch. And then you have to be able to talk to middle management as well. So the director of your department, who's probably somebody who used to do your job 10 years ago: they know the tech, they know the terms, they understand it, but what they really care about is, is this going to work, and is it worth our time? So: business-focused discussion, team-focused resourcing. Those are the questions they're going to be asking you. And then, can you communicate this to the rest of your team? Can you do a peer talk, which is in the weeds, talking not about implementation but about what this is and why it's important?
Michael:
Right. Yeah, I think the underlying concept of all of this, which Ben started with, and which I was very happy to hear, is: figure out what the audience wants or needs, and then just provide them a solution. It's really that simple. And if you can do it in a fluffy way that's funny and charismatic, that's ideal, but usually people will listen to logic and reason. So if you have empathy, or get empathy from someone else who knows the use case, you can say, all right, they have this problem. We can solve it with X, we can solve it with Y, we can solve it with Z. Z is probably the easiest; let's pitch that. It's honestly that simple. And I've worked with a bunch of good salespeople at Databricks over the past year, and that's really what they do. They sort of reverse-engineer it. They say, all right, we have this product. What problem does this fit best within the organization? And of course it's sales, they're trying to make money, but it's a really interesting way they just find problems. And we work really well with them, because we actually are the ones solving problems; we talk to the engineers and we implement, that's my role. So it's really helpful if we say, all right, these are the six issues that this team is experiencing, and maybe you can use this to bubble up to execs and say, all right, we have this tool that can actually solve those six issues, what do you think? And then they can implement it across the organization. So yeah, starting with problems is really easy, and it's really simple, and it's really effective.
Ben:
Provided that you know how to do it. If you've never thought through problems like that, and you've always been reactive, that's such a foreign concept. Because people are thinking about their OKRs, and if their OKRs aren't aligned to problems that need to get solved, or if there is no structure in place and it's just left up to you, like, okay, we've got 10 people, what do we work on? Well, let's just refine these things that are already in production. Like, yeah, great. Nobody cares, unless it goes down.
Michael:
Interesting, yeah.
Michael:
Cool. So moving on to point five, which is very related to that Tubi experimentation story: make it easy to take risks. Now, I was reading this, and I don't really know where the advice is, other than the concept from that story that a small number of things will lead to the vast majority of the returns. But Ben, do you have any tips and tricks on how to make taking risks easier, from either a psychological perspective or a logistical perspective?
Ben:
For a company, I understand where he's coming from here, for like a startup, and it's valuable advice. But for an individual employee, talking about a career, making it easy to take risks means: don't hitch your wagon up to the wrong team of horses. A bad analogy for what I'm about to say, but say you were to pitch, hey, I have this idea that's really cool, it uses all this crazy tech. Like, hey, I think there's this really cool thing that we can do with integrating a large language model, and then an image generator, and then also, you know, text and audio processing. And we're going to create these awesome, you know, talking GIFs, and then it'll upload itself to a streaming service, and then
Michael:
Oh man.
Ben:
we'll give a link that'll be generated and people could use these. It's a cool idea, I guess. It already exists, but like, hey, our company makes razor blades. How is that relevant to what we do? The desire to follow the preceding steps, like, hey, I want to push myself and I want to have these cool ideas and I want to be able to communicate them, it really has to be a worthwhile risk, something that is relevant. And I know people say, hey, there are no dumb ideas or there are no stupid questions. I don't believe that. There are loads of stupid questions out there, and there are loads of dumb ideas. In fact, off of that extreme right tail of the distribution that you were talking about before, the left tail, or the left edge of that distribution, is where 95% of ideas live. If everybody had amazing ideas, we would already be colonizing other solar systems right now as a species. We would be so much more advanced. That's not how the world works. That's not how humans work. We make ill-informed decisions all the time. So to make it easy to take risks is to understand that you should take reasonable risks, and then set yourself up by time boxing the amount of effort that you're going to apply to that risk. If there's no boundary on your time, nothing converges. This is one of the reasons why research spikes in engineering are always time boxed: if they're left unbounded, like, hey, figure out how to do this cool thing that we want to do, with no deadline, you never get a signal. Instead it's, hey, you've got three days to figure this out and that's it. However much you can get done in three days, let's see the status of that. If you get nowhere in three days, that tells everybody something, and it should tell yourself something: this is really hard.
Like, we didn't understand or think about how hard this was. But if you get it finished in three hours and the spike is completely done, and you work on a design based on that, of how you would do the actual project, that's a win. Like, hey, great, let's do it. Um, but if you get like 80% of the way done at three days, you don't know where the tail is. So in order to get it completely done, it might take another two weeks. And that's like a sunk cost, because the longer that you work on something, the more invested you get in it, like, hey, I gotta figure this out, I gotta get this done. And you're just burning time, and time is
Michael:
Alright.
Ben:
the one commodity that we have that's fixed. So that's a really, really scary place to get to. But that short time block that you give to a particular risky endeavor, if it's just a day or two of trying to figure something out and you can't figure it out, it blows up? Fine. Move on to something else. No big deal. It's not an attack against you or anything. Don't take it that way. Just feel like, hey, I learned some things. The smallest amount of information that you can learn from doing something like that is that it's not possible with current technology, or at your company, or with the data you have, whatever it may be, or with you. Somebody else on the team might be able to figure it out, and maybe that'll happen; you were only given this short period of time. But that's the minimum amount of information you'll learn. The maximum is that you could learn dozens of different new techniques while going through it, and that's a win, even if the project that you were just researching is a failure.
Michael:
Yeah. For instance, I'm working on a time series transformer solution accelerator. Very, very fancy words, and basically what it means is building reusable code that allows you to forecast time series with the transformer framework. And I spent longer than I'd like to admit building data cleaning tools for this crap retail data set that I was using, because the dataset dictionary structure for the transformers library doesn't handle missing data very well. It's kind of a weird structure. And so I had to do a bunch of backfilling of nulls, and yeah, it was just a mess to transform that data. And yesterday I realized that the model doesn't even fit on the original data set because I was using a non-parallelized version of the model.
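As a rough illustration of the gap-filling Michael is describing, here is a minimal pandas sketch. The data and column choices are invented for the example; none of this is the actual accelerator code.

```python
import pandas as pd

# Toy retail series with missing days, a stand-in for the messy dataset
# described above.
sales = pd.Series(
    [10.0, 12.0, 9.0],
    index=pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-05"]),
)

# Reindex onto a complete daily calendar so the model sees a regular grid,
# then backfill the gaps (many dataset formats reject missing values).
full_range = pd.date_range(sales.index.min(), sales.index.max(), freq="D")
filled = sales.reindex(full_range).bfill()
```

The same idea extends to forward-filling or interpolation; which one is appropriate depends on whether a missing day really means "no observation" or "zero sales."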
Ben:
Nice.
Michael:
I don't know why it didn't fit, but I probably put in like 15, 20 hours making the data immaculate. And then I was like, the data is immaculate, what the hell is going on? So I tried it on the original data set. Still didn't fit. Smashed my head against the wall, and I was like, all right, let's figure out what's going on. And now it's working beautifully. But it's just, oh, sorry, that was a very frustrating aside that I wanted to share.
Ben:
But it's a good thing to talk about because you learned a couple of things there. And
Michael:
Exactly.
Ben:
one of those things, just from your cursory overview of that, here's what I guess that you learned. One, you learned about that library a little bit deeper. So you're like, hey, I know these APIs a little bit more, I kind of understand how it works. Two, you learned about that dataset API and how to use it. Three, you probably learned a couple of new things about backfilling data that maybe you've never tried before, or at least you know it a little bit more now. But the most important thing that you learned, number four, is always execute the example code that is provided with an API library before trying your own data. Because that sets the framework of, oh, I know how this works, before I start doing anything else. So even if that project becomes completely useless, you've learned those four major things.
Michael:
Yeah, no, so I did. And it's supposed to train for 40 epochs. I ran it for about five epochs, and I was like, oh, it's training. Then it produces a fit and then it produces forecasts. Great, so it must be working. It wasn't fitting. It was just producing a straight line. And
Ben:
Nice.
Michael:
one thing that I will never, ever do again, under any circumstance, is have bad steps between my iterations where there's potential leakage of concepts. So in this example, I moved from the Google Colab notebook into the Databricks environment, and I didn't fully test that it worked on Databricks. It produced outputs, but I didn't test that it produced valid outputs. And that incorrect step led to all of my downstream steps being incorrect. And so, lesson definitely learned, and I will be more careful in the future.
Ben:
Definitely. I couldn't even begin to tell you how many times I've done stuff like that. It's something that I think you have to do a couple dozen times before it really sinks in as a process. It's not something you would document, like, hey, before I start a project, I need to make sure that I do these 38 things. It's just something that you end up learning. And I've found I've learned it faster over the last two years or so implementing solutions for backend frameworks, because it's such a frequent thing. Like, hey, I gotta go figure out how this package works. Never used it before. Have no idea how any of this works. Um, and now, when I know that I have to start interfacing with stuff, I've always got a webpage open on one of my screens that's just their API docs. And on another one I'll create an interactive environment, or, you know, a very quick, simple test framework where I can execute things that I need to validate. Like, hey, the docs say that this does this; I'm just going to verify that real quick. And I'll run it, check, print the output to standard out and look. Or, if it's something really complicated involving a bunch of steps, I'll write a simple unit test. I just write the test first and say, hey, assert that this is of this type, and all the numbers aren't the same, and things are outputting in the way that I'm expecting. And every time I'm messing with a new part of that API, I'll just run it through that. And it saves me.
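The write-the-test-first sanity check Ben describes might look something like this. The function name and the specific assertions are made up for illustration, not from any real project.

```python
def validate_forecast(values):
    """Quick sanity checks to run every time a new part of an API is
    exercised: right type, right element types, and the numbers aren't
    all identical (a constant output often means a model silently did
    nothing and is just drawing a straight line)."""
    assert isinstance(values, list), f"expected list, got {type(values)}"
    assert all(isinstance(v, float) for v in values), "non-float values"
    assert len(set(values)) > 1, "all values identical: suspicious output"
    return values

# Run the check interactively before building anything on top of the output.
validate_forecast([101.2, 99.8, 103.4])
```

Note that the "all values identical" check would have caught the straight-line forecast Michael mentioned a few minutes earlier.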
Michael:
That's smart.
Ben:
It's not like it saves me a couple of minutes here and there. It saves me days of rework, just by setting all of that stuff up at the start of a new project.
Michael:
Right. Yeah. It's creating the system that you can iterate within, and making sure your first principles really are first principles, that there's no faulty information there. Because I think iterating is not that hard, if you're generally intelligent and you generally know what's going on with the stack you're using. Being like, all right, I should try this next. Maybe that worked, maybe it didn't; then try this, then try that. The worst is when you're iterating under false pretenses, and that...
Ben:
Or the
Michael:
Psychologically
Ben:
worst.
Michael:
too, that shakes you to your core.
Ben:
I think the even worse thing is not checking while you're working.
Michael:
What does that mean?
Ben:
I've met a few people in my career who I've seen work as they're developing something, both on the data science side and the software engineering side. I've seen people open up a VI editor, they start Vim, and they start writing, in a plain text document, some Python or Java or Scala, and they'll write an entire module blind. And then they'll close that file, save it, open another module, start writing in there, and nowhere in that process did they write a test. And for data science, that's understandable. You know, tests are usually integration tests, and they're usually at the end of a project. If you're just using standard libraries, you don't need to verify that the transformers time series library can accept a call to execute training. You should hope that their APIs are tested on their end, and they are. But you should be doing data validations on what's coming back. If you test when you think you're done and you've written 2,500 lines of code across 50 different functions, where do you start to test? If you wait until you're done and you say, all right, I'm going to run it end to end, see how good I am, see if it doesn't throw an exception, and you look at the data coming out of that, you've basically created, out of your own code, a situation that's very similar to supporting somebody else's code where no tests exist. Because you now have to debug
Michael:
Yeah.
Ben:
the entire thing. You have no idea what happened. And then some people are like, well, you've got a debugger in an IDE. Have fun with all those breakpoints, figuring out where to set them, tracing through where the memory addresses are and what the values are. It's just a huge waste of time, in my opinion. It's simpler to just say, okay, most software that we write these days is procedural, the vast majority of it. I know a lot of people say, oh, I use functional programming. Very few people use pure functional programming. Why? Because it's super hard to write that stuff. Stateless applications are incredibly challenging to write. And I think most people that say they write functional programming are just writing functions, and they think that's functional programming; it's not the same thing. Or, like, I don't need to test until later on because I need to create this object first, so I'm going to define my class and then I'll test it later. But if you wait till the end, how do you figure out where it's breaking, or why it's breaking? Or did you just go down a path where you wrote, you know, 500 lines of code that you now have to delete? It probably took three or four hours to write those 500 lines of code, and it's all trash. It's wrong, or, you know, so
Michael:
it.
Ben:
I'm very much a proponent of test as you go. And regardless of the paradigm that you're using to write your code, it's all, at the end of the day, generally procedural, or it should be; your unit tests should be procedural in order to test units. So why not set up a little environment where you can validate and check as you go, so that you don't go off the rails?
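The test-as-you-go style Ben is advocating can be sketched like this: write a small function, assert on it immediately, then move on. The function here is a made-up example, not code from the episode.

```python
def normalize(values):
    """Scale a list of numbers into the 0-1 range."""
    lo, hi = min(values), max(values)
    if lo == hi:
        raise ValueError("cannot normalize a constant series")
    return [(v - lo) / (hi - lo) for v in values]

# Checked right after writing it, not after 2,500 more lines of code.
out = normalize([2.0, 4.0, 6.0])
assert out == [0.0, 0.5, 1.0]
```

If the assertion fails, the bug is confined to the ten lines you just wrote, which is the whole point: debugging effort stays proportional to the size of the last change.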
Michael:
Yeah, just to beat this topic fully to death, it's sort of like building a house. You have layers of bricks, and you want the brick below your current brick to be good. You don't need it to be a diamond-hard brick, but you also don't want it to be a Play-Doh brick. You want it to be somewhere in the middle, where you can trust it, and then you're good enough to move on. So it's sort of finding this balance.
Ben:
Mm-hmm.
Michael:
And so we have a few more topics. I'm going to run through them, and let's just pick one more. But again, this blog is pretty cool; you should check it out. Um, so the sixth topic is focus. Seventh topic is work hard. Wow, talk about
Ben:
ABS.
Michael:
clickbait. Eight is be bold. Should be on a t-shirt. Nine is be willful. Also should be a t-shirt. 10, be hard to compete with. That's, I think, more of a startup mantra than an employee mantra. 11, build a network. Yeesh. 12, you get rich by owning things. If you wanna get rich, that sounds pretty cool. And then 13, be internally driven. So Ben, did any of those stand out to you as a good final one?
Ben:
The last one is probably the best advice that
Michael:
Agreed.
Ben:
anybody in a technical pursuit can follow. In these professions that we work in that involve computers, I've heard people say, oh, it's full of a bunch of introverts, and I think that's garbage. There's plenty of different personalities that work in software development and data science and stuff. However, there is an overriding factor, regardless of how somebody's core personality comes across, in that people are usually hyper-focused on projects. That's how we're measured. That's what we're paid to do. That's what we chose these professions for: to build cool stuff. Maybe it'll never see the light of day, or maybe it'll be the next big thing. It doesn't matter. Most people that are passionate about engineering work are passionate about the process and the people they work around, just working together to build cool stuff. The people that are in it just for the money never stick around, because it's hard. You get burnt out, and if you don't have a passion for it, these professions rapidly become a regretful anecdote in somebody's working history. They're like, oh, you used to be a machine learning engineer at this company? That's impressive. Why do you own a coffee shop now? It's like, because I was doing it for a paycheck and that sucked, sort of thing. The people that stick with it and are really passionate about it are in it because they like seeing that project work. That's what we're all focused on. And the people that do it well have the resiliency to push through the almost constant adversarial relationship with progress that building the things we work on entails. It's like the universe is constantly trying to tell us, stop doing this, this is not going to work,
Michael:
Yep.
Ben:
and we're fighting against that. So if you don't have that drive to overcome it, to continue to solve the problems and to continue to get better at what you're trying to do, or everything that came above this on the list about having that internal drive, that passion for wanting to get better at all of this stuff, it's never going to work out. You know, you're going to stagnate, or you're going to be aimless, or you're going to get dejected and quit. But if you want to stick with it and you are passionate about it, you gotta have that drive. Nobody else is gonna do it for you.
Michael:
Yeah, I mean, there are a bunch of different ways that you can stay motivated, I would say, and leveraging your psychology is relevant. Like, if money does drive you to be really effective, and I think that's really rare, maybe you can leverage that. If insecurity drives you, maybe you can leverage that. But hopefully you have more warm and fuzzy motivations: I like the people I work with, I like innovating, I like writing code. Um, for me, I like the process of working. I really like getting into flow, where hours pass and something is produced at the end of it. That's a very attractive concept to me. And so leveraging your own psyche and knowledge about yourself is really important. But, uh, if you're doing it just for the money, there are probably easier jobs that pay similar. So.
Ben:
I mean, they might be way more work, but it'll be a different type of work.
Michael:
Exactly. All right. Well, again, check out the blog if you're curious, but I will wrap. So today we talked about a blog by Sam Altman called How to Be Successful. And every single bullet was pretty much a clickbait title, but there's actually interesting insight in each of those points, or at least in most of them. So some of the things we talked about today, tips for being successful: first, volunteer for stuff, and don't volunteer blindly. You should volunteer for stuff that's a bit higher in complexity than what you're comfortable with. So have confidence, but don't be stupid. You should also be able to pitch to an audience. Typically people think about selling; well, an important component of selling is finding a solution to a problem and understanding that different people care about different things. The execs want the high-level elevator pitch, middle management might want something in the middle, and engineers really care about the technical details and want to know how much this will improve day-to-day processes. They don't really care about the money, or maybe they do. And then finally, it's really important to get good at dipping your toe into something, so building prototypes. Some ways you can be effective at that: time boxing to make sure that stuff doesn't run over, using GSD scripts, get shit done scripts, and Ben's personal strategy of having a browser open with API docs and an interactive environment where you can explore and see how stuff works in real time. So did I miss anything?
Ben:
No, it's good.
Michael:
Cool. All right. Well, until next time, it's been Michael Burke
Ben:
and Ben
Michael:
and
Ben:
Wilson.
Michael:
have a good day, everyone.
Ben:
Take it easy, see you next time.