Navigating Expertise Gaps - ML 172

Show Notes

 In today's episode, Ben and Michael discuss how to handle situations involving individuals lacking expertise in machine learning projects. They explore scenarios where a team lacks expertise, considering approaches for consultants or team members. They discuss various personality types encountered in such situations, including those overly suspicious or resistant to change. Moreover, they discuss how to convince a boss that a proposed project is a bad idea, suggesting a structured approach with clear estimates, risk assessment, and alternative solutions. They emphasize the importance of honesty, transparency, and presenting options with clear pros and cons. 
The discussion then returns to the Gen AI time-series case study, suggesting a presentation of multiple options, including established algorithms and the Gen AI approach, to facilitate a data-driven decision.
Finally, the episode addresses the scenario of a teammate who is ignorant about a system they built, suggesting a combination of direct but constructive feedback and a collaborative approach to identify the root cause of the issue.

Transcript


Welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Burke, and I do data engineering and machine learning at Databricks. I'm joined by my amazing cohost, Ben Wilson. I write SDK proposals for open source projects at Databricks. Today, we don't have anyone else.

It's just us, a panelist episode, and we're very excited about it. At least I am. I know Ben is as well, and you should be too. And here's why. Today's topic is how to deal with ignorant people.

This might be a little bit cathartic for at least me. I've been dealing with very kind, lovely, talented, and amazing people who might not know one thing that poses a bit of a challenge specifically to me and my work. But, of course, everybody at Databricks is perfect. So, we came up with a few case studies that we thought would be interesting to cover. The first is how do you enter into a greenfield project with a team that might not have expertise on the project?

There are a few angles. We can take the consultant angle where you're actually placed into that team to help or to lead. We also wanna talk about if you're a member of that team and you're given a really tough proposal where no one on the team has expertise. So, Ben, what are your thoughts? Having lived in both worlds, at at this current company and the previous jobs as well.

From the the consulting or teaching aspect where you're you're brought in to work with with a group of people that have no context on some directive that's come down. Like, hey. You guys deal with data. You analyze stuff. Figure out how to, like, predict the future.

And then everybody's just, like, you have, like, different personalities in a group like that, or at least from my experience. I've seen, like, there's the overly ambitious, you know, generally a younger person who's like, I can do this. I'll look up on Medium, and I'll get a blog post, and I'll I'll just, you know, copy some code and and make it work, not knowing that, like, hey. That's that tutorial that is shown is ultra high level on highly curated data. And in order to reduce somebody's propensity to navigate away from that, they leave out all the complexity that actually is involved in that so it's not to overwhelm a reader.

And then you'll have somebody in the in the group that's probably set in their ways. They they know what they know and are unwilling to try to push themselves to learn something that's very foreign to them. And then the rest of the group is probably from my experience of dealing with groups like this, and having been on teams like this, everybody else is either just checked out, are terribly afraid of this concept of what they're about to be tasked with, or just completely ambivalent. They're like, okay. Another project.

Alright. I'll do whatever you tell me to do. Got it. So a lot of different thoughts there. Let's start within the team component.

One of the key takeaways is don't use Medium as your primary source of information. But let's, like, go into a case study, I guess. So, I recently have been doing a lot of scoping for ML stuff, and, well, actually, I've had multiple projects that involved putting Gen AI into a place where it potentially didn't, like, belong. One of them was time series forecasting. And, actually, I had 2 that were time series forecasting.

And so I would pose the question to you, Ben. If someone says, hey. There's this cool thing called transformers. It's fundamentally sort of autoregressive. It it maps relationships between sequential pieces of information.

It should work perfectly for time series. Right? And because of that and because ChatGPT is super cool, shouldn't we build a transformer-based time series architecture? So let's say that is the logic of your esteemed boss, and you are part of this team. What is your approach to either building it or not building it?

I mean, my approach just to to be very clear upfront, my current leadership chain would never ever ask something like that. Like, never. Not even, like, joking would they ever say something like that. One reason is it's it's a ridiculous question to ask to somebody who's gonna be potentially designing and building something. The second thing is is nobody in our management would ever tell a senior technical person implementation details.

They would present a business problem and then leave it up to the nerds, us, to figure out how to solve that problem. And we would come with a proposal that they would review to make sure that we're thinking through the business problem correctly. And then we would do technical design, which they would also look at and pose a bunch of intelligent questions that could be just them playing devil's advocate to make sure that we can defend our position. And then once everything everybody agrees, then we go and build it. And they don't care how we build it.

They just want the business problem solved. I'd say that is an example of highly effective technical management, and that's sort of the gold standard, in my opinion, of how organizations should operate for technical solutions to problems. But setting that aside, if they if we're in bizarro land and my my manager actually asked me to have the team do that. My first my first thing would be to take them aside into a private one on one meeting and ask a lot of questions of, like, why are we doing this? Why do you think that that's the technical direction we should go down?

And why are you trying to tell me how to build something? Because that's weird. But after that conversation, if it was very much a just go and do it, do as you're told, I would probably update my resume. But I would go out and I would ask myself a bunch of questions that are part of things that I need to check boxes in my own head to figure out if this proposal is the right way to go. One of those things is actually searching Medium, because if there are posts out there where people are touting that this actually works, then maybe there's something to it.

That doesn't mean, hey. This is legit, and I should do this. It's okay. It seems like there's a lot of people talking about this, and I'll search other places too. Like, look for references.

And I'm scanning through that stuff not to look at their code or look at how they built it. I don't care. I'm looking to see what libraries they're using, and I'm writing those down. And then I'm going to those libraries or whatever they're referencing and looking to see how many times these examples from the people that created the library are out there. So if it's buried in some, like, obscure location in their docs or it's in their repo in some, like, experimental example thing, I'm like, right.

This wasn't built to solve this problem. Maybe somebody's just has figured out a way to kind of make this work. That's a red flag for me. Or maybe it's the library is designed 100% for this particular task. And if that's the case, I wanna see where it comes from.

So a good tip for people out there: when you're looking through an open source repository, in the initial README, if you can find a reference to the white paper before the fold of the README, like, is it on the first page, is it highly visible within the actual description of the project, "This is based on the implementation of this paper," that's a huge red flag for me. Because if that's what you're using to kinda market what you've built, this project is probably coming out of a PhD program or, like, somebody's dissertation, which means, generally, it's not production ready. Because if it is production ready, they're gonna create a new version that'll have the white paper in the footnotes where it belongs, at the very bottom of a README, as a notification of, like, what this thing is, not as the advertising aspect of it. Because if it's for production and you want people to use it, you're gonna state what the tool does and what its features are and how to get started with it, not, this is based off of my paper.

You know? It's kinda scary when you see that. I'd also be looking at other usages of it. So, yeah, you can look at stars, you can look at downloads from, like, open source trackers and stuff. And then you can also look for examples of its use just in general online. Are there tons of people using this?

How many instances of this are in publicly accessible GitHub repos? Are those serious repos? Are they do they come from big tech company, or is it, like, somebody's pet project that they're just monkeying around with? So you wanna try to determine evidence of other people using this approach. If you can't find any, then go back and figure out, like, what do people use to solve this problem?

Is it written down in a book somewhere? Can I get that book? Can I read it? Is it something that has well established patterns of how people solve this problem? If so, go research that and figure that out and see what is most commonly used.

You know, look at the law of averages. If if most people are using some approach to solve this problem, it's lower risk for you to, you know, follow the herd. Yeah. I so I did a lot of this work at my prior role where we would go and read a bunch of white papers and try to basically innovate in the space. But innovating without a baseline is really, really hard.

And so often, we would just implement things that larger, more research oriented organizations had published, like, I think our examples were Airbnb, Netflix, Booking.com, which has really good experimentation, Google, and then a little bit of Meta, and leverage those implementations in our own work to solve our own problems. And then we would tweak it a little bit as needed. But that's a really high ROI solution. And often the tech blogs for these companies are really great sources as well, and a lot of them are on Medium. So, I completely agree. What you don't wanna do is find a non tech adjacent company with some, you know, tech organization blog referencing this implementation, and then nobody else references it.

And but I I've seen that. I don't know how many dozens of times working with people. I'm like, where did you find this? They're like, oh, this cool blog post from this company. Like, that's a startup that has, like, 10 employees, and they just happen to hire the person whose PhD was based on this tech.

So, yeah, they're using it because this person was the one that was hired, so they're using something they already knew. It doesn't mean it's any good. And maybe that library is just full of problems, so it or it won't even work on your data. So, yeah, it's risky. Right.

Now what if you are not part of this team and you're coming in as an external consultant? Let's assume you're generally benevolent and you're not hated and it's a normal working environment. I would not modify that approach, and in the past I never did. I just start asking why and try to backpedal from the implementation. Instead of talking about implementation details and, like, how we're gonna solve this problem, just start asking, like, what are we trying to do here? What problem are we solving?

And once we get that all nailed down, then we start thinking about ways to solve it. But not just gravitating towards this one thing, like, oh, our CEO said that we need to use GenAI. Like, cool. And then and how does that relate to this business problem? What what is the problem?

Well, our CEO said that we need to use GenAI to do forecasting. Like, that's an implementation detail put into a business problem. Like, that they have nothing to do with one another. If you wanna get it out there that you're using Gen AI for something, use Gen AI for something that it's really good at, that it's proven to lower your own risk. I'm curious.

What's your MVP Gen AI project? Because I've gotten this this question at least 50 times. Like, what we need to use Gen AI. What should we do? What's the answer?

Intelligent information retrieval. It's the number one thing that I think is a huge selling point for businesses to use Gen AI. So, like, an internal chatbot type of thing? Yeah. If you want something that you can build in a week and get into production in another week, provided that you have the budget for it.

Find the best LLM you can for whatever your budget is. Get a good vector search engine. Index the heck out of all of your internal documents that don't have sensitive information, and then hook those up together in an agentic framework, and then deploy it. Make sure that you can handle request volumes, and make sure that it's thoroughly tested. And if you can do that in 2 weeks with a, you know, relatively small engineering team, you're gonna get massive ROI.
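To make that recipe a little more concrete, here is a minimal sketch of the retrieval piece, assuming the sentence-transformers library for embeddings, an in-memory cosine-similarity search in place of a dedicated vector database, and a placeholder call_llm function standing in for whatever hosted model fits your budget. It illustrates the idea, not a production deployment.

```python
# Minimal retrieval-augmented QA sketch (illustrative only).
# Assumptions: sentence-transformers is installed; `call_llm` is a stub for
# whatever LLM provider you actually use.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these are chunks of internal, non-sensitive documentation.
docs = [
    "To request a new laptop, file a ticket with IT under 'hardware'.",
    "Quarterly planning docs live in the /planning folder of the wiki.",
    "The on-call rotation is in PagerDuty; escalations go to the team lead.",
]
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's chat/completions client here.
    raise NotImplementedError

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```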

For 2 weeks of effort, the amount of time you're gonna save your organization with I don't think people understand how many times during the day so many people just need to get information about how something works or, you know, all of the docs that a company that has been around for more than a couple of years are gonna have available. And being able to index all of those and actually have contextual, you know, question answering capabilities is completely game changing for any organization. And it's gonna be cheap. Like, you you're gonna get, what, a 100 requests an hour maybe, maybe 500. It's not gonna be like, oh, I need this, like, massive deployment that I have to maintain.

And it it just it provides value from day 1 too. Right. Okay. Cool. Alright.

That's my answer as well. But going back to this scenario of the greenfield team. So you come in and you're an external consultant, and you do all this, sort of a lit review essentially, figure out what's been done, what can be done. How do you then convey that information in an effective way to that team? Statistics work pretty well in my experience.

So it's hard to argue with data. So you can do it with spending an hour or 2 and collecting the information that we just talked about. Like, hey. You wanna do this thing? Here's how many references that seem legit that I found.

I timebox it to a 30 minute search. Set a timer. Really, 30 minutes, me and Google, working through, trying to figure out how many good references I can find in 30 minutes, and then I take 30 minutes to do the next, like, alternative to that. And the results of those, that's something you can also pass on to people that you're talking to. Like, hey.

If you don't believe the way I did this or my methodology, here's what I did. You go and do it, and see if we come up with something that's kind of similar. Usually, when you are trying to convey this topic, like, should we use this tech for this thing, and you go through that process, most people are gonna come back to the table the next day and be like, yeah. You're right. Okay.

Let's explore these other things. Right. Okay. So prototypes with North Star metrics, essentially. Not so much building a prototype, just, like, researching how many... Okay.

Just researching how many people are talking about these things. Like, can you find legitimate references of like, are people saying they're using this tech to solve this problem? Can you find it? And it just when you find one that seems legit, paste the link and put a a green box next to it. When you find something that's nonsense, paste the link, red box next to it.

Count up the green boxes as a ratio of total boxes. Heard. Say, like, hey. I got I got 3%, you know, catch rate on this idea versus option 2 has 90%. Right.
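That tally is simple enough to keep in a spreadsheet, but here is the same idea as a few lines of Python, with placeholder URLs standing in for whatever the timeboxed search actually turned up:

```python
# The "green box / red box" tally from the timeboxed search (placeholder URLs).
references = [
    ("https://example.com/vendor-docs", True),      # seems legit
    ("https://example.com/random-gist", False),     # nonsense
    ("https://example.com/big-co-eng-blog", True),  # seems legit
]

legit = sum(1 for _, ok in references if ok)
print(f"{legit}/{len(references)} legit references ({legit / len(references):.0%})")
```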

Which one should we go with? Clear. Alright. So that covers, I think, the team aspect. Now what if after all of this hard work, you've spent a reasonable amount of time trying to build prototypes, do your lit review, try to see if other people are solving this problem with a certain piece of tech.

You conclude that this is just a terrible idea. How do you go back to your boss and say this is a terrible idea? Let the data decide. You know? Show that.

Show your findings of the those ratios, and then explain what the risk would be, like and start breaking it down from a high level estimate of how much work is involved in approaching these different alternatives. And that's what a tech lead is supposed to do when they're evaluating this stuff. It's, like, I estimate option 1 is gonna take, you know, 10 person weeks to get something that'll be at a prototype phase. Option 2 is 2 person weeks. And then to get a prototype going, you break all of that down, and then you start assigning pros and cons, like risks associated with things.

Like, hey. Option 1 that you wanted us to do, super risky. We don't know if it's gonna work. Now there are plenty of times that this happens. When you're at a company and you have something that's already working and you wanna do, like, greenfield research, like, test out, like, hey.

Is this like, could we potentially use Gen AI to do this? It's fine to do stuff like that. Like, just go and prove it out. But that's a spike. Right?

It's like you timebox an amount of effort and R&D time if you have the budget for that, and you have a bunch of free time to devote to this. Take one person who can do that methodology, that you trust, and have them work on it for 2 weeks or something. And at the end of that, they provide a report of what their findings are. For this particular use case, the report is probably gonna look pretty dismal. This is not really designed for that.

But have people kind of made it work on highly curated datasets that they have, like, handcrafted? Oh, yeah. You know? People gotta get those those papers published. Are they peer reviewed?

Do people do like, do big tech companies pick this up and start doing it and then start talking about it? No. Because it's nonsense. But you'll know after 2 weeks, like, yeah. This like, we spent 2 weeks.

We learned some stuff. We have some examples. We learned more about Gen AI capabilities. We learn we have all these, like, API examples that we can use for other projects. Like, we've it's valuable time spent, and you have a report at the end that can be used for the next person that comes up with this idea to be like, yeah.

We already looked at that, and, no, it doesn't work. Yeah. I can definitely echo this. With Databricks, we professional services, we're given statements of work or SOWs, and often customers want to deviate from the SOW. Maybe some requirements in the business have changed or they see a shiny new feature that they want.

And it's a real challenge because we have a fixed budget of the number of hours we can actually be putting into this project. So a really good way to push back against that is just to say, we have this fixed amount of time. What do you want us to remove from the SOW to incorporate this new feature? And, likewise, with a boss, we have a fixed amount of time per person times number of employees times number of weeks for this sprint, let's say, or whatever it might be to get this done. What do you want us to remove?

And there's oftentimes the spillover as well where, like, it might not even be able to be done in a slightly reasonable amount of time, and then it's a no brainer. So provide work estimates as the baseline or, like, the unit of ROI. By definition, that is the investment, the I in ROI. And it's really, really effective for decision makers because then they can do a simple calculus of, is this worth 10 weeks of 5 people's time? Yes or no. Mhmm.
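As a back-of-the-envelope sketch of that calculus, with made-up numbers: the fixed capacity is just hours per person times people times weeks, and any new ask has to displace something inside it.

```python
# Illustrative capacity math; all numbers are made up.
hours_per_person_per_week = 32   # leave room for meetings, support, etc.
people = 5
weeks = 2                        # length of the sprint / engagement window

capacity = hours_per_person_per_week * people * weeks   # 320 hours total

committed_work = 280        # hours already allocated to the SOW or sprint plan
new_feature_estimate = 90   # hours for the shiny new request

overflow = committed_work + new_feature_estimate - capacity
if overflow > 0:
    print(f"Doesn't fit: {overflow} hours have to come out of existing scope.")
```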

Big fan. Cool. So we've done team. We've done boss. Let's do super boss.

So you have a politically charged situation, and, going along with the theme of ignorance, let's say someone else on a different team doesn't know what your team does per se, and they vetoed something that you really need to get pushed out. So how do you circumnavigate that by going up a few chains of management and influencing them to override the other team's decision? That's the important thing about influencing organizational change. So if you're calling, you know, big mom or big dad, let's just call that, like, the C-level suite, the people who can just assign a directive and people will do what they just said because that's, like, company vision. I can tell you the worst way to go about it is complaining.

If you just whine about the fact that you can't get this done because this person said no, I mean, if you go to the C suite with something like that, they're probably gonna tell you to shut up in no uncertain terms. You're not gonna get the result that you want because nobody wants to hear that. You've just created a problem for somebody who doesn't have time to solve that problem. Now they need context.

They have to do a bunch of, you know, chats with, you know, multiple parties. They have to waste a bunch of their time in order to figure this out and then make a decision. So you're you're creating multiple problems for yourself when you do that. First one is you're not gonna get a quick answer. Second of all, you're gonna look like you're helpless.

And thirdly, you're gonna annoy people up the chain of command because you involved somebody who's now gonna be like, what's wrong with your team? Because whether you know it or not, your manager is gonna hear from their manager, who's gonna hear from that C-level person, like, why is this now a problem? Why are you not managing this correctly? Like, why didn't they go to you to solve this? So you can't do something like that and have it play out well.

What you can do is, when you present an issue, don't even mention that somebody doesn't want you doing what you're doing. You propose multiple solutions with pros and cons for both, or come up with an idea that simplifies something where, in order for that to work, it requires this other thing, the thing that you're requesting, to be in existence. And you send it out publicly and make it very benign, not so much benign, but very positive in nature. Like, hey. We'd like to do this initiative, which solves all of these problems.

Here's some alternatives as well, and be very honest. Like, you better not make stuff up when you're doing that. Just present the the facts, but you give options there and the option that you want to happen, and it's gotta be something that's valuable as well. But when you make that presentation, make it so that it's almost common sense to go with that. And the the honesty aspect is super important with this, because if you're not being honest and haven't done the the presentation of ideas that makes this so compelling, people are gonna look right through it.

Be like, yeah. You you've you didn't do your homework. You didn't think about this other aspect. So you have to it has to actually be a good idea. But if you present it in a way where you're giving these different options and then presenting it focused on the problem that you're trying to solve and how it's gonna, you know, impact the company or impact your your part of the organization, you're probably gonna get your way if you present it in that way.

So even if there is resistance and somebody chimes in, you know, on this email thread or something, and they say, no. We don't wanna do that. This isn't the right way. Like, this flies in the face of our team's directive, and we can't support this. They now are becoming that one, like, negative complainer in this thread, and an executive is gonna look at what the original proposal is and see, like, oh, this seems reasonable.

Why is this person complaining so hard about this thing and being very negative about this proposal? It seems like it's a good idea. Just go and do that. So that's one way to just sort of solve that problem for you. It's a very public way of doing it, but you can do it in a way that, you know, usually that person who is very negative is gonna come back and be like, yeah. We'll work on this with your team, and then you'd be very gracious to them.

It's like, thanks. We're really excited to to work on this together, and everybody wins. So that's my my take on that. So taking this back to the Gen AI time series thing, the way that I would handle that in that exact scenario, if it is the CEO who's requesting this and demanding that we use Gen AI to solve time series forecasting. My response would be the presentation of the options to solve the time series thing that have nothing to do like, maybe Gen AI is one of the the proposals there, but then put some other proposals.

Like, we have this thing running right now. It works fairly well. Here's its results, and here's here's a snapshot of its latest run and how well it's performing. And then here's option 2, which is improving that, the existing, by using maybe another algorithm. I did a prototype, and here's what the projection looks like that we have, like, 4% increase in in accuracy or whatever it is.

And then option 3 is the thing that was proposed. Like, we're gonna use Gen AI, and here's the results of that, and we estimate that this is gonna take this amount of time. And then after that, so we're addressing the business problem with potential solutions or avenues of action, and then breaking apart that tight coupling of business problem and implementation directive into, you know, being able to read between the lines with the problem. It's like, okay. The CEO wants to use Gen AI, wants to say that they're using Gen AI because that's a marketing ploy right now, so they wanna seem relevant.

Well, create another section below that proposal on solving the original business problem, on how we can effectively leverage Gen AI in our business, and come up with 5 separate proposals for that that are business focused. Like, here's product related things that we could use this for. And that might be you having a brainstorming meeting with the team or talking with other people in other departments. Like, think, you know, play some jazz. Think outside the box.

Think, like, how you could leverage this, and then codify those as distinct projects of how to use this technology. How do you determine what the executive cares about? Just asking them? I mean, this one's pretty obvious. Like, if they're saying we need to use Gen AI to solve our forecasting problem.

They went to a conference somewhere. Somebody was talking about leveraging Gen AI for unique use cases and how amazing it was. They don't have the technical acumen to know that that other person talking is completely full of BS. So they just come back to the company after their little chat, and they're like, we need to do this too because our competitor's doing it. It's like, first of all, no.

They're not or not successfully. And second of all, okay. You want bragging rights about using new tech so that you seem, you know, like, one of the cool kids. Let's do that and see how much like, how long it's gonna take. Got it.

Sometimes it's a little more nuanced, but anytime I hear a we need to solve this problem using this, you know, this particular tech, and it comes from somebody who is not in tech, I'm very skeptical. Cool. Yeah. I really don't have a ton of stuff to add because I haven't done this much. I always come in as the external person at Databricks, and at my prior role, I was pretty low level, so I implemented a lot of really cool and really technical things, but I never was a thought leader.

And now, again, in the Databricks role, you can just be like, that's dumb. Why are you doing this? And they will usually listen. And if they don't, you you phrase it a little differently, and then they start listening. But that makes a lot of sense.

Is influencing bosses' bosses' bosses the best way to get someone on a different team to do something you want? Definitely not. And you should only do that if you understand all parties involved. You know these people personally. Mhmm.

And you understand, and you're at a level where you can actually initiate that conversation without stepping on the toes of your own management chain. Got it. So you have to be you have to have some like, be in a position where people trust you, and you can't just be like some random, you know, hey. I'm a junior engineer, and I've got this great idea. I think our company is going in the wrong direction.

Have a chat with some people first. Like, running up the chain. Most of the people that are above you are gonna have years, if not decades, more experience, and they can help guide that conversation with you. That's the best way to do it. And I think the most effective way of getting something done is you always notify your management chain before initiating something like that.

And having conversations with them, they're like, hey. This is what I'm planning on doing, or do you think this is the right move? Or, hey. Can you interface with your counterpart on that side? Let's come to an agreement.

The lower level within a corporate hierarchy that you solve a problem, the faster you're gonna get a resolution, the happier everybody's gonna be. And, you know, it's just gonna be a better experience all around. But if it's it depends on who it is on the other side who's gonna be throwing a wrench in your plans. Like, hey. This person is gonna block our ability to do this.

Well, that's when you're like, alright. What level in my chain could override this person? And you realize that nobody can except for c suite, and that's when you have to come up with creative ways to do that. Got it. Alright.

One more question on the executive front. So I am currently trying to get an initiative at Databricks approved, and I think keeping it anonymous is the best move right now, but it's hopefully gonna be very public. We'll see. It's probably not gonna work. Let's be clear.

But it's something that I'm really passionate about and I enjoy. And what I would need to have happen is people all the way up to C Suite, essentially, maybe a level below, maybe, but probably C Suite, approve field engineers spending less time on customer work and spending time on another initiative. In that scenario, this is just for, like, my personal vision and also I think it benefits a lot of people involved. I've created a list of the people that are benefiting, and literally every party is benefiting except for the fact that we have less time to devote to customer work. So in that scenario, what are your thoughts?

Like, how would you go about approaching influencing executives for your own personal mission? And let's be clear. I'm also very low on the food chain in this discussion. Yeah. That's gonna be a tough one when you're talking about capitalism.

Right? Mhmm. So it's directly impacting the business if you're talking about human hours spent solving something that could be making money. So if you're taking away from that, you'd better be replacing the value with something else. You need to replace the value with something that is significantly higher than what that amount of human hours would normally be producing.

And it has to be higher because you need to warrant changing inertia? Yeah. I mean, it would be a whole new sort of company project that would need sponsorship from the C suite. And you would basically need to create, like, a business plan of what this thing is. So, basically, like, a startup pitch.

Like, hey. This is my vision. This is how it's gonna impact. This is all of my data associated with this. If I was to estimate the amount of work involved in proposing something like like that, 6 months at least to, like, get that formulated in a way that that pitch would be even considered.

And then the benefits have to outweigh doing nothing by a, like, a very large margin. It can't just be like, woah. We have this ephemeral metric that we're gonna use to, like, gauge how beneficial this is. It's like, no. It's it's gotta be, like, what appeals to the business?

Are we gonna cause customers to use more of our product so that we make more money and increase the value of the company, or is this gonna give us such positive publicity that not doing it is a risk? Got it. And if you can't prove both of those conclusively, like, to a point where nobody can poke holes in it, like, an ironclad case that you're presenting, then holes will be poked into the discussion. And if you can't plug those holes fast enough, it'll go nowhere. Heard.

Okay. So we've talked about teammates, whether you're internal or external. We've talked about bosses, and we've talked about bosses' bosses. What do you do when you have a teammate that is ignorant about something that they have already built? So let's say the the time series thing, we go and we build a a great implementation.

It works, but it's just worse than a cheap PMD model. How do you go about persuading them that, a, maybe they should drop the project, b, this is costing money? Like, what would you say to them, and how would you try to influence behavior? I mean, this is purely theoretical because that would never happen on the team I'm on now. Nobody would even build something that they don't understand.

They would probably refuse to do it, and rightfully so. Where they just have a lot of questions like, what are you like, why are you allowing me to do this? Yeah. But in a hypothetical situation and I have worked in teams like this where somebody, like, takes on a project or they're doing it, like, on the weekends, and then they come and do a presentation as a big surprise. Like, oh, I worked on this for the past 6 weekends, and and here's this thing that I'm gonna do.

Sometimes, I'm like, my first reaction is very much a WTF moment. Like, what are you doing? Like, why did you spend your weekends working on this? I won't do that publicly. I won't humiliate the person, but I'll have some very pointed questions for them 1 on 1.

Very much say, who asked you to do this, or why did you think that this was something that the company needed to do, and why would you waste your weekend working on this? Like, why didn't you come up with a proposal, and then we could all evaluate it and get it as part of your, like, sprint plan? Like, what are you doing? Are you trying to, like, inmates run the asylum approach here? Like, I'm gonna I'm gonna do a mic drop moment, and everybody's gonna praise you, and they're gonna say, oh, yeah.

We need to pivot to this because that's not gonna happen. That's not how we do business. So I would I would dress them down and let them know how pissed off I was, about that approach because it's not useful, and it's not good for the rest of the team. Nobody else was involved in evaluating what they were building on their own. So you're you're trying to have a hero moment.

Like, oh, everybody, look at how cool and smart I am. I figured this out. Nobody cares. And and we have, you know, hackathons for stuff like this where it's time box to, like, hey. You got 24 hours or 48 hours to work on this thing.

That's the time to do that and then present your idea. So if somebody built something that I know is garbage and then was presenting it in such a way that they think it solves all of these amazing problems and it's the right thing to do, I would first look at myself to be like, how did I fail as a team lead? So this person didn't know that this is unacceptable. And then, secondly, I would look at them and say, why did you think this is the right way to do this? We have processes in place that have been proven out over decades in this this industry that we have co opted to run our our organization.

Why are you going against all of that wisdom? And if they continue to still just be like, well, it's a good idea. Like, okay. Now let's break it down. And before we go any further with this, we're gonna have the whole team test this thing out, and we're gonna poke holes in it.

And we'll see, like, is this actually great? Did you get lucky? But I also wanna see a design on this. I wanna see how the like, what your idea of how this works is, and let's discuss. So it's a recipe for them getting a mountain of work being dumped on them now that they did this, and they now have to defend their position.

Have you ever seen people successfully defend and then there's some product adoption? Or it's like, oh, actually, you were right? Oh, yeah. I mean, like, from hackathons, stuff like that happens all the time. Not from a hackathon where initially people are frustrated by the implementation or disagree with it, and then opinions are swayed?

Depends on how they do it. If they come up with a proposal and get permission to spend some amount of time during working hours and go through the the process of, here's my proposal. Here's my design. Here's my prototype. Please review it.

I'm gonna file some PRs in draft phase. Everybody poke holes in it. If you're including the entire team in that process or whoever else from other parts of the organization need to do that, of course, that happens all the time in our current group. Somebody has an idea, they run with it, but they follow the process that we know is gonna result in the best possible implementation for maintainability, extensibility, and it's set up to be successful. The the cavalier come in, try to do the mic drop moment of my solution is better than what the team has built.

Does it happen? Sure. Are you gonna piss off the whole team and make them feel like they're not good enough to work with you? Yeah. And that hasn't happened at Databricks for me.

Definitely not. But it has happened at prior companies that I've been at where I've been a a team lead. I've fired people for that, for this exact thing. And it's not because their implementation was bad. Sometimes you look through the code and you're like, yeah.

You're right. However, the whole team hates you now, and nobody wants to work with you anymore. You've basically become a pariah because you've excluded you basically by doing this, you you let the team know in no uncertain words that you think they're a bunch of idiots because they can't be involved in this process. So you are no longer welcome here. That's my solution to stuff like that.

You don't need a hero when you're building solutions. Other people need to maintain it. They need to be involved in that. I mean, like, we hire people for a reason on a team so that all of these other opposing views and and the sheer mental power of a group of people always outweighs one person's genius or their perceived genius. Right.

It's super toxic in my opinion for people that do stuff like that. Yeah. I'm trying to think. I don't know that I've I've seen a lot of really bad internal and publicly facing implementations. I've built a couple of them.

And I don't know that it has a detrimental effect from what I've seen, but I can see how in a non consulting based organization, or if you do specific things within the consulting space, like step on another team's toes, then it can lead to problems. But oftentimes, like, these one off things just don't get adoption. Like, I remember I built a one click TPC-DS benchmarking tool that would load test against the warehouse based on this industry accepted set of queries that is representative of what analysts actually write. So it was pretty robust and fundamentally sound. I had worked with a few other people that had built similar components, and I sort of stitched them together.

But it also came from a Databricks SQL or subject matter expert group. And so it wasn't just out of nowhere overnight. I had originally built a similar thing for a customer, and then I workshopped it and got general approval. And there's been another iteration on it that actually works, I think, well. I haven't used it yet, but, yeah, mine just no one used it.

It was it was interesting to see. So that's often what I think what would happen if you just build out of the blue on your, like, weekend or whatever it might be. But sometimes you do step on toes or sometimes you say the wrong things to the wrong teams, and that's when it can become dangerous. There's a huge difference between building something in isolation that is a tool. So I would equate something like that to somebody on the team building, like, a bash script that that helps to automate something that we do every day.

If somebody did that, the whole team if it's good, the whole team is like, sweet. This is awesome. Thanks. I would be too. I'd be like, this is great.

Like, thanks for doing this. It's a huge difference between that and, hey. I just filed this massive PR that redoes the entire functionality of a core part of our product in complete isolation, and it's not, like, compellingly well done, or there's no, like, no documentation associated with what the design was or the process to be followed. Somebody drops, like, a a 30,000 line PR, and it it's not tracked anywhere. Nobody's looked at it.

That's grounds for, like, what are you doing? Like, why would you waste this amount of time? No. We're not merging that. This has to like, break that thing up into 12 different PRs, and let's go through the process of what we do.

And if you're if you have so much time to do this, why aren't you working on the the other very pressing things that we need to work on? So, like, nobody in engineering at Databricks would do something like that. Like, that would never happen, because we don't hire people who do things like that. Right. Cool.

Final category of person. Let's say there's a junior person. They can be a direct report. They can just be someone working on your project, and they are ignorant about stuff. How do you go about teaching?

Same way we we kinda bring people in in the the scope of what we do now within the team, which is never underestimate the the amazing capability of failure to instruct. So everybody needs to feel that, I think. Like, you need to feel overwhelmed. You need to feel like you're underwater. You have no idea what you're doing, and you need to break stuff and then go and fix it and learn how the thing works and why you didn't do it right the first time.

It's a very important and almost cathartic process of getting better at something technical. The way that that can blow up in your face, as somebody who's giving the tasks to that person, is you give them something that is far beyond their capabilities. So there's a metered approach to it. As you want somebody to grow into a role when they're new to the organization or new to your team, you start giving them things that are adjacent to what their current understanding and capabilities are, but just hard enough that you know they're gonna struggle a bit. It'll force them to ask questions, and there's a benefit there.

They're gonna get to know their teammates on a technical level. They're gonna figure out, like, who knows a lot about this thing and who's giving me the best tips and advice. You're gonna get an an exposure for them of understanding what the review process is like. Like, who's going to be pointing out things in minute detail that are is gonna help you get better. But the task that you're giving them is not something that's so complex or has has a huge blast radius.

So you're not gonna be like, hey, you know, on week 1 of somebody new coming to the team, yeah, we got this project where we gotta create, like, 5 new REST APIs and the entire interface to the back end. It's gonna take about, you know, 3 person weeks, maybe 4 person weeks. You know, new person, this is yours. They now have to learn the entire tech stack.

They have to, like, reverse engineer how other ones are built now. They need a bunch of tribal knowledge about how the company manages all of this stuff, like, what our version strategy is, how do we do docs for this stuff. Like, there's so much stuff to learn in a project like that that there's no way they're gonna get it done on time. So you're setting them up for not just frustrating failure, but ultimate failure. Like, they're gonna they're gonna be so stressed out.

They're not gonna retain anything. They're just gonna try to tactically get through this as best they can, and the review process is probably gonna be traumatic for them because there's probably gonna be more comments than there are lines of code, from, like, experienced team members. Like and you're also gonna lower the you know, lower everybody's perception of that person because they're set up for such dramatic failure. People are gonna start wondering, like, who is this idiot that we hired? And as the person who gave that task, you just set them up for failure to basically look like a complete clown, and that's that's just immoral.

So the key is give them, like, a small enough project where they can they can knock out a couple of quick wins with a little bit of resistance, so they're slowly learning more and more. And over time, you start giving those bigger and bigger projects that are just outside their capability or their level of knowledge about how this stuff works, And then you're working with them to to be able to give them the answers they need after they hit that that insurmountable roadblock where they're like, okay. I don't understand what's going on here. And then you jump in. You're like, okay.

Here's what you need to do next, and they're like, oh, yeah. I I wasted, like, a half an hour trying to figure this out. Yeah. Okay. Now I get it.

That's how you build a team member's knowledge for for, like, a technical perspective. When you're talking about the greenfield, like, team member comes in who doesn't know anything about the, like, the principles of data science work or ML engineering. You have to gauge what they know, and, hopefully, you did that during the interview process to know where their baseline is, where they're coming from. So what do you need to what do you need to build up on their foundation level of understanding so that they can be successful to do the first couple of, like, smaller projects? So the ramp up time can be much longer in my experience for data scientists than it is for software engineering.

Heard. Lot of thoughts. One thing that I struggle with so I've I've been fortunate enough to lead a bunch of projects in the past, like, year, year and a half, or whatever it might be. And now I've, like, leveled up a little bit where I'm teaching people how to lead projects and, at least trying to. I don't know if it's going well, but, it's a really interesting paradigm.

And I think the hardest part for me is quickly assessing skills, And I'm gonna iterate on that. I like the idea of doing an interview, eve like, even if it's not for a professional interview, if it's some way to calibrate, because then you know how to start, like, guiding. And then doing the personality analysis of, like, how much out of their comfort zone do you want them to be, how much out of the comfort zone do they enjoy, being? What are the types of tasks that they like? What are the types of tasks they don't like?

What are the things that they need to get better at in terms of process, not just, like, output. Those are all, I think, a lot easier, if you have the baseline. If you don't have the baseline, you're sort of shooting in the dark. So, Ben, what's the most efficient way you gather that baseline? For software engineering, just like, hey.

Show me your last 10 PRs. K. Like, what about a new hire? You can read a lot from a PR history. Like, you can look at the first commit.

You can see what the state of the code is. You can look at the comments, see how they respond to them, and look at the final state of the code. And you can generally tell, like, how somebody's like, what their strengths are, like, what they really understand, and things that they might need to work on because that's the comments. Like, people that are saying, hey. This isn't like, let's not do this.

Let's do this instead. But a lot of that stuff like, everybody starts from a very similar baseline where, like, how I work now. Generally, we're not hiring people that have, like, a that are struggling to understand, like, basics of software engineering. Everybody knows that stuff really well and the concepts involved, and there's not a lot of that, like, ramp up like that. It's more what does it seem like these like, this person really enjoys?

And that's through 1 on 1 discussions. Like, ask them about, hey. How'd you feel about, like, your your last sprint or, like, this last quarter? What did you enjoy most? And they'll start telling you, like, oh, I really liked working on this project.

What they've worked on that they really enjoyed, and then, like, find those adjacent projects that, right, you know, would be something that they can really sink their effort into and feel like they enjoy the process of doing that. And that's all about retention because when you have good people, you wanna make them work on stuff that they wanna work on. And the team that I'm on right now, we have amazing people. So it's almost playing defensive with them with regards to projects, because everybody on the team can do anything that we work on and do it exceptionally well.

It's more like, what do they really what's gonna make, like, them smile at the end of the week when they get this PR merged and and this project done? And that that's the goal is, like, match people up with that. Like, what do they what are they most passionate about so so that they feel like, yeah, this job's awesome. Because we definitely do not want them going anywhere because they are Yeah. Heard.

Yeah. No. I completely agree. And it's fun to find things that people like and help them discover what they do and don't like because there's a lot of there's a lot of aspects to a job that you can you can pivot towards and you can specialize in. You don't always have to be the person for the thing.

There's there's different ways to approach the same problem. Cool. I think we've covered it. Any other final closing thoughts? I mean, I had a question for you about, like, the data science aspect because you do this more way more than I do now.

Just like when you go into those teams at, like, a customer, and it nobody there has ever built anything using, like, machine learning. But now they have a project like, hey. We need to do we need to use ML to solve this. What's your process for for teaching them or, like, just starting that? The key thing is starting with where they are and then iterating.

I the way my brain works is I I approach a lot of life like Bayesian optimization or, like, hyperparameter search. I remember I spent a lot of time on it a couple years ago, and the analogies that you can get from how those algorithms work, they're really applicable to life. And so it's exactly the same thing. You're just searching a space for a new location. The question is, how do you minimize the number of steps to get to that location?

To do that, you need to know where you are, where that information is, and then how you most effectively take a step. So if it's stochastic gradient descent, you leverage the derivative at your location to take a step down in loss. So with that, it's a really simple principle of, first, evaluate where they're at. The way you do that is go in and look at the code. Second is try to do some high level personality analysis.
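For the gradient-descent analogy mentioned above, a one-variable toy version looks like this: evaluate where you are, use the local slope to pick the next step, and repeat.

```python
# Toy one-variable gradient descent: step downhill using the local derivative.
def loss(x: float) -> float:
    return (x - 3.0) ** 2          # minimum at x = 3

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)         # derivative of the loss

x, lr = 0.0, 0.1                   # starting point and step size
for _ in range(25):
    x -= lr * grad(x)              # move in the direction that lowers the loss
print(round(x, 3))                 # approaches 3.0
```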

This is super hard. Typically, cameras are off or not typically, but sometimes cameras are off. You maybe don't have very frequent meetings. But if there's energy in the room, it's it's a lot easier and a lot more fun. And then from there, you say, alright.

We're gonna solve this business problem. I will review your stuff, and if you get blocked after, let's say, 3 attempts, I'll show you a prototype. But beyond that, just figure it out. And then finally, it's really helpful to create a high level design for them so they can fill in the pieces, so that they don't just go off in a horrible direction.

So I don't know. It's it's actually not that hard. I think it's pretty codified in my head now. When you get from that position of, hey. We can build a prototype that we have a model that does something.

How do you teach a team that that's the first step in 10,000 steps to get something that is, like, production grade for high use at a like, at the companies that they esteem so much. It's like, hey. You're deploying a a model that does, like, optimization of content filtering on at meta. That team, probably 20 people on it, and they've been iterating and improving on that model for over a decade. How do you steer them down the right path to be like, this is how, like, their principles are when working on stuff, and these are the things that they consider.

That's harder. So how do you sort of teach them to, a, think from first principles and, b, know what the right ones are? Mhmm. Alright. So the right way to do this is, create sandboxes where they make the mistake and learn it intuitively as we were alluding to earlier.

That takes time. So I often just say it, and I often demonstrate first principle thinking. Like, that's definitely one of my strengths is I I can't really make sense of anything unless I understand the baselines and, like, the first principles as I think we've alluded to. So when I'm in a call, I think I've demonstrated that pretty evidently, and, hopefully, that's enough for them to see how to do that. But the the correct answer is, like, spend a year with the team.

Like, create little, like, playgrounds where they can say, alright, this is how I learn why modularity in code is important. I need to refactor this piece or, essentially, like, swap out this model with a different model. You want that to be modular. You don't wanna have to rewrite your entire piece of code.

And what you should do is, if they don't have it modularly built, they should sort of feel the pain of bad code, essentially. And that way, they develop intuition around it. But, again, that takes time. So I try to just say it and then demonstrate first principle thinking. What are your thoughts?
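A small sketch of that modularity point, using scikit-learn estimators purely as examples: if the pipeline is written against a narrow fit/predict interface, swapping the model is a one-line change instead of a rewrite.

```python
# Illustrative only: the pipeline depends on a fit/predict interface, not a model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

def run_pipeline(model, X_train, y_train, X_test):
    """Everything around the model stays the same; only `model` gets swapped."""
    model.fit(X_train, y_train)
    return model.predict(X_test)

# Swapping the model is one line, not a rewrite of the surrounding code.
preds_a = run_pipeline(LinearRegression(), X[:150], y[:150], X[150:])
preds_b = run_pipeline(RandomForestRegressor(n_estimators=100), X[:150], y[:150], X[150:])
```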

No. No. Like, the same. Stuff that I used to do to teach fundamentals of data science: I used to really like building hyperbolic examples. And I'd always use, like, very simple models that would, like, train, and I could run validation in seconds.

So create some dummy dataset and make it so that it's mutable. And then, you know, set up a particular, you know, optimization library that would demonstrate the the happy path of, like, hey. We have this dataset, and I have ordinal values and continuous variables, and I have some categorical in here, and it's clean. Like, it's been engineered for features. So that, like, this is good.

So it feels like something they'd see in a scikit-learn demo: oh, here's a scikit-learn dataset, and the model does this, and I can change this parameter and it increases the accuracy, and it works. That's a pipe dream in the real world. You never get datasets like that. So I would emulate that scikit-learn dataset with their data.

Maybe spend an entire day just cleaning it and going through that painful process of getting it into a good state, and then train the model on the clean data. Then take the raw production data, stuff that's come in over the last couple of hours with no feature engineering and no cleanup: just garbage, missing data, malformed data, whatever. Then iterate over each row of that data, show them the result, and see the look on their faces.
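
(A rough sketch of that demo, assuming a scikit-learn model trained on cleaned data and then fed raw rows; the column names and raw-row contents are made up for illustration.)

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Cleaned training data: imputed, encoded, feature-engineered ahead of time.
clean = pd.DataFrame({
    "units_last_week": [120.0, 95.0, 140.0, 110.0],
    "promo_flag": [1, 0, 1, 0],
    "units_next_week": [130.0, 90.0, 150.0, 105.0],
})
model = GradientBoostingRegressor().fit(
    clean[["units_last_week", "promo_flag"]], clean["units_next_week"]
)

# Raw production rows: missing values, malformed strings, wrong types.
raw_rows = [
    {"units_last_week": 105.0, "promo_flag": 1},
    {"units_last_week": None, "promo_flag": "Y"},   # missing + unencoded
    {"units_last_week": "n/a", "promo_flag": 0},    # malformed string
]

for row in raw_rows:
    try:
        print(row, "->", model.predict(pd.DataFrame([row]))[0])
    except Exception as exc:  # the "look on their faces" moment
        print(row, "-> FAILED:", type(exc).__name__)
```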

I'm like, this is why feature engineering is important, why you need this pipeline to make sure your model isn't throwing exceptions on every other row. Like, hey, in the raw dataset you showed me when I first showed up, you had these 18 columns that are all covariant with one another. There's a signal in this other column that's really important, but it's getting drowned out because those 18 columns all track together and the magnitude of their values dwarfs the changes in this one. So this is why these parameters exist in this library, and here's what we have to do to clean this up.

We pick the one that most closely correlates with what we're trying to predict, and then this other data we're also incorporating is actually seen by the algorithm. It's not drowned out in noise.
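
(A minimal sketch of detecting that drowned-out signal, assuming a pandas DataFrame; the threshold, function name, and column names are illustrative assumptions, not a real library API.)

```python
import pandas as pd

def prune_covariant_features(df, target_col, corr_threshold=0.9):
    """Among groups of highly inter-correlated features, keep only the one
    most correlated with the target, so weaker signals aren't drowned out."""
    features = df.drop(columns=[target_col])
    target_corr = features.corrwith(df[target_col]).abs()
    feature_corr = features.corr().abs()

    keep = list(features.columns)
    for col in features.columns:
        if col not in keep:
            continue
        # Columns that track closely with `col` (excluding itself).
        group = [c for c in keep
                 if c != col and feature_corr.loc[col, c] > corr_threshold]
        if group:
            group.append(col)
            best = max(group, key=lambda c: target_corr[c])
            keep = [c for c in keep if c not in group or c == best]
    return keep

# Usage (hypothetical): keep_cols = prune_covariant_features(raw_df, "units_sold")
```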

It's so funny that you like doing this. I get so bored. Maybe part of it is that I feel young, so when I enter these rooms, I feel like I'm not qualified to be teaching; that's not where I'm at in my career. I'm still at the I-need-to-learn stage of my career. Uh-huh. So maybe it'll change at some point.

But I have reached the point where I don't care about the implementation. I just want it to work, and I wanna solve real problems. It's so funny that creating a notebook that walks them through the conclusion they need to reach sounds like pulling teeth to me. Good god.

Yeah. I mean, if you've done it a thousand times... I can't even tell you how many decision tree models I've built over the years. So it doesn't take me long to do that. And it's not fun writing that code. Mhmm.

It's very boring. It's very rote. But it's fun playing with it a little bit. Like, hey, how bad can I make this? Yeah, I can get it to throw exceptions like nobody's business.

Anybody can do that. But can I get it predicting something that I know they don't want it to predict? So I look at the data, and I understand the business problem they're trying to solve. Negative sales or something?

Or can I get a feedback loop going in a time-series forecaster that's gonna cause an actual business problem for them, something they would panic about? Take something completely irrelevant in their inventory: can I get the model to say demand on it is going up by tens of thousands of percent week over week? And not just through some weird exponential flaw in the model itself.

That's easily filtered out. Can I manipulate the data and simulate what could go wrong in their original implementation just by putting the right synthetic data in there? Here's what can happen if you're not controlling this, or you're not using the right algorithm, or you just use the defaults and didn't bother to learn all the aspects of this library. Here's why this thing is important. Let me show you, and I'll show them the data.
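
(A hedged sketch of that kind of failure demo: synthetic sales with a short compounding run injected at the end, fed to a naive trend extrapolator. The data and forecaster here are stand-ins for illustration, not the library or dataset from any real engagement.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two years of weekly sales for a slow-moving item: flat demand plus noise.
history = rng.poisson(lam=40, size=104).astype(float)

# Inject a short compounding run at the end, the kind a feedback loop
# (e.g., forecasts feeding back into recorded demand) could produce.
history[-4:] = history[-5] * 1.5 ** np.arange(1, 5)

def naive_growth_forecast(series, horizon=6):
    # Extrapolate the most recent week-over-week growth rate -- roughly what
    # an untuned trend model does when it trusts the tail of the data.
    growth = series[-1] / series[-2]
    return series[-1] * growth ** np.arange(1, horizon + 1)

forecast = naive_growth_forecast(history)
print("recent weeks:", history[-4:].round())
print("6-week forecast:", forecast.round())  # explodes to absurd volumes
print("implied weekly growth: {:.0%}".format(history[-1] / history[-2] - 1))
```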

And at first, they're looking at it, and they're like, yeah, seems legit. Okay, let's see what the model does with this.

And you show them, and they're like, that's saying we're gonna sell 1,500,000 popsicles six weeks from now. If this model went out to our purchasing department and they weren't looking at it, yeah, that could cause a big problem for us. Yeah. Who's gonna get blamed? The purchasing department or the data science department?

Yeah. So you only get one mess-up like that in production. If you have two, they're never gonna use it again. Not to beat this to death, but let's take the consulting example.

If it's your own team, it's a lot easier. Where does the empathy come from? Why are you invested in them learning these things? I feel like it's part of your nature, but what's the angle? Can you explain it?

I don't know. It pains me to see people doing things in the dark. And with my own personality, I really like the epiphany moments that I get: oh, I finally understand this thing, and it's expanding my understanding of this concept.

And it's, I don't know, a huge dopamine hit for me. I love that revelation when it happens. And I haven't encountered too many people who are actually paying attention and actually care about what they're doing in their career who don't like that. Most people like that element of surprise and understanding, that epiphany moment, because it excites them. Right?

We're biologically programmed to respond to that brain chemical. It's something that drives us as a species. Right: I've discovered something. Other people have discovered it before, but for me, it's my first time discovering it, and that makes most people feel really invested in what they're doing because now it's personal.

So being able to impart that to other people and ignite it in them... when you go back and check with those people maybe one, two, three years later, they're fundamentally changed for the better as practitioners. They're just better at what they do. They're thinking differently, thinking more efficiently, building better things.

And the more people who are building better things in more responsible and effective ways, the better off we are as a species, in my opinion. So why not pass that on? That's an interesting angle. Yeah, I think I agree. I did a four-month check-in with a customer I used to work with, and I basically trashed a lot of their practices, in a constructive way.

But they're also doing great things, to be clear. And now they've adopted a lot of the suggestions, their development cycle is just so much faster, and they're so excited. Yeah. They're just enjoying their jobs a lot more. So. Yeah.

That's literally the only thing I miss about the field, about what I was doing: that experience. Literally the only thing. Being able to go back and talk to a group.

The thing that made me smile the most only happened a couple of times. Right? I'd go back and talk to a customer six months or a year after my last engagement with them, and they'd show me something on the screen, and it was beyond me to understand what they'd done. And then I'd ask.

I start asking questions like, oh, how does this work? Or what did you guys do here? And they explain it like they're explaining it to a five-year-old, which is what I taught them: hey, if you're interacting with somebody who just needs the TL;DR, explain the core components and let their brain fill in the gaps.

Yeah. And they use that exact technique. I'm like, yeah. Alright. This is awesome.

Great work. Now let me see the tests. And they just drive right to it and say, here's all our tests. You look at it. Yeah.

This is good. You built something amazing here. You should be proud of it. And it's not for me to be like, oh, yeah,

pat on the back, I totally righted the sinking ship. It's more of just, this is so cool, seeing people grow. Mhmm.

I don't think of it as, oh, I helped do this. It's more like they did this themselves. They just opened themselves up to listen to somebody who suffered through the inefficient way of doing it. So all I was doing was telling them things I'd done stupidly in the past or learned the hard way, and they were able to short-circuit all that and get to a better place. I would always just feel happy for them.

And they all just seem so engaged, the whole team. Even the people that originally were checked out, Mhmm, or were very frustrated. They have this new mode of operation. Everybody's just excited about the next thing and also about improving what already exists.

Right? People that are excited to maintain what they built. It's like, this is pretty cool. Yeah. I feel that a lot from my internal team.

With customers, it's sometimes harder because you don't see them again often. Yeah. But sometimes you do. Mhmm. Okay.

This has been a concise and efficient episode, per usual. I will summarize. So, basically, we went through five scenarios of how to manage ignorant people, whether it's a team, a junior person, a boss, a super-boss, or someone adjacent to you who has built an abomination. Some things that stood out to me, at least: business leadership should propose business problems to their senior technical leadership.

They should not define the implementation. That's where the handoff happens from the executive in the business world to the executive or tech lead in the tech world. When doing greenfield things, it's really good to start with a lit review, see whether other people are solving the same problem with a given piece of tech, and audit what the trend in the market is. Clarify the problem, period. Ensure you're solving the right thing.

When convincing a boss, provide estimates of the work required. They often do an ROI calculation: will the return be worth x hours of work? So that's a scale you can leverage, where you say the amount of work is really high or really low. Obviously, be honest, but you can take different angles. And always leverage research spikes to create POCs that make those estimates more accurate.
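
(A back-of-the-envelope version of that ROI calculation, with entirely made-up numbers for illustration.)

```python
# Hypothetical numbers, just to show the shape of the estimate.
hours_of_work = 320          # engineering effort estimated from the research spike
loaded_hourly_cost = 150     # fully loaded cost per engineering hour, USD
annual_benefit = 90_000      # estimated yearly savings or revenue lift, USD

cost = hours_of_work * loaded_hourly_cost
roi = (annual_benefit - cost) / cost
payback_months = 12 * cost / annual_benefit

print(f"cost=${cost:,}, first-year ROI={roi:.0%}, payback={payback_months:.1f} months")
# cost=$48,000, first-year ROI=88%, payback=6.4 months
```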

And then if you wanna influence an executive, or influence via executives, remember that complaining makes someone look bad. So whether it's you complaining or you get someone else to complain, it'll make that person look bad. Always try to find a happy medium where everybody wins. So, Ben, anything else? No.

Good summary as always. It's so fun. Yeah. It was. Alright.

Until next time, it's been Michael and my cohost, Ben Wilson. Have a good day, everyone. We'll catch you next time.