[This episode is sponsored by Hired.com. Every week on Hired they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on Ruby developers, providing them with salary and equity upfront. The average Ruby developer gets 5 to 15 introductory offers and an average salary offer of $130,000 a year. Users can either accept an offer and go right into interviewing with a company or deny them without any continuing obligations. It's totally free for users. And when you're hired, they give you a $2,000 signing bonus as a thank you for using them. But if you use the Ruby Rogues link, you'll get $4,000 instead. Finally, if you're not looking for a job but know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept the job. Go sign up at Hired.com/RubyRogues.]
[Snap is a hosted CI and continuous delivery service that is simple and intuitive. Snap's deployment pipelines deliver fast feedback and can push healthy builds to multiple environments automatically or on demand. Snap integrates deeply with GitHub and has great support for different languages, data stores, and testing frameworks. Snap deploys your application to cloud services like Heroku, DigitalOcean, AWS, and many more. Try Snap for free. Sign up at SnapCI.com/RubyRogues.]
[This episode is sponsored by DigitalOcean. DigitalOcean is the provider I use to host all of my creations. All the shows are hosted there along with any other projects I come up with. Their user interface is simple and easy to use. Their support is excellent. And their VPSes are backed by solid-state drives and are fast and responsive. Check them out at DigitalOcean.com. If you use the code RubyRogues, you'll get a $10 credit.]
JESSICA:
Welcome to episode 249 of the Ruby Rogues. Today on our panel we have Avdi Grimm.
AVDI:
Hello from Tennessee.
JESSICA:
Coraline Ada Ehmke.
CORALINE:
I'm just a dwarf standing on the shoulders of a giant who's standing on the shoulders of another dwarf. It's basically dwarfs and giants all the way down.
JESSICA:
And I am Jessica Kerr, better known as Jessitron. Today we have a special guest and I am super excited to talk to Dan Luu. He has a post about the normalization of deviance that several of us wanted to pick, but Coraline got to it first on an earlier episode. And I am super excited about deviance. But this whole normalization thing is kind of ruining it for me. Dan, would you like to introduce yourself?
DAN:
Hi. I'm Dan Luu. So, I used to work for Centaur, a small startup. Then I worked for Google and then I worked for Microsoft. And I think these are all good companies to work for. Two of them are considered really, really good companies to work for. I think the third is considered still above average. But it's sort of funny how they still have all these things that are really screwed up. And it's not them in particular. I talk to my friends at companies that are considered really great places to work and they still have all these things that are really screwed up. And so, I wrote this blog post and it's just sort of asking why is it that places, even places that are really good, are still also really screwed up?
CORALINE:
So Dan, do you do hardware now and you used to do software?
DAN:
Almost the opposite, sorry. I used to do purely hardware and now I'm moving to doing more and more software.
CORALINE:
Oh, cool. Okay. What kind of hardware stuff did you use to do?
DAN:
So, I used to design CPUs. Well, design is… if you talk to a CPU designer they might not call it that. I used to do… it was a startup, so sort of everything you would do inside a CPU including design, [verification], writing microcode, a bunch of other stuff. And then I moved onto some other projects. So at Google and at Microsoft there are things I can't talk about because they're super-secret infrastructure. But they're both hardware accelerator things. The Google thing is something or other. At Microsoft, we're trying to figure out how to make virtualized networks faster. And this is part of this trend that's been happening over the past, I don't know, five, six years: CPUs don't really get faster anymore but people still want things to get faster. So, you try to move more things to the hardware to continue making things faster.
JESSICA:
That's interesting. Are we adding layers?
DAN:
Interesting. So, I think a lot of it is sort of punching through layers, if that makes sense. Like, just for example, in your TCP stack there are all these layers between the user [application] and actually sending a packet out on the wire. But if you look at RDMA, which stands for remote direct memory access, you basically punch a hole into user space memory. And in that user space memory you can just go… well, you actually have a mapping to somewhere else, but you basically just write something and that can go out on the wire directly to somewhere else. And so, [in some sense] we're adding layers. But the abstraction will blow a hole through a bunch of different layers, because it's extremely slow to actually do a sys call and do something. And so, a lot of these things will just blow a [inaudible] in the system and let you touch the hardware pretty much directly.
CORALINE:
That sounds way above my pay grade.
JESSICA:
[Laughs] Yeah, that's like really fascinating. And no, you're totally not talking too much. You're supposed to talk.
[Laughter]
JESSICA:
Yeah, we're just, we're kind of like, whoa. That's really fascinating because when we write software we're usually like adding layers of abstraction. And then it's pretty fabulous that people like you can come back and then punch through the abstraction right where it counts to make it fast.
DAN:
[Laughs] [Inaudible]
CORALINE:
Like a super hero breaking through a brick wall.
JESSICA:
Yeah, and meanwhile the rest of us can continue to write to the abstraction so that we don't mess things up.
DAN:
Yeah, but one of the interesting things is like a thing that [inaudible] especially, so Microsoft, Google, Amazon, is that behind the scenes people want to make the abstraction the same. No one really wants to make people have to rewrite everything. So, people still want to make it look like you're writing to TCP or UDP or whatever, right? And behind the scenes they can make it look like anything they want. But it sort of ends up being… I think it's… I don't actually do the software layer but I think it's actually quite painful if you're actually doing that. Because it means you have [inaudible] in your face. It's like Windows or whatever, right? And you have to make everything behave exactly the same while underneath it's completely different.
CORALINE:
It's what a lot of writing software comes down to, right?
DAN:
[Chuckles] Yeah, that's fair.
JESSICA:
Yeah, make things look easy when they aren't.
CORALINE:
Easy is really hard.
JESSICA:
For our listeners I wanted to mention that the blog post that Dan wrote that I keep talking about is at DanLuu.com/wat. That's D-A-N-L-U-U dot com slash W-A-T, in case you want to pause the episode and read it.
CORALINE:
Yeah, which is not a bad idea. So, should we go onto the article?
AVDI:
Let's do it.
CORALINE:
Okay, cool. Dan, you want to describe what made you want to write this?
DAN:
Yeah, so part of it was just, I don't know. I feel like I shouldn't talk too much about my [team] at Microsoft. But part of it was just some things on my team that were, I don't know, in my opinion non-optimal. And just talking to my friends. And I just hear these stories from friends of mine. Again I guess I shouldn't say anything that could be [inaudible]. But they'll tell me about [something like,] “Oh, we don't have version control here?”
JESSICA:
[Gasps]
DAN:
Or, “We don't have tests?” And these are companies that aren't… it's not just like [inaudible] where they're just like [inaudible]. I'm talking about companies that actually [chuckles] that release a platform that other companies rely on or they produce the thing that users actually use. And this sounds pretty shocking, right? But then I talk to my friends and I say the same thing. “Hey, we don't have X,” and people are like, “Are you kidding? Are you trolling us?” or “Is this for real?” And so, I don't think my company's actually any better about this. It's just because I work here I sort of get used to the things that are strange about this place, right?
CORALINE:
You made an observation that this is industry-wide, that everyone has something like that they're doing that just makes you say, "What?"
DAN:
Yeah, I think so. It's not to say that all places are the same. Some places are better than others. But I've never heard of a place that doesn't have at least some things that are really, really strange to outsiders.
JESSICA:
Yeah, you mentioned in the post that it makes sense to do things like skip tests. I'm not going to say it ever makes sense to skip version control, uh-uh. [Laughs] But some things, like tests, you skip when you're trying to determine whether something is useful, when you're in that MVP phase of "Is this even going to be worth anything?" And a lot of new companies, as they increase in value and suddenly have something to lose by messing up really badly, don't lose that culture of, "Yeah, we just need to get stuff to production. Tests aren't the norm here." At Stripe right now we're totally in that transition of, "We have proven that this product is super useful. We're expanding it and we're going back and improving reliability." But it is really hard to change what is valued in people.
DAN:
You know, I talk to my friends in startups and I was actually at a startup for a while. There's this thing that I've seen. And I don't think anyone wants this but it's just a thing that naturally happens. So, you have this product group, or whoever it is that's basically responsible for making the product more awesome, and they're great. The product has grown. The company's worth millions of dollars. Curves are going up and to the right. People are super happy. But it means that if they ask for something, they get it. And infrastructure is this thing that people understand is important. If you ask them, "Are tests important? Is infrastructure important?" they'll say yes. But when it comes down to it, if product wants a thing and infrastructure says, "We can't do that, that's not stable," the thing's going to happen anyway.
And it's very hard to transition from this point where product is all-powerful and can ask for whatever they want to where infrastructure can actually push back and say, “No, no. We can't do this because in the long run it'll slow us down.” Because product has basically provided all the value to the company to that point. And I see this in a lot of different companies. I don't actually know how to make this change happen. I've seen companies that have been past the change. But I've never actually experienced that change itself. I'd be super interested in knowing what that's like.
JESSICA:
Dan, you mentioned before the show that one way to get from these surprising deviant practices toward a safer place to be is through one employee who is acting against their own interests?
DAN:
There's this great blog post by Yossi Kreinin where he describes, in his opinion, how this happens. So, he's a manager at, I believe, Mobileye, an Israeli company that's doing pretty well. It's his opinion that in general change comes from the top, right? Because if managers want something, even if they don't say they want it, their employees are smart enough to know what's going on. They can see who gets promoted and whose acts are rewarded, and they will do things the manager wants even if it's not what the manager says they want.
The alternative is a completely unreasonable employee, who can just say, "Hey, I really, really strongly believe this. Let me just do this thing." And this is totally not in their best interest, because it's not what their manager wants. It's not what the management chain wants. And so, most of the time when people do this, it doesn't work out very well for them. But sometimes if you're unreasonable for long enough this actually works out. You can convince people around you that this is actually the right thing to do. This actually matches my experience, at least from what I've seen. When people do this it usually doesn't work out. Every once in a while it actually does work out.
I'm hoping someone will tell me I'm totally wrong. This blog post is wrong. And this is actually a great thing to do because there are things I believe in. And sometimes I do them, sometimes I don't. But I don't want to… I don't like the idea even though again it's [inaudible] my experience, I don't like the idea that you're sort of damaging your career by doing what you think to be the right thing.
CORALINE:
I think what it might come down to is that some people have the privilege to deviate from a norm. And we would prefer that when that deviation occurs, it's an attempt to make things better. But I
think you have to be at a particular point in your career. You have to have a certain amount of social or political capital in your company to be able to start being ornery and doing the right thing that is different from what everyone else is doing, and [inaudible] that change from the bottom.
AVDI:
Quick clarification. Can you just give one example of what you mean by a manager who says they want one thing but actually wants something else?
JESSICA:
I've got an example. One example would be: we value software that doesn't fail. We value uptime and reliability. But I would like to take this time to thank all the people who worked so hard to deal with this fire that happened yesterday and stayed late to put out this calamity. And when the emphasis is on, "Oh, thank you so much for fighting the fire," and I don't hear anything about, "So-and-so wrote a test that caught this bug before it hit production, before our customers ever saw it," then the managers are saying they value reliability. They're valuing reliability in the company but they're valuing firefighting in people.
AVDI:
So, when the medals get handed out…
CORALINE:
Exactly. Another example would be, “We favor hiring diversity so everyone reach out through your networks and bring in as many people as you can to hire,” and your network consists of all [inaudible] white guys. The manager can have a strong desire for diverse hiring but not do anything that actually supports diverse hiring, and maybe even send out press releases or set it as a company goal and not actually accomplish anything, which of course never happens.
JESSICA:
Yes, because if you reward individual people for submitting resumes, they're just going to submit the ones that come to the top of their head, which is not necessarily the most diverse. One thing we've done at Stripe is ask people to specifically look through their network, “Here's a Facebook Graph query. Run this and find people that you know that maybe you haven't thought of to submit but would improve our diversity.”
AVDI:
When I read this post, and it's a great post I agree, about people saying that they value one thing but showing that they value something else, it made a weird connection for me with something that I'd read recently. Jessica, I think it was something that you recommended. It was one of Mark Manson's blog posts, 'The Most Important Question of Your Life' I think it's called. It might have been somebody else that recommended it. But I don't know, it's a weird connection but it's one of these personal improvement blog posts. But it talks about how there's what people think they want and then there's what they show that they want.
And he uses the example of spending his whole young adulthood thinking that he definitely wanted to be a rock star. But he never actually wanted to spend late nights driving a rickety old van to some venue for pocket change or spend hours and hours and hours practicing. He's talking about how you have to actually want… you have to actually want that part. You have to want the practice part. For fitness, you have to want the endless burning sensation in your muscles as well as the outcome. And it's a weird connection but it just seemed like a personal version of this organizational malady where there's what you say you want but then there's what you show you want.
JESSICA:
Yeah. Then there's rewarding the activity that leads to what you say you want. Dan, did you have an example?
DAN:
Yeah. So, I was actually just thinking about this. And I think it's pretty… something I've noticed [inaudible]. I guess I'll try not to name companies and specific things, but there was one company I worked for and they'll say all kinds of things are important to them. X is important, Y is important. But if you look at who gets promoted, look at the technical fellows, of which there are maybe 10 or 11, more than half of them are infrastructure people. And if you look generally up the promotion track, it's mostly infrastructure people. It's very heavily disproportionate [inaudible] infrastructure people. And so, they'll say they want, like, a great user experience and all sorts of stuff. But what they really want to do is build really cool infrastructure. Which is fine, like I like building cool infrastructure. But it means that some areas are just sort of neglected.
I guess it's the opposite of the product focus thing that we were talking about earlier. And you can see the reverse in a lot of other companies where they say they really care, like you said, about uptime or whatever, but you look at who's actually getting promoted, who gets paid well, and it's not the infrastructure people. So, then it's pretty clear they don't actually care; it's just something they pay lip service to.
CORALINE:
Do you think that the people who are setting goals like that are aware of the contradiction? Do they think that maybe their actions don't have consequences or they think that just saying it is enough?
What do you think is going on in their heads?
DAN:
Yeah, I don't know. I think it depends a lot. This is something that, I don't know, I feel like it's hard to generalize too much about. Because at some companies, so at Google for example, they try to normalize promotions. So, when you go for promotion, you go to this committee. And the committee doesn't know your work. They might not even know the area that you work in. And they look at this file, you get peer reviewed, and they decide whether or not you should get promoted. So for them, they have a standard for what it means to get promoted. For Microsoft, my understanding is that a manager can just promote you if they want to. And so, I guess it's hard to say that it's a general class of things across companies. So at Microsoft, if a manager says they want a thing and then they don't promote people who do it, that's a pretty clear local thing. They could have promoted the person and they didn't.
Obviously as you get higher this eventually isn't true. Your manager can't promote you to CEO or even to technical fellow or whatever. But in general, for the levels most people are at, they can just reach in and give you a promotion. But at a place like Google where your promotion goes through a
committee, or there's some general standard of what you should do, it's possible the person actually believes it and really wants it, but they can't reward it because the company culture, for whatever reason, just doesn't believe this thing is important, or the committee doesn't believe it's important, or whatever.
JESSICA:
So, certain important pieces and precautions can be neglected. I have an example of the irrational employee who gets things changed. And to Coraline's point, he has a lot of social capital. So, my friend Doug has worked at this insurance company for 10 years. And he's a Linux admin. He was a developer before that. And now he's finally reached the point where he's totally overwhelmed with server requests and can't possibly keep up with them. And so, he's implementing DevOps. He's pushing this through. And the company's helping in little ways but really, it's him saying, "No, I am not going to stay up 'til two in the morning doing your server requests. But I will stay up until two in the morning automating the server requests."
And he has that social capital and frankly, he's going directly against what his manager tells him to do, because his manager just wants him to sit there and close tickets. But fortunately for Doug, he has political connections above his manager, so he just goes over his manager's head and does this stuff. And it's completely irrational. But what's the worst they can do? Fire him? He will get another job because he's doing DevOps.
CORALINE:
I think that brings up an interesting point, Jessica. You made me think of it when you talked about the manager wanting him to close tickets. I think sometimes we assign metrics to things that we want to change. And they may in fact be either wrong [inaudible] measuring what we think we're measuring. Like that manager might say, "Oh, tickets are a great way to measure that we're actually getting things done," when he's not measuring the fact that it's an inefficient way to get things done. And that inefficiency is invisible, which is why your friend is doing things a different way. So, I wonder if there are other examples where the fault is in our measurement as much as in a mismatch between our actions and our intentions.
JESSICA:
At least [inaudible]
DAN:
This is something I've [inaudible] seen a lot. Not directly because I usually work in low-level areas where the metrics are more obviously, are sort of good or bad. But I've definitely seen this in other product groups where their goal is to get more daily active users or the goal is to get more conversions. So, what do they do? They pop up this huge window in your face. They [inaudible] you stuff, all this stuff that you really don't want as a user. In the long run, it's terrible, right? But it sort of works in the short run. This graph goes up. It's like, “Look at this. We have more users. This is great.” People get promoted and then afterwards maybe it doesn't work out well. But people got promoted. And that's sort of in the short run what people are sort of gunning for, right?
JESSICA:
Oh, yeah. There's a thing here, the danger of success. Sometimes the worst thing you can do is succeed and learn from that success, because there's a cognitive bias which really makes a lot of sense of, “This worked in the past. Therefore we will keep doing this.” But often the environment has changed around you. You have more competition now. Your software is bigger. And just thinking you can hold the whole thing in your head doesn't work anymore. But it worked for you before so you can get stuck in that rut.
CORALINE:
You'd think that as software developers we'd be better at thinking about this sort of thing because we like to pride ourselves on doing things like testing and validating our abstractions. But it seems like we're just as guilty of cognitive bias and cognitive shortcuts and pattern matching as everyone else.
DAN:
It's so funny. I feel like software developers, we sort of as a group have these things that we feel like we're good at. But when I talk to my friends who are in totally different industries, we have all of their exact same problems.
CORALINE:
We're not as smart as we think we are. So Dan, one of the hilarious examples in your article of a surprising sort of "What?" kind of thing had to do with Google. You said that they really focus on ops and security practices, but you mentioned the letter Z and how the letter Z throws a monkey wrench into that whole perception. Could you tell that story?
DAN:
So, I've heard this story from a couple of different people. And I'm not 100% sure it's true actually, but it sounds plausible. And the people who told me this story are at least pretty credible. But yes, I was looking at their codebase. So, I used to work at Google and you see all these things with Z at the end. There's like ['stringz'] and there's like 'metricsz'. And if you want to add like a performance counter or something, you put a Z at the end of it. And so, I asked, "Why is there a Z?" and most people had no idea. But I asked people who've been around for a long time. They were like, "Oh yeah, so that was from back when we wanted to expose counters".
CORALINE:
Wanted to expose what?
DAN:
Sorry, wanted to expose counters. So, like if you want to see, for a network thing, how many packets have been sent, or for a website how many hits have gone through it, how many cache hits, how many [inaudible] on some cache or whatever, you can go to the name of the service, 'slash something Z'. And I'm told the reason that was done is that it was for security. Because if you just had a thing that was like, I don't know, google.com/counter, that would be totally insecure. But if you put a Z behind it, no one gets to that, so that would be secure. And if this is true, this must have been a long time ago, like 1999 or 2000 or something like that. Because Google is really good about security now.
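[A rough Ruby sketch of the kind of counter page Dan is describing. The '/statusz' path, port, and counter names here are made up for illustration rather than taken from Google's actual implementation, and it assumes the webrick gem is available; as the story suggests, a "z" suffix hides nothing from anyone who can reach the port.]

    require 'webrick'

    # Illustrative in-process counters; a real service would increment
    # these wherever requests or cache hits are handled.
    COUNTERS = Hash.new(0)

    server = WEBrick::HTTPServer.new(Port: 8080)

    # Hypothetical "z-page": dump the raw counters as plain text.
    server.mount_proc '/statusz' do |_req, res|
      res.content_type = 'text/plain'
      res.body = COUNTERS.map { |name, count| "#{name}: #{count}" }.join("\n")
    end

    # The actual service endpoint, bumping a counter per request.
    server.mount_proc '/' do |_req, res|
      COUNTERS['requests'] += 1
      res.body = 'hello'
    end

    trap('INT') { server.shutdown }
    server.start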
Recently I found this thing, actually, where they informed us and Amazon that Intel CPUs have this horrible bug where from inside a VM you could lock up the CPU. And this is terrible of course if you're running a cloud service. And they found this. And this is pretty common. It's pretty common that when there's a really bad security flaw, they find it and go inform other people. And so, they're at this point, well at least in my opinion, the best in the world at this kind of thing. And they went from somehow adding a Z to the end of something to make it more secure to having what is, I think pretty clearly, the best security in the world. And that's something that, I wish I was around when that happened because I'm pretty curious how they actually made that change.
But I've heard that part of it was they were severely compromised. And this made them get serious. They've always had good security people. But something that often happens in, I think, a lot of companies is [inaudible] people say, "Oh, we should do this," or, "You can't do that. You shouldn't do this." And then the product people are like, "Well, that's a good idea. But we really need to grow the product." And at one point, I'm told, another story was that they were so compromised that when you came in, they had a pile of laptops. They just said, "Put your laptop in this pile, take one from this pile," because they knew they were completely [inaudible]. And this caused security to [chuckles] basically become a [political] power and become able to stop these things in the future.
JESSICA:
That's interesting.
CORALINE:
It's amazing that Google once did security through obscurity. That's just stunning.
JESSICA:
They were scrappy too, once. It's interesting that you can have really good people but not everyone gets listened to. There's always way more things that we should do than we can do. So, changing who gets listened to makes a big difference.
DAN:
Yeah, it's actually pretty interesting. So, I'm at Microsoft right now and we're trying to have SREs like Google has SREs. So, Google has SREs that I think are super interesting. They have this sort of antagonistic in a good way relationship with developers, right? They can choose not to support a feature. In the most extreme case, SREs can stop supporting it, too.
JESSICA:
SRE is site reliability engineer?
DAN:
Yeah. So, I guess you would call this DevOps in a lot of places. But yeah, if SREs choose to stop supporting a team, and let's say the manager made this decision, the team will be upset and the manager will probably [inaudible] his people, because no one really wants to be on call all the time. And SREs basically take the on-call and they'll handle issues as they come up. And they add automation to prevent issues from having to get mitigated by hand. And so, because they have this power to just walk away from anything, they can say, "No, you can't do this." So, they demand you have [run books]. There must be instructions for how to handle this kind of stuff. They get trained up with developers. They can really debug the code. And they do all this stuff to help reliability. Because they're not responsible for features, they can add monitoring. They can add automated mitigation. They can do all this stuff.
And at Microsoft it depends on who you talk to, but most managers that I talk to don't believe this is possible. They believe that devs should just be in charge of the product. And so, our ops organization inside Azure recently renamed itself to SRE. The role has not really changed yet. They're trying to change it, but there's this big fight between managers who believe that no one except for devs could possibly understand the code well enough to actually go debug things, and when things go down actually go fix it, and this org that is hiring [extra] SREs and trying to build this expertise. And even if they technically can do it, it's not clear that they politically can do it. So, I don't know. This is sort of a weird thing. But I think this probably happens everywhere you have this transition from, let's say, just dev culture where devs support everything, to an SRE culture where you have people just in charge of automation, reliability, and that kind of stuff.
JESSICA:
So, SREs are like magical fairies that take away on call and fix bugs? And they have the freedom to go from team to team and will leave your team if you make it too hard for them?
DAN:
[Laughs] I guess that sounds about right. I mean… I guess the thing is…
JESSICA:
I want that job!
DAN:
[Laughs] The only thing is I guess it takes a lot of time to move from team to team, because the SREs that I know of, they train up with a number of different teams. So, they're getting really familiar with the product, as familiar as devs are with that product. But yeah, that job sounds super cool. And something that I wish I had done at Google that I didn't is they have this rotation that's a six-month program where they train you as an SRE and you work as an SRE for six months. I think the reason they do this is to trick people into becoming SREs. Because people mostly don't want to be on call, right? But so, they give you this program. I think they even pay you a little bit more while you're doing it. And I think about half the people convert to SREs. So, this is one of their sources of SREs. They're like, "Oh, you know you don't really have to do this in the long run. You can just try it out." And it turns out a lot of people actually do enjoy doing it.
JESSICA:
Sweet. Yeah, I love just not adding features and just instead making things more reliable and adding monitoring and all those little things that get missed. But finding a place that values that is trickier.
CORALINE:
So, at my work we do a lot of monitoring. We have Monit in place. We have all these things in place. We use [inaudible]. We use [inaudible] logic. And we're trying to get better about this but one of the problems we have is that we get so many notifications that it's hard to figure out which ones are actually relevant, which ones are actually trying to tell me something. I don't know how many emails I get a day telling me how many Ubuntu packages are ready to be upgraded on server X. And you talk about that a little bit in your blog post about people who turn off notifications because there are too many of them and they're too annoying. So, we end up missing things. So, what are your thoughts around that process, Dan?
DAN:
So, I think this affects not just… [Inaudible] talk about reliability and uptime and that kind of stuff. But it affects all of software. My friends who are not technical, they'll run an installer. And this is especially bad if they're not technical and they're on some relatively nice distribution of Linux. And they still will have 14 warnings and 3 errors and then it will still work. So, they'll be like, “Well, /tmp/whatever/w3x513 couldn't be created.” Is that bad? They have no idea. And I'm a developer and I have no idea either. I have to go dig into these errors. And I don't because most of the time, 90% of the time you have these errors, nothing is wrong. Everything is fine.
This happens to me at work, too. There's a script I regularly run. It has 71 warnings. I sort of glance at the warnings. If the number increases to 72 I try to find out where the new one is. But it's like basically impossible to tell what's going on. And so, you just get in the habit of ignoring these errors. And you can't not do it. People might say that you should inspect every one. But this is literally impossible. Like if I did this at work, I would not have enough hours in the day to even run four scripts, because we have so many things that just eject noise, just right in your face.
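[A minimal Ruby sketch of the "watch for 72" habit Dan describes, done mechanically: count warnings in a log, compare against a saved baseline, and only complain when the count goes up. The file names and matching pattern here are illustrative, not from any particular project.]

    # Usage: ruby check_warnings.rb build.log
    BASELINE_FILE = 'warning_baseline.txt'

    log = File.read(ARGV.fetch(0, 'build.log'))
    current = log.lines.count { |line| line =~ /warning/i }

    # The first run records the current count as the baseline.
    baseline = File.exist?(BASELINE_FILE) ? File.read(BASELINE_FILE).to_i : current

    if current > baseline
      warn "New warnings introduced: #{baseline} -> #{current}"
      exit 1
    else
      File.write(BASELINE_FILE, current.to_s)
      puts "Warning count OK: #{current} (baseline #{baseline})"
    end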
And then this is a thing that happens also, like in medicine this is well studied. And if you've been in a hospital, I was hospitalized for a while. It's not a great experience. One thing is there's just this constant beeping. There's the beeping of the things that are doing things that are normal. I think there's stuff that just beeps that's like 'beep, beep, beep', that's fine. And then these alarms go off. I forgot what the story of the ventilator was, actually. So, I should go look this up. I should read my own blog post.
JESSICA:
[Chuckles] That was the one where there was an anesthesiologist who turned off a ventilator and thought he turned it back on but he didn't turn it back on. And so, the patient went into a vegetative state for lack of oxygen. And the reason that he wasn't alerted to the ventilator being off was that someone had turned off the beeping.
AVDI:
Because the beeping was annoying.
JESSICA:
Beeping is annoying. Oh my gosh. I've walked out of restaurants because they haven't turned off their beeping fry cooker thinger.
DAN:
[Laughs]
JESSICA:
That's harder to do in a hospital. You're kind of stuck.
CORALINE:
And you've got all those wires and tubes attached to you. You're like, “Who's in charge here?”
JESSICA:
Right. So, when our software beeps constantly then we stop listening to the beeping. And that is like the biggest challenge of monitoring and alerting, is to find the actionable alerts.
CORALINE:
We have Slack integration for our monitoring tools now. So, we have entire channels devoted to, “Hey, something's happening that maybe you want to look at.” And we've gotten to the point now where it's like, “Oh, I've seen that error before. It's no big deal,” versus, “Oh, that's a new error. I better look at that.”
JESSICA:
It's almost worse. You get kind of used to things being wrong and then that deviance is normalized.
CORALINE:
We did something kind of weird and I still feel kind of weird about it. We now are suppressing alerts for what we call expected errors. It's like, sometimes things just go wrong. And when they go wrong, they go wrong in this place or that place. So, when they do, don't throw an exception, just log it and move on. [Chuckles] That makes me feel a little dirty.
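[One way to do what Coraline describes without losing the record entirely, sketched in Ruby: rescue only the specific error class you've decided is "expected", log it, and let anything unexpected raise and page as usual. The class and method names are made up for illustration.]

    require 'logger'

    LOGGER = Logger.new($stdout)

    # A failure mode we've decided is "expected" and shouldn't page anyone.
    class UpstreamTimeout < StandardError; end

    def fetch_from_flaky_upstream
      # Imagine a call to an occasionally slow third-party service here.
      raise UpstreamTimeout, 'upstream took too long'
    end

    begin
      fetch_from_flaky_upstream
    rescue UpstreamTimeout => e
      # Suppressed, but still visible in the logs so the suppression
      # itself can be audited later.
      LOGGER.warn("expected error, continuing: #{e.message}")
    end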
DAN:
Oh, that's so interesting. This is one of those things where I think [I'm the unreasonable] person, right? When we get compile warnings or any kind of warning, I'm like, "We need to fix this." The warning needs to go away permanently. We cannot have this. And I think people sort of roll their eyes when I say this, because I say this so much. But it just drives me nuts that we have all these warnings all the time. And I'm sort of making a difference, but there are still way too many of these to actually fix. And I don't know. It just bugs me.
JESSICA:
It's one of those things that fixing one of them makes no difference. But you don't realize how different your life might be if they were all gone. Because then you'd be able to notice things that actually are wrong. It's a high cost to get to you don't know where, just like implementing version control.
DAN:
Yeah, I'm super happy with projects where it's clean on warnings, [clean on lint]. And any time there's a warning you really know it's potentially a serious problem. You can go and you fix it and you go back from one warning to zero warnings. That's great. But I think that a lot of environments, people have literally never seen that. So, it's hard to describe the upside of this actually, if you've never seen a case where you don't just have 450 warnings or something.
JESSICA:
That's true. It's kind of weird. When you work with someone who's never worked anywhere else, or with a large number of people who've never worked anywhere else, they can be used to things that people with a variety of experience are shocked by.
CORALINE:
And it's something I'm touching on in the chapter I'm working on for my book right now, 'The Compassionate Coder'. And that is taking advantage of that time when you just hired a new developer and they are seeing your entire system for the first time and they are reacting to it honestly and openly. Because very soon…
DAN:
Like a canary in a coal mine.
CORALINE:
Yeah. Very soon they're going to be conditioned that this is normal and that mythology around the codebase is going to be repeated to them enough times that they're going to start believing it and they're not going to make value judgments anymore. So, that time when they're new and they're looking at things with fresh eyes and asking questions like, “Why are you doing this?” or “Why is there a Z on the end of this thing?”, that's a really valuable time and you really need to take advantage of that and learn as much about your codebase and your operations and your processes as you possibly can during that time. Because it is really short.
JESSICA:
Right. Before they fix two warnings and then nobody gives a crap about those two and so they stop fixing warnings.
CORALINE:
Exactly.
DAN:
Yeah, I'm mentoring someone right now, someone who just joined probably three weeks ago. And I can just see it. He doesn't often object very strongly, but I can see this look on his face where he's telling me with his face, [chuckles] "What are you doing? Is it actually possible you have this practice?" And I sort of make a note of these things. We unfortunately are too busy to fix these things, which is why we have them in the first place. But at some point hopefully we'll have the time to go down this list and fix all these things that new people just sort of make faces about.
AVDI:
I feel like a part of the resistance to fixing stuff like this comes… it's this self-reinforcing cycle of a lack of maturity in fixing those little things. Like if you're not used to fixing those little needling warnings and things like that, then you go one day and think “You know what? I'm going to make a change. New Year's Day I'm going to make a resolution. I'm going to make a change.” and you go to fix one of those little warnings and it turns into this monstrous rabbit hole of tracking people down, of reading up on the sources of these warnings and digging into systems and tracking people down to ask them if it's okay to change something, getting [approval] [inaudible]. And I feel like there's a level of resistance that builds up just because we don't have maturity around very quickly and executively dealing with a new warning that's popping up or a new issue that we haven't had before.
JESSICA:
True. When there's no process to fix these things, you're establishing that process at the same time which is really hard to do when you're new and you don't know anybody. One thing I do when I'm new at a place is spend an inordinate amount of time fixing something like that. Like I hit this error and it didn't make any sense and gosh darn it, I'm going to make that error make sense for the next person, even if I have to spend a day or two on it. And that day or two is really spent building a model of the system that I'm trying to make a change in. And I just have to sit back and say, “I actually spent a couple of days building a model of this system and that is going to be useful later beyond this tiny little change that I'm leaving a breadcrumb for somebody behind me.”
CORALINE:
I think that part of it might be that we become used to, or immune to, weak signals. And new people, when you have that energy and enthusiasm without the context, you don't know what's a
weak signal and what's a strong signal. So, we're able to respond differently toward those factors at that point in time. Does that make any sense at all?
AVDI:
Yeah, I think it's a great point about weak signals.
JESSICA:
Yeah.
CORALINE:
It's like when you're new they're not noise yet. Everything is urgent.
AVDI:
Yeah, and it's tough. I feel like the ideal is to eliminate noise as much as possible and then you don't have to sort these things out as much. But it's very difficult. And I'm not dealing with this in an organizational context right now. But I feel like all this stuff is fractal. It scales up and down to organizations and then down to things like families and personal life. Like I'm trying to eliminate some of these same types of issues just in my personal life by devoting a certain amount of time to fixing things every day, fixing things or dealing with administrative tasks.
And it seems to make a huge difference, the fact that I devote time to it every day, because it's that maturity level, it's that ability to say, “Okay, here's a signal. I'm so used to dealing with new signals at this point that I can just bang, bang, bang, figure out what I'm going to do about it and I can do it and I can be done with it.” But if you save them up, if you build that pile of issues that need to be dealt with, that huge bin of mail that includes both urgent bills somewhere in there and a whole ton of mail that you're just going to throw away when you go through it, that big mix of signals, it's a lot harder to deal with, it seems like. I don't know any other way of coping with these things other than being very habitual about it, being very daily about it.
JESSICA:
That's a good point. We don't have time to do all these things. But the fact that we don't have time to do all these things leads to us not spending any time doing any of them. And you can set aside a specific amount of time. Sometimes hack days at work wind up being spent on stuff like this. I know I did in my last one. You can set aside a specific amount of time to pick something out of that list and do it. And that can increase your productivity in the time that you're spending working on features.
DAN:
So, I'm sort of curious how people who are at places where you can do this have seen this change happen, so that you can do this. There's an org that I [inaudible] work with. And they have one day a month, "do the right thing." And managers are really excited [to tell you about] this. They're like, "Oh yeah, people go and clean up all this technical debt and do all these great things." But when I talk to ICs, individual contributors, one of them tells me he just laughs every time this pops up on his calendar. It's sort of a joke. So, they want to have this time but they aren't able to actually make it happen. And I feel like a lot of changes to fix these things are like this. People are like, "Oh, we should do this. This will be great." But somehow the message gets lost as it goes down, and it becomes a sort of joke instead of a thing that people actually do.
CORALINE:
I haven't had that experience. I've had good experiences with hack days like Jessica was talking about, or with 10% time or 20% time. Those things have worked pretty well for me. But I have generally seen those work in organizations where people feel like their contributions matter, and they do feel respected and listened to. So, I imagine if there's some kind of disconnect between management and engineering, that sort of attitude of, "Well, it doesn't matter," or, "Wow, this is not important," or, "They're not really sincere" could creep in and ruin that. But that might be a case of where the pain is in the organization; sometimes change needs to come from above and sometimes it needs to come from below. And I think some of that might be an organizational dynamic that maybe you haven't experienced in the same way that I have, Dan.
DAN:
How interesting. Yeah. That makes sense. Certainly some of the organizations I worked with have been, I don't know, dysfunctional is too strong a word. But there's this disconnect between management and ICs that makes it hard for I guess [any] messages to get through.
CORALINE:
Yeah.
AVDI:
I feel like there's like a lower bound of [periodicity] beyond which all advantage of momentum goes away. Don't even talk to me about monthly habits. I think at the point of a monthly thing, every time, for me anyway, every time feels like the first time again. I can address that… like I have some tasks that I had scheduled for myself monthly and okay, I can do a little bit of documentation around them to remind myself what to do and it helps a little bit. But it's still, it's like getting to the very edge of that zone of, “This is actually a habit” versus something that I drag myself into doing one more time every time except it actually winds up being every three months. Weekly, eh. Weekly you can sort of have a habit, I think. But it's…
CORALINE:
Weekly you have a routine.
AVDI:
Yeah.
CORALINE:
Monthly you have a reminder.
AVDI:
Yeah, but monthly barely works for me. I feel like just with human nature it's hard to have a real habit, it's hard to have momentum on something you do less than once a week.
CORALINE:
I would totally agree with that. I think that we can't [inaudible] ourselves [inaudible] and periods that are [inaudible] a day. I would totally agree with that. So yeah, maybe [part of it] was like once a month we're going to do the right thing. Maybe the manager instead should just focus more on making sure that we're doing the right things as we go.
JESSICA:
Dan, I…
AVDI:
And then when you consider that a lot of agile iterations are still longer than [inaudible] two weeks or even a month and [inaudible].
JESSICA:
Dan, I have one example. I was thinking about have I been in a company that made that transition, and there was one. It was a little catalog retailer. And to replace the point of sales system they pretty much hired a new team. They got one Java developer who had worked there before and then they hired two more of us, which… so, they got new people who knew each other, had worked together before, so we had kind of this standard of how things are done. And then management didn't know anything about Java. They were totally clueless and they knew it, which is great. So, we were able to establish new practices and do things differently than the other teams and the other teams picked some things up from us and some things they didn't. So, in that case it was kind of an overwhelming number of new people who worked well as a team.
DAN:
Oh, that's interesting. That actually reminds me of something… sorry, I don't mean to change the subject, but it reminds me of something a friend of mine was just telling me. He co-founded a startup with a few other people. And they now have I believe four or five employees. And this is something I've also noticed and something… this wasn't in this blog post but I still find it to be striking where it's like very hard to both get and give feedback. I've basically never had negative feedback in my career and it's not because I'm amazing. I do all kinds of stuff wrong. But people are sort of in general, hesitant to give negative feedback because they're afraid it might demoralize you or something like that.
So anyway, this friend of mine, this startup, he mentioned that when they first started it was like very awkward for them to get feedback. It was very hard to do it. And now, they're able to do it and be not brutal but just be totally honest. And everyone takes it in stride and it works fine. So, they'll be like, “Oh, this email you sent out to the company was pretty [inane]. You sort of went on about this thing. Please don't do that. It was not helpful.” Or, “This code really sucked,” bad for this reason, bad for this reason, whatever. And no one takes it personally. And even at this company, the three of them, the co-founders, they can talk about this. But they don't do this with other people because they have not figured out how to do this with other people.
And I feel like there's this effect you get when you work with someone for a long time where it's easier to both give and take feedback. And I don't know. Ideally you could do this with new people, too.
But I don't know of any place that does this. I know Bridgewater tries to do this.
JESSICA:
[Laughs]
DAN:
In the sense that you can just give straight feedback. And they're known for being brutal. Like, interns often cry in meetings, because people are not used to being told honest feedback. So, if you actually do do this, it's really quite strange. And people really start crying. And so, it's like, I don't know, it seems really tough to do this with a team that isn't really used to each other, I guess.
CORALINE:
I think there are ways to do it without making people cry, too. In theater there's something called a shit sandwich where you're like, "You did a really good job in that monologue. I thought that your intonation was not as good as it could have been. But it seems like you're really drawn to this part and I think that's great." So, you take two good things and you put something bad in the middle of it. And that softens the blow some. So, it's possible to be humane and empathetic and still give objective feedback. I don't think it's a matter of people getting used to being made to cry.
JESSICA:
So, that's the… yes. And that's negative feedback in a very small context, surrounded by compliments. But I think when Dan gave that example of the three founders who could say anything to each other, that's because they're doing it from a place of acceptance, where they know the other person respects them, wants them around, values them. And so, when you make a comment about the email, it's just about the email. It's not about you as a person.
DAN:
That's a really good point. It's like hard to… this is a reason I basically never give negative feedback to people. It's hard to differentiate between, “Hey, this specific piece of work had a problem,” and, “You suck as a human being.”
JESSICA:
Right. It's really hard to hear that and not make it personal until you have the relationship built that you know you're accepted and valued as a person. And we need that. And we need to establish that kind of relationship so that we can get negative feedback. Because not giving negative feedback is like swallowing exceptions.
CORALINE:
But Jessica, we're beings of pure logic and reason. We don't need emotions or empathy. [Laughter]
JESSICA:
That's why we can…
CORALINE:
Didn't you read the job description? Come on.
JESSICA:
[Laughs] Yeah, it's strange how the article that Dan linked to by yosefk is called 'You can read your manager's mind' or something like that. And the fact is, with computers we're so used to getting exactly what we ask for, and that's frustrating. But with people we don't get what we ask for. We're actually likely to get what we want, whether what we want is useful or not.
CORALINE:
True. So, to kind of bring us back on topic, I guess, we've been talking around the idea of normalization of deviance quite a bit. And we've talked about some specific examples. But we haven't really gotten into how to prevent it. It's easy to say, "Oh my god. That organization is so weird. They do this weird thing. They don't have source control. They don't do this. They put Z's on the end of things." But we are guilty of these things as well. So, what are things that we can do as individual contributors, as maybe managers or technical leads? What kind of things can we do to prevent really weird, stupid things from just going unnoticed in our organizations?
DAN:
So, there's this paper that I linked to that I like a lot by John Banja. He works in healthcare and he has this list of things. I think it's actually hard to implement the list, but his list is pay attention to weak signals, which we talked about; resist the urge to be unreasonably optimistic, which I guess makes sense; teach employees how to conduct emotionally uncomfortable conversations which we just talked about; system operators need to feel safe in speaking up which is I think related to what you just talked about; and realize that oversight and monitoring are never-ending. My feeling is this list makes a lot of sense but it's incredibly hard to do.
And organizations are often… like, people are incentivized to do the opposite. Just looking at the list, say, resist the urge to be unreasonably optimistic: most of the people that I know who have had really great careers, as in they've been promoted up to technical fellow or [inaudible] or something like that, they're very optimistic. Maybe they don't feel that way, and if you talk to them candidly they will tell you what's wrong with their product, what's wrong with things. But if you want to get promoted you don't say, "Hey, I did this project and it screwed up in these 14 ways." You're like, "Hey, I did this thing. It's great. It's great for the company. It's like super amazing." And so, we sort of set companies up to have people not do these things. And I don't know how to fix that. That seems like, as an IC, something that's extremely hard to actually change.
JESSICA:
Right, because those people who got promoted to fellows who did these amazing things, by being optimistic and ignoring a bunch of failure paths they also got lucky. And those failure paths didn't happen to have ruined the company.
CORALINE:
I love that. They didn't happen to ruin the company. That is so cool.
JESSICA:
Right, right. But if you want to be at the tippy-top you need to be good at what you do and lucky. But what we need I think as an industry is a lot of people who are not reaching that pinnacle of, “I did an amazing amount of stuff and I got lucky it didn't break,” but a lot of people who are, “I did a useful amount of stuff and it didn't break because I took all these precautions.”
DAN:
Yeah, that makes a lot of sense. And there's something else. There's a friend of mine who's a technical fellow. He later left to found his own company, but he was a technical fellow at one of these large companies. And he told me the story of how he got there. And part of it is he just did really great work. The guy's a genius and so on and so forth. But the individual pieces of the story, some of them are quite disturbing. And I think this is just normal though. I've talked to other people at similar levels and they tell me the same story.
So, once upon a time he was on this project. I'll call it System A. And then in another part of the company, System B was being created. And System A was a small thing. It's supposed to be pretty good. This is a hardware company. This is in the early days of, not that early, but [inaudible] it is hardware, so people don't know where the [inaudible] operating system hardware should be and all this stuff. So, it's pretty complicated. There's a high failure rate. And so, System B is going to [inaudible]. It's going to do everything. It's going to be the one system that everyone in the company uses. And so, as this is going on, this guy notices. He's like, "System B can never possibly succeed. This is too complicated. It's going to fail for sure. We'll make System A simple and it'll be fine."
So, what happens is he gets pressure from upper management. At this point he was not yet a technical fellow, and upper management says, "Well you know, we should merge System A and System B. System A just does what System B does, but it only does one part of it, so this is redundant." And so, how he's able to fight this is he just lies. It's constant lies. It's a stream of lies coming from him to upper management. He was like, "Oh yeah, the product is basically done. You really shouldn't cancel it. It'll be done next week, actually." [Chuckles] And it's like six months away from being done. He's like, "Oh yeah, things are great. We'll [inaudible]. We've tested the initial version. Everything is fine," when in fact it's just falling apart.
And eventually System B does fall apart. He was right. Because System A wasn't merged in, System A does not fall apart. It actually ends up being successful. It's a little bit late, because no one knows what they're doing. It's research. So, it's not surprising it's late. But had he not just lied to upper management, his product would have been canned. And this is a story that I hear relatively often about people who lead projects at a high level. I don't think you see this in startups where everyone knows everyone else and you [inaudible] see through the lies. But at a company of 50 or a hundred thousand people, no one really knows what's going on. And so, there's sort of this screen of disinformation. And for people I know who are very high up, like VP level or whatever, it's sort of their job to generate this disinformation. The disinformation is always negative about someone else but positive about their own product.
I find this to be super depressing and I think it's one reason I've never become a manager. But, I don't know. This actually works, which is why people do it.
JESSICA:
It's almost telling the truth by lying, because they really needed to not cancel System A, but he couldn't tell them the real reasons for that, so he made up some other reason that would get the company to the truth through lies.
DAN:
[Laughs] Yeah.
JESSICA:
Yeah, that's called…
CORALINE:
Jessica, I can't wait for your television pilot to come out. [Laughter]
AVDI:
Has anybody else read the article 'The Thermocline of Truth'? I'm not sure exactly how related it is but what you were just saying reminded me of this a little bit. There's this article from 2008 talking about how in large organizations there is what the author calls a Thermocline of truth. The thermocline is the point in ocean water where the temperature suddenly drops off. You have all this really cold water coming up most of the way to the surface and then there's this thermocline where it's an area of quick change. And then at the top it's quite warm. And they're talking about how… “Oh, Jessica in the chat is saying that it's an inflection point.” Yes, it's an inflection point. And the article talks about how in a lot of organizations there is this thermocline of truth past which truth does not survive [chuckles] up towards the top layers of the organization.
DAN:
It's something I've also found super interesting. Often I'll tell the managers problems I'm having and problems that I think exist in the group. Like if someone has a personal problem they don't want mentioned, I won't talk about that. But just systemic problems. Well, it depends. But almost all managers, there's one exception, but almost all managers seem really thankful. They'll say, "Thank you," and I don't think they're just saying it. Their tone of voice is like, "Wow, thank you for telling me. This is great." And often it's problems that have existed for weeks or months, just no one has told them. Because there's this sort of, I don't know why, but things people [inaudible] try to [inaudible] just don't make it to managers. And it's like, this is a serious problem and again I don't have a general solution. I personally try to bubble up problems that I think are important. But apparently people are hesitant to do this.
CORALINE:
If you have a culture that is really strongly about enforcing the status quo then people are not going to want to challenge the status quo. If you have a culture where people are encouraged and rewarded for speaking up then you're going to get more of that behavior. It's a matter of, it's a cultural question, really. And I think larger companies tend to [favor] status quo, which is why smaller companies think that they have an advantage over larger companies because they're more willing to challenge things. Whether that's actually true or not is up for debate, but that's the theory.
DAN:
Yeah, that's fair. I think I've also been extremely lucky, or I guess privileged, in the sense that with I think again one exception, I've had extremely good managers who take the feedback seriously and act on it. And when I talk to a lot of my friends, they do not have this. They sort of get smacked down when they bring up problems. And if you've had this happen to you a number of times in your career, you'll just stop doing this with a new manager because it will seem like it'll be sort of the same. And I don't know why I've been so lucky, and I think it's nothing that I've done in particular. It's just the managers that I've had.
JESSICA:
I've noticed a difference between, especially at large companies, people who expect to work at that company for life versus people who are like, “Well, I'm the senior software dev. If I don't like it here I'll just go somewhere else because there's a zillion jobs for Java people,” in a willingness to speak up. The people who are there for life are all about, “No, I just need to make sure my job stays.” And I'm like, “No, look. This is happening and this is happening. So, do something about it or don't.” But the worst case for me is I go get another job and that's not so bad.
CORALINE:
Yeah, I think that with the industry being the way it is with the availability of jobs, people are not as willing to try and make an impact on company culture as they would have been under different circumstances. Because they're like, "Oh, it's not good for me anymore. I'll just move on and it's not a big deal for me."
JESSICA:
That's true. That's another consideration. And for me it's like, I'm going to make this effort to change the culture. And if it doesn't take, then I'm going to move on. But I'm willing to take this risk even at great political cost.
CORALINE:
Exactly.
JESSICA:
Yeah, because either…
CORALINE:
Yeah, I find myself in the same position. Because I have a conscience and I want things to be good for other people, not just me.
JESSICA:
Right. And best case, sometimes we get this sort of… I'm acting irrationally in terms of my interests if I want to stay at that job, but there's a possible but unlikely very high payout where we really do get culture change. And it gets better for everyone. And then we stay and everything is better.
CORALINE:
I would say Jessica, you and I are in similar points in our careers and we have a lot of capital. We have a lot of privilege that comes along with that. So, I feel like it's my responsibility to leverage that privilege to make things better, to make a difference in our company culture. Because I have the capital to spend and other people do not.
JESSICA:
That's a great point.
AVDI:
I see this pattern from the manager side in parenting. I can watch as my kids transition from… we tell them. What we say to them is, "You can always talk to us about anything." But then I can watch when one will start hiding mistakes they made or hiding something that they broke. And it takes a tremendous amount of mindfulness, very deliberate mindfulness, to make that transition from observing the, "Oh, now they're hiding things that they broke so now their behavior is even worse," to, "Okay, they're hiding something that they broke. That means not only did they do something that they shouldn't, get into something they shouldn't have been getting into, but they also clearly perceive that the disincentive to actually talking to me is so high that it's better to try and hide it than to come and talk to me about it." It's very easy to just jump straight to, "Oh, now they're even worse. Now they're doing bad things and not telling me about it."
CORALINE:
Short version: parents make better managers. [Chuckles]
AVDI:
It gives us an opportunity. I don't know if we necessarily make better… [Chuckles] or necessarily act on the opportunity. I guess my only point there is that it is really difficult from the manager side of things to look at that situation and say, "Oh, this is my problem," instead of, "Oh, this is their problem that they didn't tell me."
CORALINE:
Right. So, is there anything else that we should talk about before we get to picks? Dan, did we cover pretty much everything that you had in mind?
DAN:
There's sort of one question I have, which is when you're shopping for jobs… so you mentioned this, and I think this is true. A lot of us are pretty privileged and if a situation becomes really bad in some sense we can just leave. But if you're shopping for jobs, how do you find one that doesn't have these problems? Well, maybe it's your dream to go into a place that's sort of messed up and fix the problems. But I think a lot of us, we just want to find a place that we sort of like. And we can make it better, but it should have a baseline of being sort of reasonably good. I feel like I sort of lucked into two places that were really good. And in the third case, I got into a place that is not what I would have gotten into had I known what it was. But it's quite hard to actually, at least in my experience, it's very hard to actually find out what's going on inside a company.
So, if you ask them, this is an example of something that's pretty obvious, but if you ask them, "Are the hours flexible?" literally every company I've talked to will say yes. [Inaudible] the details. The answer a company… well, I think this has changed. So, at this one particular office in [inaudible] Austin, the answer I got was, "Oh yeah, yeah. We're very flexible." And I asked, "When can people come in?" And the answer was, "Any time before 9 AM. Some people come in as early as 6 AM." And I'm thinking, "Oh, it's interesting. My schedule is limited by this meeting I have at 2 PM on Tuesdays that I have to be in for. Otherwise, I can be in at any time." And these people think that "before 9 AM" is a lot of flexibility. It is, compared to IBM in 1980 where you couldn't even come in early.
So, it's sort of, I don't know. I feel like people always have a positive answer. And I think they believe it. It's not like they're lying to you. If they're lying to you maybe you could figure it out. But they're like, "Oh, no, no. This is actually totally fine. We're great at this." And then, if you actually press for details, they give you these funny answers and you find out if they're using version control or whatever. But it's hard to just find out while talking in an interview.
CORALINE:
Yeah, I have some tips on that because I give a lot of advice to new people, people who are early career. And so, being explicit in asking for details I think is really important. Also talking to people who either work there now or have worked there to find out some direct experiences that they've had and things that maybe they think could be better or things that are great. I also like to ask questions of the people who are interviewing me because interviews are two-way, right? So, I will say, “So, what is something about the company that really annoys you that you would like to change and how would you go about changing it?” kind of turning some of those questions around, because of course there's something. And if they say, “Oh nothing, it's great here,” then you know they're lying to you. And you can use that as information, use that as data.
Another thing that I found really useful is that I take the interview process as a reflection of what the company culture is. So, if there are really great things that they do during the interview process to make me feel really welcome, or to make sure that I'm not highly stressed out, or, if I'm in a pairing situation, to make sure that I feel like I'm able to contribute in a good way, those are reflections of the culture. Interviews do not happen in isolation. So, I use everything that happens to me through an interview process as a data point to extrapolate what it's going to be like working there.
DAN:
Oh, that's interesting. Do you feel like the connection between the interview process and the actual work is closer at a small company than a big company? My impression at large companies is some of them have this interview process that's very smooth because there's an org basically designed to do that. But that org doesn't get to affect the day-to-day process. And the reverse happens, too. The interview process can be a total mess because this org basically runs it. But at small companies, now that I think about it, I think it is basically true that places with a good interview process are also generally places that you want to work.
CORALINE:
It probably is different for really large companies where there are people who are dedicated to making the interview process smooth. And they're probably isolating you from the culture to a degree there. So, I would agree with that.
AVDI:
It does seem like if you send them an email after your interview and you don't get an email back for a week or two, that does seem like an indication if nothing else of how they handle weak signals.
CORALINE:
Yeah, and how they value your time or how they value your emotional state.
AVDI:
Right.
CORALINE:
So, all this just leads me to my favorite conclusion and that is that the hard part of doing software is the people parts because very little of what we talked about today has anything to do with technical challenges or the fact that software and hardware need to talk to each other and don't use the same language or any of those sorts of things. This is all about people having to interact with people. And that's what we need to get a hell of a lot better at as developers and professionals to advance our industry.
JESSICA:
And choosing what to do is a lot harder than figuring out how to do it.
CORALINE:
Yeah, definitely.
JESSICA:
We did talk about having routines and setting aside time for that list of things that you should do but never really have time to do, and you'll never do all of them. That's something implementable.
AVDI:
Yeah. Yeah, but you've got to be really disciplined about it. You can't… they just can't be subject to emergencies, except in the most extreme cases maybe. You can't have them be, "Oh, we'll push this back because we got a deadline." And the only way I've been able to start dealing with some of this stuff in my own life is basically I have time for improving things and I also have a piece of time set aside every day for administrative stuff. And during that hour, that is the only thing that I'm allowed to do, and that's the only time I'm allowed to do it. I am not allowed to work on creative work. And I'm also not allowed to work on admin work before that. And all of my time is use it or lose it. So, if I don't get the thing done that I was doing in my creative work time before that, I don't get to say, "Oh, I have to push this back. Oh, I have to push this back." Because anything I push back, that's pushing back family time at the end of the day. So, it's use it or lose it. And during that time it's all you get to do. And I don't know any other way than to just be very disciplined like that, because otherwise stuff is always going to take precedence, the urgent unimportant stuff as Stephen Covey puts it, or even the urgent important stuff.
JESSICA:
And everything is relative. So, watch out in interviews. Because adjectives like flexible can mean very different things to very different people. That was a good lesson.
CORALINE:
It's been absolutely great talking to you, Dan. This is a really interesting topic and I'm really glad you wrote your blog post and I'm really happy that we're able to talk to you about it on the show today.
JESSICA:
Yeah, thank you.
DAN:
This has been a lot of fun. Thanks for inviting me on the show.
JESSICA:
Okay, is it time for picks?
CORALINE:
I think so.
JESSICA:
Great. Coraline, do you have any picks?
CORALINE:
I have one pick today. It is a really, really cool project. It is called Octohatrack. It is based on a concept called 'Let's All Build a Hat Rack' which was created by Leslie Hawthorne. The idea is this. You go to GitHub and you see your contribution graph, and the contribution graphs are great. They show every piece of code that you've changed. But there's a lot more to being an open source citizen than sending in pull requests. So, these kinds of graphs and metrics like that ignore a lot of the non-code contributions that people make.
So, Octohatrack is an attempt to rectify that situation. With Octohatrack you give it a GitHub repo name and it returns a list of every GitHub user that has ever interacted with the project but has not necessarily committed code. Those interactions include raising or commenting on an issue, commenting on a pull request, commenting on a commit, all these sort of social interactions. And GitHub says it's for social coding but we ignore and don't measure the social interactions. So, that's what Octohatrack does. It generates some HTML representation of contributors who have done technical contributions as well as non-technical contributions with thumbnail images. And it's something you can embed on your page which is really, really cool. So, I like the idea of rewarding good citizenship in open source. It's something that's very close to my heart. And I want to start valuing non-code contributions more. And so, I think Octohatrack is a great step in that direction.
So, that is my pick.
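[A minimal sketch, not Octohatrack's actual code: one way to pull a repo's non-code contributors out of GitHub's public REST API, in Ruby using only the standard library. The repo name below is just a placeholder, and unauthenticated requests are heavily rate-limited.]

require 'net/http'
require 'json'
require 'set'

# Fetch one page of a GitHub API listing. Unauthenticated, so subject to
# GitHub's low anonymous rate limit.
def github_page(path, page)
  uri = URI("https://api.github.com#{path}?per_page=100&page=#{page}")
  response = Net::HTTP.get_response(uri)
  response.is_a?(Net::HTTPSuccess) ? JSON.parse(response.body) : []
end

# Collect the logins of people who commented on issues, pull requests, or
# commits -- interactions that never show up in the commit history.
def non_code_contributors(repo)
  logins = Set.new
  ['/issues/comments', '/comments'].each do |endpoint|
    page = 1
    loop do
      comments = github_page("/repos/#{repo}#{endpoint}", page)
      break if comments.empty?
      comments.each do |comment|
        login = comment.dig('user', 'login')
        logins << login if login
      end
      page += 1
    end
  end
  logins
end

puts non_code_contributors('octocat/Hello-World').to_a.sort

[Per the description above, the real tool also counts people who raised issues, not just commenters, and renders the result as an embeddable HTML page of contributor thumbnails.]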
JESSICA:
Sweet. Avdi?
AVDI:
I think I just have one pick today. I will pick the Audible book that I just finished this morning, on this morning's run. And actually, it's not a book. It is a course. It's a series of lectures. So, one of the cool things about Audible is that not only do they have books now, they also have a ton of lectures from the great courses series. So, maybe you've seen that catalog at some point where they have all these university courses that you can order on CD. But Audible has them now.
It's called 'Einstein's Relativity and the Quantum Revolution: Modern Physics for Non-Scientists' by Professor Richard Wolfson. And it has been just a terrific high-level introduction to all the physics that I never got around to catching up on, in a very non-scientist-friendly, non-mathematician-friendly way. There's no math in it. It's just a lot of good metaphors and a lot of great explanations for why things are the way they are and where the current state of the art is in understanding the universe. So yeah, that's it.
JESSICA:
Sweet. I'll add one to that and it was a cool article that I read today about how the theory of special relativity impacts GPS calculations. And you'd think relativity is about light speed and things really far away. But now, like satellites and our phones are doing calculations involving relativity. So, that's one of my picks.
I also want to pick a talk by Katrina Owen from Bath Ruby last year called 'Here be Dragons'. And [chuckles] she shows some Ruby code that's entertaining and teaches positive coding style things. And then she makes it about the motivation of whether to cooperate with your team members or to push your code forward in spite of its longer-term impacts, because that's what's rewarded by management. I think that's relevant to today's talk.
CORALINE:
Yeah, very much so.
JESSICA:
Yes. So, those are my…
CORALINE:
I'm speaking at Bath Ruby this year. I'm so excited about it, by the way.
JESSICA:
Cool. Congratulations. Dan, have any picks?
DAN:
Oh yeah. So, there are a couple of things. I'm allowed to pick two things, right? So, one thing that I really like is there's a site, tweet.onerandom.com. You can go to it and it will just give you a random tweet. You can press a button. You'll get another tweet. And I spent 20 minutes just clicking this and getting random tweets. And the thing I find interesting is that I never, literally have never seen a tweet on this site that is like the kind of thing I'd see on my Twitter feed. So, it sort of reminds me how, I guess, niche my interests are. I follow lots of people who have tens of thousands of followers. So, you sort of think, "These people are a big deal. This is a large part of the world." But in fact, it's tiny. And I don't know, I like to be reminded of this every once in a while. I'll click this again. I got this thing in French. It's some French video game I've never heard of. And every time I click this it's a completely random thing that is super different from anything I'd normally experience.
The other thing I like is there's this paper at ISCA this past year. So, ISCA is a computer architecture conference. And it breaks down Google's actual workloads. Like, what do they spend their time on? And one reason I like this is because it blows through a lot of conventional wisdom about optimization. Like for instance, a thing that you commonly hear is that you should go profile things, find the things that are hot, and then go reduce that. And they mentioned the hottest 50 binaries take up… that's about half of their cycles. The top 10 is like 10% of their cycles. And the same is true for [inaudible] functions or whatever. And so, in general I like workload characterization papers because, I don't know, they give you an idea of what's going on. And it's a lot of work to actually do it yourself. So, it's interesting to read in a paper.
Oh yeah, so the tweet.onerandom.com is possibly not safe for work. I have not yet seen that happen but it's literally a random tweet. So, it could happen.
JESSICA:
No problem. Thank you for those picks. Alright, thanks everybody. This was a great episode.
CORALINE:
Yeah, it was a lot of fun. Dan, what are you doing next?
DAN:
[Chuckles] You mean literally next? Going into work and I have to check my meeting schedule to see if I have more meetings. But probably just trying to code and fix some, add some tests. The kind of stuff that I need to do but we sort of don't make time for.
CORALINE:
Cool. Well, thanks so much for taking time with us.
JESSICA:
And when someone wants to learn more about you or see what you're up to, where should they go?
DAN:
Okay. So, I blog at DanLuu.com. This is just sort of random thoughts. And then I suppose I sometimes tweet whatever, @danluu.
JESSICA:
That's cool. We can link those in the show notes. Thank you so much.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.]
[Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y dot com to learn more.]
[Would you like to join a conversation with the Rogues and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at RubyRogues.com/Parley.]