226 RR The Leprechauns of Software Engineering with Laurent Bossavit

Special Guest: Laurent Bossavit

Show Notes

02:03 - Laurent Bossavit
04:52 - The 10x Programmer
13:07 - The Cost of Defects Curve
15:33 - Leprechauns and Local Truths (Does Needing to Prove Others Wrong = Fear?)
22:53 - The Feedback Cycles
25:09 - Agile, Waterfall, and The Software Crisis
32:30 - Estimations, Calibration and Assessments
38:16 - Starting Points/Research Skills for Identifying Leprechauns
  • 1. Skepticism
  • 2. Curiosity
  • 3. Tenacity
43:14 - The Value of Leprechauns
46:46 - “Most of our job is learning.”
50:44 - The Definition of “Insanity” => Experimentation
Picks

Transcript

 
AVDI:

Yes, it used to be laryngitis. It was like, "Huh, that's funny. I sound like a frog." Nowadays, it's like, "Oh, no! I can't do my job!"

[Laughter]

[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on Ruby developers, providing them with salary and equity upfront. The average Ruby developer gets an average of 5 to 15 introductory offers and an average salary offer of $130,000 a year. Users can either accept an offer and go right into interviewing with the company or deny them without any continuing obligations. It’s totally free for users. And when you’re hired, they give you a $2,000 signing bonus as a thank you for using them. But if you use the Ruby Rogues link, you’ll get a $4,000 bonus instead. Finally, if you’re not looking for a job but know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept the job. Go sign up at Hired.com/RubyRogues.]

[Snap is a hosted CI and continuous delivery that is simple and intuitive. Snap’s deployment pipelines deliver fast feedback and can push healthy builds to multiple environments automatically or on demand. Snap integrates deeply with GitHub and has great support for different languages, data stores, and testing frameworks. Snap deploys your application to cloud services like Heroku, Digital Ocean, AWS, and many more. Try Snap for free. Sign up at SnapCI.com/RubyRogues.]

[This episode is sponsored by DigitalOcean. DigitalOcean is the provider I use to host all of my creations. All the shows are hosted there along with any other projects I come up with. Their user interface is simple and easy to use. Their support is excellent and their VPS’s are backed on Solid State Drives and are fast and responsive. Check them out at DigitalOcean.com. If you use the code RubyRogues, you’ll get a $10 credit.]

SARON:

Welcome to the Ruby Rogues Podcast episode number 226. I'm your host, Saron and with me today, I have Avdi Grimm.

AVDI:

Hello from Tennessee.

SARON:

Jessica Kerr.

JESSICA:

Good morning.

SARON:

And today's guest is Laurent Bossavit. Laurent, do you want to introduce yourself?

LAURENT:

Hi. Hi Avdi. Hi Jessica. Hi Saron. I'm a developer with 20-plus years of experience. It doesn't actually show in my hair yet. I'm currently working for the French government, of all things, on funny things we call state start-ups. That's basically our bid to try to get the government to work in a more agile way, except we're no longer calling it agile. Anyway, that's me.

JESSICA:

Wait, why are you no longer calling it agile? I want to hear that.

LAURENT:

Oh, that's passé.

JESSICA:

[Laughing]

LAURENT:

We're through with that. It doesn't really matter what we call it as long as we do the right things: deliver very frequently, ship stuff that works, make customers happy. I know that's still a huge cultural shock.

SARON:

Is the happiness shocking?

LAURENT:

Oh, yes. I mean you know what it's like. Maybe you don't know what it's like working in the government. But government and happiness, those two words don't usually go hand in hand. Well, I'm just kidding here but…

SARON:

[Laughing] That's okay. We understand. So one of the things that we wanted to talk to you about was a book that you wrote called ‘The Leprechauns of Software Engineering’. Do you want to tell us a little bit about that?

LAURENT:

Where to start? I guess the first thing is to explain the title. Someone once remarked to me that straightforward titles work better than cryptic ones, but I didn't know that at the time and I'm kind of stuck with this title.

So leprechaun was the name I came up with for things that "everyone knows" in the software profession specifically. I think the concept of a leprechaun can be reused fruitfully in other domains, but I was very much focused on the software domain. So a leprechaun is something that everybody has heard about, that everybody knows is a well-known fact, except that it turns out not to be true when you get closer to it.

I came up with the name leprechauns because some of the objections that people came up with, some of the criticism I got, was along the lines of: but can you prove that this thing you're talking about doesn't make sense or doesn't exist? I was taken aback by that until I started reasoning that this is like trying to get me to prove that leprechauns do not exist. That's not possible. So I can't convince people that leprechauns do not exist. What I was trying to do in the book was to show the homework that I'd done in trying to locate the sources of some of these well-known facts, noting the times when I came up empty.

JESSICA:

Can you give an example of some of those facts?

LAURENT:

The first one that I started with was the 10x Programmer myth: this notion that the best programmers outperform the average programmers in terms of productivity by a factor of 10 at least. People are fond of quoting that. It seems that lately there's more of a backlash and people are starting to profess doubts about this one. Maybe that's down to me having done my job. I don't know.

AVDI:

I always accepted that one, well, at least for a while. I read it and, I think like everyone else, I thought, "Oh yes, okay." You match it up with your experience of this one programmer who was really bad and these other programmers who were super good, and you don't think about it.

JESSICA:

The point about whether it was 10 times greater than average is significant, because I've definitely seen some 1/10 programmers.

[Laughter]

LAURENT:

That's the thing when people quote that. It's like the business with the quotes by Einstein that Einstein never actually said. My favorite one is: the definition of insanity is doing the same thing over and over again and expecting different results.

JESSICA:

I love that one.

LAURENT:

Yes, but that's not the definition of insanity.

JESSICA:

Ahhh.

[Laughter]

LAURENT:

It's the definition of experimentation, right?

JESSICA:

Darn!

LAURENT:

And Einstein never said it anyway. So it's something that people keep saying, but they misquote it. The original research, when I went looking into it, was never about comparing average developers with the top developers. It was comparing the worst with the best. I guess you would agree, I hope you would agree, that that's a different proposition.

JESSICA:

Right. Yes, there's a big difference between comparing the worst to the best and comparing the average to the best because if people go looking for that 10x programmer, they're just going to find somebody who thinks they're a 10x programmer.

LAURENT:

Especially if they are comparing themselves to people who really are not good at programming. That actually makes the factor infinite in principle. Given any programmer, you can probably find someone, if you look hard enough, who is at least 10x less competent than they are.

JESSICA:

Yes, so that makes people think that the whole 10x myth makes sense because they can line it up with something they've experienced.

LAURENT:

Yep.

JESSICA:

But there's a big difference between the anecdote that one person is 10x better than another one, and the claim that there's a whole class of programmers who are 10x better than all programmers on average.

LAURENT:

And that's the dream: this notion that somewhere there is a 10x programmer land that you can go to and basically find people who will save your project or save your start-up or whatever. So one of the things I found, not specifically when I was investigating that myth, because that was only the start, but as I was looking into more and more of these things, is that it struck me that people believe things primarily because they want to believe. They want it to be true. It's a comfort to think that you don't have to invest in actually training people and developing competence. You can just go to 10x programmer land and catch one of those rare beasts.

SARON:

That's interesting. I'd never thought of it that way. I didn't think of the idea of a 10x programmer meaning that you don't have to invest in training someone. You just need to find someone who magically has these skills.

LAURENT:

Here again, there is a difference between what the research actually says and what you can read between the lines of people trotting out that claim in discourse about how we should manage programming projects. That's one of the reasons I wanted to really look into the actual sources. I wanted to not be fooled about whether there was actually any research on that, and about what the research actually said and what its limitations were.

So for instance, you have to know that most of the research was actually done in the late '60s or early '70s, something like that. Practically everything that people cite as confirmatory research of those early studies was in fact indirect citation of the original studies. It's like you have one researcher coming out with a paper saying, "We were investigating something else and, oh, by the way, we found a 20x factor of difference between the best and the worst in that group." Then someone else writes a paper saying, "There is this interesting research result that says there's an order of magnitude difference between the best and the worst programmers."

Then a third person piles on to that, saying, "There are already at least two results saying that there's an order of magnitude difference between programmers." By the time you get to the fourth or fifth paper in the chain, it's like everybody knows and there's tons of research saying there is this order-of-magnitude difference, when it all points back to the very same primary source.

AVDI:

I feel like I've heard somebody call that citogenesis or something like that.

[Laughter]

You know where a fact is basically manufactured by multiple citations.

LAURENT:

Right, appears out of thin air.

JESSICA:

Yes, and we had that same problem when I used to work in PR. It was a problem we looked out for too, because in journalism we call it the echo chamber. You have this one story with this one source. It gets picked up by one other publication, and another publication picks it up from that publication. All of a sudden it's like, yeah, this really big thing is happening, when really it was just this one tiny source that everyone is piggybacking off of.

SARON:

So it goes viral. It's like a cat picture. [Laughter]

AVDI:

And the shocking thing is that it goes viral even in professional or scientific papers.

LAURENT:

Yes.

AVDI:

Not just like Facebook.

LAURENT:

Right. As I put it, it crosses the layman researcher barrier.

AVDI:

One of the things that I was shocked by as I was reading your book is realizing that some of the things you addressed, I can't remember if it was the 10x Programmer, but some of the other things like the cost of defects curve, I'm not sure if that's the right term, were things that I took as gospel because I read them in books like Code Complete, which is famously one of the better-researched books on software development.

LAURENT:

Yes, that's one of my favorite books as well, which put me in an awkward position when I got into a fairly intense row with Steve McConnell over the 10x thing. I think when we hear the word 'research', we make different assumptions as to the level of rigor involved. There's the level of scrutiny that a claim deserves if you're in actual structured academia.

But most of the sources that we have as working programmers are people who, and that's going to sound harsh, but I mean it for myself as well, people who are propagandists rather than actual scientists. We don't have the same expectations applied to us. We are trying to convince rather than to find truth or unshakeable evidence of something.

JESSICA:

That's very true. As a conference speaker, when I talk about some ideas or some experience I have, it's completely anecdotal. I'm like, "You should do it this way. You should use immutable values because it makes life easier." What I really mean is my life and my particular experiences. I just want people to take what's useful to them. But you're right. As soon as anyone with even the appearance of authority says something, that can start the citogenesis.

LAURENT:

Yeah. I think the problem… you mentioned the cost of defects curve, so that was probably the second thing that I started investigating. I was in a really different position on that one. The 10x thing always rubbed me the wrong way because it assumed that people had some innate capacity to be a good developer. I thought that was a pretty crazy thing to propagate, and that we should instead invest in developing people, training people, and developing more skills than just technical ones.

So I was biased against that claim initially, which means I was probably looking harder for evidence against it than I was looking for evidence for it. But the cost of defects curve was an interesting case in that sense, because there I was actually one of the people firmly convinced that this thing was true.

JESSICA:

What is this thing?

LAURENT:

The claim is that, and there are again different interpretations, because it's suffered this telephone-game thing where as you go farther from the source it becomes more and more distorted, but the way I would have phrased it a few years ago was: everybody knows that the cost of fixing a bug grows exponentially with time, so the longer a bug has been around in your code, the more expensive it's going to be to fix. I was using that as the justification for practices like Test-Driven Development and Pair Programming: catch bugs at the source before they become big problems.

I even found myself at one point getting into an online argument on a forum, the 'someone is wrong on the internet' phenomenon. I was arguing my side by saying it's very well-known, in fact probably one of the few well-accepted facts in software engineering, that bugs cost more to fix the longer they stick around.

It's funny because after I wrote the book, I went back to that forum and I apologized in public to the person I was arguing with at the time, saying, "Oh, I'm sorry, I was wrong. You were right." That was a measure to me of the degree of effort that it takes to actually change your mind.

SARON:

So when you're looking at these different leprechauns, is that the common theme? Is it that this one little fact got blown out of proportion, went viral, and was taken out of context? Is that what they all have in common?

LAURENT:

That, and they get distorted in various ways, and also people tend to use them as a bludgeon, as a way of hitting people over the head to get them to believe in something. I think the solution to that is actually pretty simple. Rather than basing your claims on authority, you can just say, "I found that this or that thing works for me." It's interesting to inquire about why: why does it work for me? There may be a reason why it might not work for you, and that would also be an interesting thing to find out. So I found that I didn't actually have to have the authority of decades of research behind me as I was arguing for things.

JESSICA:

So now you speak from the authority of your own experience?

LAURENT:

That, and trying to… I mean, probably not everything in software engineering has to be anecdotal. We can probably do useful research and find out things which are reliably true under various conditions. But doing that sort of looking for what I would call local truths requires a frame of mind where you're not simultaneously depending on those truths, those arguments, those claims to convince other people.

JESSICA:

Can you say more about those local truths?

LAURENT:

One of the problems with things like the cost of defects curve, or anything to do with defects in general: if you come across an article saying, "On average, it costs this many dollars or man-hours to fix a defect," that's very likely to be a leprechaun, because every project is different, so there is going to be a huge range of variation between projects. In one context it's going to cost maybe, I don't know, 10 bucks to fix a typical defect, if you can even say that there is such a thing locally in a project. And in a different context, maybe, I don't know, aerospace or embedded medical devices, it's going to be much more expensive. So it doesn't even make sense to formulate statements as if they could apply to all projects everywhere, whatever the domain may be. That right there is a good sign of a leprechaun: something which is too universal-sounding.

But you could look at your own project and say, "In general, when we create a bug, what tend to be the causes?" And you could say, "Maybe half of the time it's because we have miscommunicated something in a dialogue with a customer or users. The other half of the time it's because we're working with… I don't know, maybe it's because we're using Java and Java is a crap language." Something like that. I'm not suggesting that Java is a bad language.

JESSICA:

But it could be bad for a particular project.

LAURENT:

Right.

JESSICA:

Or a particular group of developers.

LAURENT:

I mean, that would be a useful insight if you could uncover it: that there is some kind of mismatch between the competencies of a group and the language they are using. Then you could come to a useful decision, which would be to, I don't know, maybe switch to Ruby or something. Or, I don't know, maybe COBOL.

[Laughter]

Probably not COBOL.

AVDI:

It's interesting how attached to these things we get. I've tried to think about why. I know I've had these arguments before, arguments like: you should do this, here's the research that says you should. Somebody's wrong on the internet and I can't rest until I prove it to them. It's strange introspecting on that and trying to figure out where that urge comes from. I really think it comes from fear, from pain that I've experienced in the past. Fear that somebody out there is inflicting, or is working on inflicting, that pain on somebody else through their ignorance.

JESSICA:

Maybe. When you do experience something like those really painful bugs in production, you want to save other people from that. I love that Laurent called that a local truth, because it is. It's very true in your experience. Yet something completely different could be very true in someone else's experience. When something has been universally true for you, it's really hard to recognize that it might not be universally true for someone else.

LAURENT:

To pick up on the fear aspect, I think there's an emotional angle to the whole thing. We have a very well-developed capacity to fool ourselves, basically. That shows up in programming with probably more frequency than in other professions. We think that something is simple. We code it as if it were simple. We conveniently forget the things we know about that thing not being so simple. So we can just rest on our laurels until the thing gets to production.

And of course the longer the cycles, the farther that reckoning is put off, right? Which is why one of the things that I have in fact been convinced of is that short feedback cycles are actually a big deal in programming. I hold this opinion; I could be disabused of it, but it would take some work.

But anyway, we are good at this fooling-ourselves thing. So we are confronted with evidence that we were not in fact correct in assuming that a user's name is always a first name and a last name. Maybe you've had that kind of wake-up call: someone tries to input a one-component name, and they're not able to even use your app because their name is just something, not something something, just something.

I have a friend whose name is [inaudible], just [inaudible]. So his very existence defies the assumptions of probably 99% of programmers out there.

JESSICA:

Yeah, and worst case, he can't even use your app, and you never learn that because he can't use it. [Laughter]

LAURENT:

Exactly.

SARON:

Yep.

LAURENT:

I think it actually takes active effort to go seeking out things which will invalidate your operating assumptions. And when you do find one, it's usually not welcome. Someone tells you, "Oh, you're wrong," and most likely you're going to take it as a personal attack. So it's very difficult to put yourself in the frame of mind where, when someone tells you you're wrong about something, you jump for joy and say, "Oh great. I'm so happy you told me I'm wrong."

SARON:

Because that's an opportunity for learning.

LAURENT:

Exactly.

JESSICA:

You mentioned feedback cycles. I think you have a really interesting point about how the longer the cycle from development to production is, the longer we get to keep our illusions about the simplicity of what we just coded. I totally agree that short cycles are super important for learning, because if it's been more than two weeks since you coded the thing, you don't remember that experience. How are you going to learn anything from the new piece of information?

LAURENT:

Right, which by the way is one excellent argument for why having bugs stick around longer makes them more expensive to fix. But you have to scope that argument properly. It's not going to affect all bugs. Something which is a key architectural assumption buried somewhere deep in your code, or even worse, copied and pasted in 20 places, and then you wait six months for that to come to light: yes, that's going to be a big problem.

JESSICA:

One thing we can do is make it cheaper to fix the bugs that do make it to production, because of those shorter development cycles. When we make it cheap to make changes in production, we've drastically lowered those costs. That's a win no matter what.

LAURENT:

When you start down that path, I think there are lots of things that come up which are useful. Like, you should try to make yourself consciously aware that you are making all sorts of assumptions about how your users are going to respond to the app. You go out and look for ways to invalidate those assumptions, rather than close your eyes and say everything is going to be okay because you have a pretty good idea of who the users are. That's where you're going to change things a lot.

So you're going to look for ways to, for instance, monitor production and usage. You're going to comb your production logs for any kind of insight they provide. You're going to take a very different attitude to developing the product. So rather than the usual build-it-and-throw-it-over-the-wall-to-operations approach, maybe that's one of the things that causes you to take more of an interest in, say, things like DevOps.

JESSICA:

Yeah.

AVDI:

I'd like to hear a little bit more about some of the other leprechauns you discovered. I think the 10x programmer and the defect-cost curve are some of the biggies, but are there any other claims that you found that just don't hold up?

LAURENT:

Again, a bunch of things about the economics of software defects, and about what we think we know about the drivers of project success. Figuring out that the cost-of-defect claims rest on a very limited data set, that those claims make sense only in a very limited way, was one of the most liberating moments. Then there are others that I went after more for fun, where I doubted from the start that they were true; it was more the fun of the chase, figuring out where the original claim could have come from. One of the funniest ones is the thing about 70% of Department of Defense projects having ended in project failure.

This thing has been picked up by a few of the agile gurus as a way of arguing that Agile works better than Waterfall. So this is one of those claims that people have repeated left and right because, based on borrowed authority, they felt it put to rest arguments about Agile. When I looked into it, it turned out that the actual source of the claim was one study in the '70s of a group of seven projects that were examined by the General Accounting Office of the US because they were in trouble. So that was a biased set right from the start.

Then somebody mixed up that study, again from the '70s, on a very small, very limited set of projects whose total budget was, I think, $7,000,000, with another slide in the same conference presentation which showed the overall size of the Defense budget on IT, which was $35.7 billion. They mashed up the two slides and came out with this notion that a study of Waterfall projects in '95 had shown that 70% of that $35.7 billion had been spent on failed projects.

JESSICA:

What a great meme! It's even a mash-up.

LAURENT:

Yeah.

[Laughter]

JESSICA:

The memes with the cat pictures give a more accurate impression of their real authority.

LAURENT:

Yeah. You have to remember that this was back in the 2000s, way before the internet gave us this wonderful set of tools to spread memes. [Laughs]

AVDI:

That's one example of what people call the software crisis, right?

LAURENT:

Well, that turns out to be another one. The notion of a crisis is again the same deal, although the editorial hand is slightly different in this case. The original conference that launched the software engineering movement was held in 1968. They convened a group of people to talk about various problems in software, but pretty much nobody in that group was intent on [inaudible] any particular set of pathologies a software crisis in those exact terms.

They spoke about a crisis more vaguely once or twice in the conference proceedings. But it turned out that the person who edited the proceedings, and then later made sure that they were popularized among software engineering academia, made a point of saying that the conference was convened in response to a software crisis, and that software engineering was the response of the community to that software crisis.

Then a couple of decades on, anyone who knew the history of software engineering was repeating the claim that the NATO Conference in 1968 was mounted as a response to the looming software crisis. People like a good story. I like a good story too, and I'm not throwing stones here. But after a while you have to think, "Have we really been living in a crisis for over 40 years now? Is that even possible?"

JESSICA:

Oh, oh, oh! We were talking earlier about how these claims get cited in a whole bunch of places, and there's a logical fallacy, I'm sorry, a cognitive bias, yes, a cognitive bias that's discussed in Thinking, Fast and Slow, about how if we hear news of something multiple times, it gets stored in our heads as multiple incidents. Like if you see on the news eight different times within a week about a particular kidnapping, you'll get it into your head, not consciously but subconsciously, that eight kidnappings occurred. So you drastically overestimate the number of kidnappings that are happening. We're really [inaudible] statistics. Yes, it was exactly like…

SARON:

Oh wow! That’s so interesting.

JESSICA:

Yeah, yeah. It works with this too, because if you hear from six different papers that 70% of Waterfall projects fail, but really they're all citing the same one, which may or may not be accurate, then you feel like there were six sources.

LAURENT:

Yep, although I have to interject a memo to the audience here, which is: go and fact-check what was just said.

[Laughter]

JESSICA:

Yes, read Thinking, Fast and Slow.

LAURENT:

Right. So trace the source. Go look for that particular study, even if the exact details are not quite right. There's a whole bunch of literature which shows, and Thinking, Fast and Slow is a great introduction to the whole topic, that we are subject to what I call bugs in the brain. That's my pet name for cognitive biases; I think 'cognitive bias' sounds more academic.

It's nice to think of them as bugs because then we can invoke a notion that's well known to programmers anyway.

[Laughs]

The relativity of bugs and features, right? Some of the things that we think are bugs come to be experienced by users as features, and vice versa. In this case, it's the way the brain evolved over time. We were equipped with things which worked well in one specific context, which was basically the savannah, but which turn out not to be so helpful in the modern world. Evolution hasn't caught up with the modern world yet. So we have this disconnect between how we know we ought to think and how we actually think. We tend to assume that we think the way we ought to think, if that makes any sense, but in fact we think in more mistaken ways most of the time.

JESSICA:

Those bugs in the brain, those are really expensive to fix.

LAURENT:

[Laughing]

Right, because we don't really have access to the source code. [Laughter]

AVDI:

Yeah, and sometimes I feel like programmers are particularly prone to estimating that they have determined something rationally, that they've determined something empirically, when in fact they haven't, and to not even seeing that it was a biased process.

LAURENT:

Well, I think you just said the word estimate?

AVDI:

Yeah.

LAURENT:

Estimation is one of those hot-button topics. I've stayed on the sidelines of the whole no-estimates debate, I guess mostly because few of the people participating in it are actually trying to pull the 'studies show' or 'research shows' card. As long as that's the case, I don't intervene. But there is actually some very interesting research about, not estimation specifically, but the topic of calibration.

Calibration is about what happens when you say, "I am 50% sure that something is going to be the case." For instance, everybody has been paying attention to the predictions about an event by a certain phone manufacturer that's going to take place tomorrow, and maybe you're convinced by what's been said about the name of the product or the features of the product. You say, "Well, I'm 90% sure that they're not going to announce a new iPad tomorrow." Oops, I said the name. A good question to ask is what happens when you come up with that sort of judgment. If I say I'm 90% sure, then 9 times out of 10 when I make a claim at that level of confidence, I should be correct. I should be incorrect one time out of 10.

JESSICA:

This is in How to Measure Anything, right?

LAURENT:

Yes.

JESSICA:

Yes! I was going to pick that today.

LAURENT:

Among others.

JESSICA:

I’ll definitely pick it today.

LAURENT:

Cool! Another book which I really liked on that subject is the one by Philip Tetlock called Expert Political Judgment. It's nothing to do with software. He studied experts in the field of international policy. He found that the same finding that applies to laypeople, which is that we are extremely poorly calibrated, also applies to so-called experts, people who are actually paid to have opinions. So no one is immune.

The effect is that when you say you have 90% confidence that something is true or is going to happen, it turns out that the actual number of times out of 10 that you're correct is typically closer to 6 if you're good. For most people it's closer to 4, for an expressed level of confidence of 90%. To let people investigate that, I had a workshop at one time called The Art of Being Wrong. I [inaudible] with that workshop until I grew bored with it, basically. You give people a list of 10 questions and you ask them how confident they are that their answer to each question is correct. They're sort of trivia-style questions, so you can actually check the answers afterwards.

There's also Steve McConnell's book Software Estimation: Demystifying the Black Art, which includes that kind of exercise. I think it's a very good starter to have that kind of conversation. It's a very good way of revealing a local truth, to use that term again, to a group of people. You take a group, and 10 people is more than enough to have some confidence in the results, and you say: we're going to run a small experiment today. We're not going to read about a study; we're going to perform an experiment. We're going to put you face to face with the truth of what it means when you say, "I'm 90% confident" of something. Then what usually happens is you show them that the actual degree of accuracy is closer to 40%. That makes you feel, on a more emotional level, what's going on and what a cognitive bias feels like.
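
[For the curious, a minimal sketch in Ruby of how such a calibration exercise might be scored; the data, names, and output below are invented for illustration and are not taken from the workshop:

    # Each entry pairs a stated confidence with whether the answer was correct.
    answers = [
      [0.9, true],  [0.9, false], [0.9, false], [0.9, true],  [0.9, false],
      [0.9, true],  [0.9, false], [0.9, false], [0.9, true],  [0.9, false]
    ]

    # Group answers by the confidence the person claimed, then compare the
    # claimed confidence with the fraction they actually got right.
    answers.group_by { |confidence, _| confidence }.each do |confidence, group|
      hit_rate = group.count { |_, correct| correct }.fdiv(group.size)
      puts format("claimed %.0f%% confidence, actually correct %.0f%% of the time",
                  confidence * 100, hit_rate * 100)
    end
    # => claimed 90% confidence, actually correct 40% of the time

A well-calibrated person's two numbers match; in the workshops Laurent describes, the "90%" row typically lands closer to 40%.]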

SARON:

So, that point that Avdi brought up a little while ago, about the tendency for us to think that our conclusions are very logical and fact-based: do you think that's something specific to the tech industry? Is there something about being a programmer that makes that more likely to happen, or is it just a human thing?

LAURENT:

If you go by Tetlock's book, and there's no reason not to go by it, it affects all domains of expertise. There is one situation where you can expect people's assessments to be very accurate: when you are close to the very core of their domain of expertise. For most people that is a very narrow thing. But as soon as we stray even a little bit outward from that, the calibration and accuracy of people starts to fall off dramatically, whatever the domain is.

SARON:

So it's really about being an expert, or thinking that you're an expert. That's where that comes in a bit.

LAURENT:

Yes, I guess there's a trap there, because you think you know more than you actually do. That leads you to being way overconfident on a bunch of things that are related to your actual core domain of expertise, but where you're actually not that good.

AVDI:

I’m curious if you have some starting points for people that want to get better at identifying these leprechauns and hunting them down.

LAURENT:

Okay, starting points in what sense? Sorry, I just need a bit of clarification.

AVDI:

What kind of skills do you need? Are there research skills that you employ to hunt this stuff down? How do you get better at this?

LAURENT:

Well, the way I used to express it was three things: skepticism, curiosity and tenacity. Skepticism goes without saying. It's the skill of taking a step back whenever you come across a claim and thinking: first, do I actually understand what this thing is about? Are we talking about average developers versus best developers? Do I even notice that half of the people are saying one thing and the other half are saying another? Second, is the claim actually true? What is the evidence? And third, assuming it were true, what's the deal? What does that mean for me? What are the consequences? Those are the three subtopics of being skeptical. But you have to be skeptical in a constructive way; Tetlock calls that being actively open-minded.

For short, I would say be curious. Be actually open to investigating and learning new things.

Obviously there's a great tool, which is Google or your favorite search engine, whatever that is, plus more specialized search tools like your library index, Google Books, Google Scholar. I'm trying not to sound like an ad for Google here, but they really have a great set of tools which make searching for information very easy. Everyone can actually do that. So curiosity is essential. You have to want to get to the end of things.

That's one of the things I found really difficult when I was writing one chapter or another of Leprechauns: sometimes my search would seem to terminate at a paywall, or at an article that I couldn't find anywhere on the web. I could have given up there and said, that's it, I'm not going to learn more about that. The idea is to try to go a little bit beyond that. So if you have the email of the author, you try to contact the author. If you know someone who might know someone at the university where the author works, you can ask them to get a copy of the article for you.

That's an actual example of something that happened to me between December and a week ago. In the summer I did a workshop where we were dissecting this horrible study by NIST, the National Institute of Standards and Technology, on the cost of bugs. We were going through all the references and the citations, trying to find the primary sources. There was this one thing, another universal claim, about how effort is allocated in software projects: like 10% or 20% on the requirements, and so much on actual coding and so much on testing.

We couldn't find the actual primary source. It was not findable anywhere on the web. The workshop was happening in Sweden, and the authors were at another university in Sweden. So we set group homework for those who were from Sweden to try to find the original paper. We'd given up hope, but then a week ago someone emailed me a PDF scan of the original study and said, "I found it." I said, "Great." I was really proud of having actually managed to teach someone else besides me not to give up on that kind of search and to go looking for the primary source.

AVDI:

I think that's a great point. I think it's really easy to get the idea that everything there is to know about computer science and about programming is available on the web. I certainly had that impression in the past, but as I've spent more time doing research, I've discovered that there is a tremendous amount of work that was done in our field, even as recently as the '80s and '90s, that is not indexed on the web. It's not readily available. There are works in books that are out of print and have never been scanned. There are papers that, if you're lucky, are behind the paywall of the ACM. If you're unlucky, they're behind a 404. If you're really unlucky, they were just never scanned in. It's surprising how much work, even in our field, has not been indexed and is not readily available on the web.

JESSICA:

There's another question in all this: tenacity. You did this and you published it; you have a book that came out of all this research. Clearly you get a lot of satisfaction out of creating a narrative of how a meme got started. What about people who are just regular software developers? When we hit something that we're able to identify as an assumption that we have, or that our team has, or that our manager has, how do we decide how much effort to put into the particular possible leprechauns that we find? Is there a way to come up with a value for that information?

LAURENT:

So for one thing, I think of myself as just a regular software developer.

JESSICA:

Awesome.

LAURENT:

I'm not an academic, and I think that shows in the tone of the book. I don't write like an academic. I don't try to write like an academic. I'm just trying to write as I would talk to someone sitting across the table from me, saying there is this interesting thing that I've found. As I've mentioned, I think of these as bugs in the brain, so I approach them the same way I would approach finding a tricky bug in a program that I wrote or a program that I was in charge of, where someone on my team calls me in and says, "Hey, what do you think of this behavior?" I admit that sometimes I'll go way beyond the call of duty, way beyond what would be considered reasonable. I'm talking about software here: when I find a bug, I want to understand it. I want to get to the bottom of things and figure out what could be causing it to behave in that strange way.

Some people might say, well, it's a cosmetic thing, it's half the fun [inaudible] offset in something that appears on screen. But it nags at me. So I try to go to the very bottom of things, and most of the time I will learn something, maybe more than one thing. I will learn a lot about the API behind that little thing that's displayed on the screen, or a lot about the architecture of the program. So there is a lot of serendipitous learning that happens as a result of investigating stuff, which for me tends to pay off. Sometimes you just make a note of it and say, well, it doesn't have to be perfect. You do a risk assessment and you say: no user of this software is ever going to suffer from that; it doesn't appear in a place where it puts our image as software craftspeople at risk. So sometimes you just let it go.

What I'm saying, I guess, is that it's very personal. But if you show me someone who is always unconcerned, who just says, "Oh yeah, that's a bug," and moves on to the next thing, I'm not sure I want to work with that person as a software developer.

JESSICA:

I agree.

LAURENT:

Does that answer the question?

JESSICA:

Yes. What I took away from that was: if you dig, if you investigate, if you really figure out why something is happening, then you're going to learn all kinds of things that you didn't expect to learn. You won't just answer that question. You'll be developing yourself and your whole body of knowledge, and there's all kinds of, like you said, serendipity that comes out of that.

LAURENT:

Yup.

JESSICA:

That segues into what I wanted to ask you about. A long time ago I picked, for the picks, an interview with you that was posted online. My favorite thing out of it was how you talked about most of our job being learning.

LAURENT:

Yes! I guess the reason I get worked up about these things is that I see the craft of software development not so much as a process of turning out lines of code; the lines of code happen in the process of figuring out what our users need. What makes them tick? What motivations do they have? And, incidental to those things, figuring out the reality of the world that we're trying to make an impact on. That's the real thing.

JESSICA:

I think it’s beautiful that you express software development as it’s really a process of figuring out the world. It’s a process of digging into what seems simple on the surface and finding every nook and cranny that we need to account for in all the situations that our software might need to deal with.

LAURENT:

That’s very nicely put, as well.

JESSICA:

Once you've done that, encoding it into whatever programming language is the easy part.

LAURENT:

Maybe easy is too strong a word there.

JESSICA:

Yeah.

LAURENT:

There is a whole lot… there is learning about the world, and there is also a lot of learning about ourselves. One of the things that we struggle with when we program, when we try to get these insights about the world out of our heads and into something which has behavior, is that we run into all sorts of limitations of our human brains. Again, the human brain is a kludge. It wouldn't pass an architectural review, basically, or a code review for that matter.

[Laughter]

LAURENT:

So in the process of figuring out what goes well and what goes wrong when we capture insight about the world, we also learn a lot of useful stuff about ourselves. So, there are always these two levels at least of learning going on.

JESSICA:

That's beautiful. I'm reading right now Avdi's favorite book, Software Development and Reality Construction, about how the artifacts we create are a means of learning. When we draw diagrams, we're interacting with them. It's part of our learning process, much more than it is about transferring that knowledge to others. Maybe the code is the same way. As we code, we're learning about the world; the coding is part of the learning about the world. It's an external memory that our brain uses.

LAURENT:

Very much so. And I also think that those things we're talking about explain why certain things literally drive me up the wall. Well, not literally.

[Laughs]

LAURENT:

It figuratively drives me up the wall when I see people acting as if, for instance, modeling and diagramming, and then cranking out software at the end of that process of documenting and diagramming and modeling, were just a mechanical, linear, entirely predictable thing. To me that's like: no, that's just wrong. For some reason, those leprechauns, those silly, if you take the time to think about them, "facts", like the one that about 56% of the bugs in projects arise in the requirements phase, really seem to comfort precisely those people who rely on that illusion that the whole thing is mechanical: just turn the crank and software will come out the other end.

JESSICA:

It's not black and white; we can find truths beyond local truths, for various degrees of local. I wanted to bring one thing back from the very beginning of this episode: the quote that came up about the definition of insanity being doing something over and over again and expecting different results. You said that's a misquote, or is it just wrong?

LAURENT:

The story behind it is… I'm sorry, because you cannot help becoming this encyclopedia of useless facts after a while. I like the sentiment of the quote. There are contexts where the quote has meaning, and you can use it to convince people that maybe they should look at things in a different way.

Again, as I said in the beginning, to me doing the same thing over and over and expecting different results is the definition of experimentation. So it's also wrong. But the true story seems to be that the misquote appeared in an Alcoholics Anonymous document, a brochure from the '80s. It was not attributed in the document. Then a few years later it spread; it's a pre-internet meme. It got attributed to Einstein. That's one of the features the meme gained as it morphed and spread, and it made it more palatable and more acceptable.

It's lovely to even know that this story exists, that we even know there is this way that ideas arise. I don't know who came up with that phrase first, but probably not Einstein. The fact of attributing it to Einstein made it more popular, and it became the viral misquote that we know today.

JESSICA:

But one of the reasons it got so popular is that, as you point out, in a lot of situations it's useful.

LAURENT:

True.

JESSICA:

The myth of the 10x developer is useful to certain people who don't want to train their devs. They just want to hire magic.

LAURENT:

True.

JESSICA:

So you have to ask, follow those factoids: who is this useful to?

LAURENT:

[Laughs].

Right. That's a great question to ask in the course of investigating one of those. A more cynical way of putting it would be: whose agenda does it serve?

AVDI:

[Inaudible].

LAURENT:

Yeah, exactly. When you ask that question, you're already halfway to the truth.

AVDI:

There are less selfish interests at work too, sometimes. One of the things that this discussion has reminded me of is one of my recent discoveries of a sort of citogenesis factoid. In programming, one of the popular things for hackers to do is to play around with weird keyboards and weird keyboard layouts. For years and years, I was exposed to the meme that Dvorak is more efficient than QWERTY, and that QWERTY was deliberately made to be inefficient to keep the typewriter heads from jamming.

LAURENT:

Wait what? That’s not true?

JESSICA:

No.

AVDI:

I think there is less truth to it. This is something to go and investigate; it's been a little while since I investigated it, and I'm not sure about the QWERTY part. I understand that there were a number of competing keyboards around the time that QWERTY came about, and its ascension… it really sounds like it was one of those things where one company marketed theirs a little better and it became the standard.

JESSICA:

Historical accident.

AVDI:

Yeah, exactly. But what was really interesting to me is that all of the Dvorak papers out there, and almost all the documentation on Dvorak being better, traced back, back and back, through site after site after site, to one really small, really flawed study, like in the '50s or '60s; I can't remember when Dvorak was first created. It was a study conducted by Dvorak and basically commissioned to sell keyboards, or something along those lines. But it was just a really flawed study, with a very small sample size and various other things wrong with it. Very little research has been done apart from that.

But the interesting thing to me about that is that it may be that it's not more efficient, or not as much more efficient as claimed. But it may also be that people who learn to type Dvorak are more comfortable, and that's a little bit harder to measure. I've talked on the show before about how I acquired this Kinesis keyboard, which I love; I never realized that typing could be that comfortable before. I have no idea if I am more efficient on it or not, and I don't care, because for me it's about the comfort of it. Sometimes I think we glom onto these things. We want an empirical basis for the things that honestly just make us comfortable.

JESSICA:

[Laughs].

That’s it. That’s it

LAURENT:

Confirmation bias again. I've probably done the same thing. There are lots of domains which are completely rife with that sort of thing, like nutrition, exercise, sleep.

JESSICA:

Agile.

LAURENT:

There you go, that too. Probably one way to get less worked up about that sort of thing is to say there is room in here for a wide range of variation, so why not allow it? People can choose to do whatever their thing is and just be content. Not everybody is marching to the same drummer. So again, there's this theme of universal truth and doctrine and prescription versus local custom. I'm not even talking about local truth here; it's "this is the way we roll and it's fine with us; you can choose to work here or not", basically. I guess to some people that's threatening, because it only works if we are open to the possibility of leaving it all up to autonomy, to people making their own choices as adults. I guess some people find that notion a bit unsettling.

JESSICA:

Does that relate back to these state startups within the government?

LAURENT:

Huh! There's quite a bit of that going on, yes. But I guess it's not limited to government. Any organization that's grown big enough has them; one name for what they're called in English is Program Management Offices. It always seems to end up being someone's job to make sure that everyone uses the same ALM tool, blah, blah, blah.

JESSICA:

Enterprise architecture.

LAURENT:

Yeah, that sort of stuff. I can't even begin to imagine the horrors that would befall us if everybody were left to their own devices and their own choice of an IDE. Someone must step in and make sure that everyone does the right thing. So the only thing that saves us here is that it takes about three years to do a survey of all the IDE tools on the market and decide which is the best one, by which time, obviously, the survey has become obsolete and they can do it all over again.

I'm sketching a caricature here, but I don't think it's very far from the truth. One alternative is to just try to treat people and teams as adults. Give them free rein. Trust them to do the right thing. Give them a small budget so that failure is not a big deal.

That's another thing which comes along with experimentation. If you actually do experiments, a lot of the time you're doomed to fail. This is something that scientists know, or should know: most of the time, your experiment doesn't pan out. That's okay. You set up the next one and you move on. So to do that, you have to say: no, you're not allowed to spend a million euros in public money to build a behemoth of a project and lord it over hordes of programming minions.

[Laughs]

You have our blessing to do your thing on a small scope, with a small team, with a small budget. If it works, we will amplify it. That's the deal with the so-called state startups. But I think France is not really leading the charge on that. We're imitating, I guess you might say, stuff that's already going on in the UK and in the States. I'm taking that as a good sign: even government is waking up to the fact that short cycles, experimentation, and treating people as adults are actually a thing.

JESSICA:

Both at an individual and institutional level, we need that constant learning.

LAURENT:

Yes, exactly.

JESSICA:

Awesome. Avdi, do you have anything else?

AVDI:

No, I think that pretty much covers it for me.

JESSICA:

Laurent, was there anything else you wanted to say or shall we get to picks?

LAURENT:

Oh! Let’s get to picks!

JESSICA:

Okay!

Before we get to picks, I want to take some time to thank our silver sponsors.

[This episode is sponsored by Code School. Code School is an online learning destination for existing and aspiring developers that teaches through entertaining content. They provide immersive video lessons with in-browser challenges, which means that each course has a unique theme and storyline and feels much more like a game. Whether you've been programming for a long time or have only just begun, Code School has something for everyone. You can master Ruby on Rails or JavaScript as well as Git, HTML, CSS, and iOS. And more than a million people around the world use Code School to improve their development skills by learning by doing. You can find more information at CodeSchool.com/RubyRogues.]

JESSICA:

Avdi, what are your picks?

AVDI:

Well, first of all, I think I'm just going to pick the book in question, The Leprechauns of Software Engineering.

LAURENT:

That’s very kind of you.

AVDI:

I think the way this episode happened was that a while back, we had episode 184, the one about what we actually know about software development, with Greg Wilson and Andreas Stefik. After we released that episode, somebody reminded me of the existence of the Leprechauns book. It had been on my to-read list for a long time and it had slipped my mind. So I went and said, okay, I'm just going to sit down and read this.

It's a relatively short read. It's a very easy read, a very, very pleasant read. You have a great writing style, and it just blew me away. I was floored that some of the things that I had always taken on authority really have very little basis. So I insisted that all the other Rogues go and read it, and that's how this episode happened. So yeah, I really recommend this book. I think every software developer should read it.

JESSICA:

Yeah, he went a little farther than "all of you should read this." He said, "I will buy it for you if you will read it."

AVDI:

Yes, so go get it. It's great. Apart from that, I do have another technical pick. I will pick the Crystal programming language. I've been fiddling with that lately, and I've been noticing an uptick in interest. It is an alpha-level language. How to characterize it? It's a statically typed, compiled programming language that is at around the same level of abstraction as Go. I think you would probably, when it's mature, use it for the same sorts of things that you would use Go for, whether that's distributing executables to people, or network middleware, or things that need to run fast, or things that need to interface closely with C code, because it does that really well.

But it is almost syntax-compatible with Ruby. It does type inference, so most of the time it figures out the types and you don't really have to give it much extra information beyond what would already be in a Ruby program. So it basically gives you all the static compile-time checks of a compiled, statically typed programming language with a syntax nearly compatible with Ruby. You can take Ruby code, munge it a little bit, and you'll have a valid Crystal program. So I'm really pretty excited about this. Like I said, it's still alpha. It's still under heavy development. Its APIs are still being worked out and libraries are being filled in. But it seems like an interesting candidate for things like that one service that we would write in Go, except let's write it in something that's practically Ruby. It might even be an interesting way, at some point, of doing C extensions, except not in C; we'd do them in Crystal. So yes, check it out.
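
[To make that concrete, here is a tiny illustrative snippet that happens to be both valid Ruby and valid Crystal; the method and strings are made up for the example:

    # Runs as-is under Ruby; the Crystal compiler also accepts it unchanged,
    # inferring from the call site that greet takes and returns a String.
    def greet(name)
      "Hello, #{name}!"
    end

    puts greet("Rogues")

Richer Crystal programs diverge from Ruby in places, such as explicit type annotations on instance variables, which is the "munge it a little bit" Avdi mentions.]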

As far as non-code picks, I'm just going to pick the Zojirushi line of appliances. As most listeners know, we've got a big family. We realized we needed to go industrial-strength with some of our cooking, so we got a Zojirushi rice machine, which is great for those big batches of rice for big, cheap meals. That thing worked so darn well that when we went looking for bread machines, so that we could have bread baking basically every day, we went ahead and got the Zojirushi model.
We’ve been really impressed with their performance. I think that about wraps it up for my picks.

JESSICA:

Okay great. Laurent.

LAURENT:

Well, Avdi, I will see your Crystal and raise you an Elm. That's the Elm programming language.

[Clapping].

AVDI:

Yes!

LAURENT:

I don't even remember how I came across it, but I've been having fun learning Elm for six months now. I'm very happy with it. It also has that same general type inference that seems to be designed to put an end to the holy wars between the static typing and dynamic typing folks. So that's my pick as far as programming languages go. I'd love to go to an Elm conference.

Moving on to something completely different: a site called Smarter Every Day, where the must-watch thing is called The Backwards Brain Bicycle. If you haven't seen that video, rush and see it now. It's along the lines of what I was saying earlier about the experiment that gets people to feel what it's like to have a cognitive bias, except with a bike. So I'm not going to spoil it any further.

Just for the heck of it, I wanted to have one more pick. I'm allowed whatever I want, right?

SARON:

Right, right, right.

LAURENT:

So I picked one of my favorite science fiction books that I’ve read lately which is called Station Eleven by Emily St. John Mandel, I think her name is. It’s a lovely little gem of a book.

JESSICA:

Awesome! Thanks! Can’t wait to read it and watch the bike video. Okay.

For my picks, I have to pick How to Measure Anything, which was already on the list of things picked today. It's a great book. It's technically about business, but specifically the business of IT. Just pick it up and look at it. It teaches you about calibration. It teaches you how to give estimates as intervals, how to give accurate estimates and intervals, and also how to measure the value of information, so you know how much work to put into learning the things that you don't know and verifying your assumptions. That'll do for picks for me.

Laurent, thank you so much for coming.

LAURENT:

Thanks for having me.

JESSICA:

Saron had to drop off but she says thank you again.

AVDI:

Yes, thanks so much.

JESSICA:

Yeah.

AVDI:

It was great to finally get to talk to you.

LAURENT:

I had a great time. I hope the whole thing will not come across as too rambly and distorted.

JESSICA:

That’s what podcasts are for.

LAURENT:

Yeah, cool!

JESSICA:

Cool! Okay! So see you everybody next week.

[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]

[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]

[Would you like to join a conversation with the Rogues and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at RubyRogues.com/Parley.]