JOSH:
Okay. I'm going to pour myself a mug of coffee and I’ll be back in two minutes.
KATRINA:
Cool.
CHUCK:
Geez, he’s just like family, always making me late.
ELISABETH:
[Laughs]
[Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.]
[This podcast is sponsored by New Relic. To track and optimize your application performance, go to RubyRogues.com/NewRelic.]
[This show is sponsored by Heroku Postgres. They’re the largest provider of Postgres databases in the world and provide the ability for you to fork and follow your database, just like your code. There's easy sharing of your data through data clips. And to date, they have never lost a byte of data. So, go and sign up at Postgres.Heroku.com.]
[This episode is sponsored by JetBrains, makers of RubyMine. If you like having an IDE that provides great inline debugging tools, built-in version control, and intelligent code insight and refactorings, check out RubyMine by going to JetBrains.com/Ruby.]
CHUCK:
Hey everybody, and welcome to Episode 111 of the Ruby Rogues podcast. This week on our panel, we have Josh Susser.
JOSH:
Good morning everyone from sunny San Francisco.
CHUCK:
Katrina Owen.
KATRINA:
Good morning.
CHUCK:
Avdi Grimm.
AVDI:
Good morning from Pennsylvania.
CHUCK:
I'm Charles Max Wood from DevChat.TV. And this week, we have a special guest and that’s Elisabeth Hendrickson.
ELISABETH:
Hi.
CHUCK:
You want to introduce yourself for the folks who don't know who you are?
ELISABETH:
Sure. I'm Elisabeth Hendrickson, otherwise known as @TestObsessed on Twitter. I am in fact test obsessed, in the belief that empirical evidence trumps speculation every single time. However, I'm not actually the tester that everybody thinks I am. So anyway, I currently work at Pivotal on Cloud Foundry.
JOSH:
Good morning and welcome to the show. Elisabeth and I kind of go way back. We went to school at Caltech at roughly the same time.
ELISABETH:
Yeah.
JOSH:
Then we encountered each other at Pivotal Labs a few years ago. I keep running into her. Then she wrote this book and I'm like, “Hey, this is an interesting book. We should do it on the show.” So, that’s why she’s here.
ELISABETH:
Thank you so much. I'm really glad. By the way, Josh, I don't know if I told you this and I don't know if you'll remember this tradition. But there was a tradition that people who went to Caltech who wrote books would sneak a certain three-letter acronym somewhere into their book, in a false entry in the index. And you?
JOSH:
I completely failed, when I wrote a book, to do that.
ELISABETH:
Okay. But you remember the tradition?
JOSH:
Oh, yeah.
ELISABETH:
I totally snuck mine in.
JOSH:
Oh, I got to go back and look for it now.
[Laughter]
ELISABETH:
I could tell you where it is.
JOSH:
Do you want to tell the world where that is?
ELISABETH:
Well, I think most of the world isn't going to have any idea what the tradition is.
JOSH:
[Chuckles] Okay. Well, let's just get it out of the way and move on. [Chuckles] Where is it?
ELISABETH:
The tradition is to sneak in a three-letter acronym DEI. It stands for Dabney Eats It. Dabney is one of the houses on the Caltech campus. And I managed to sneak it in on page 112 where I show an example of a five-tile letter game and three of the tiles have the letters.
JOSH:
[Expression] There it is. If it was a snake, it would have bit me.
[Laughter]
ELISABETH:
I was very proud of myself.
JOSH:
That’s nice. My favorite bit of DEI trivia is that Harrison Schmitt left the letters DEI spelled in the lunar dust using his footprints.
ELISABETH:
No way! I didn’t know that piece of trivia. Sweet.
JOSH:
Yeah.
CHUCK:
[Chuckles] Anyway…
JOSH:
Anyway, so about the book.
CHUCK:
Yeah, you wrote this book! [Laughter]
JOSH:
It doesn’t quite take us to the moon although it mentions it once or twice. [Laughter]
ELISABETH:
I thought that was Mars. But anyway…
CHUCK:
Yeah. I'm really curious what everybody else’s take was on it. I was a QA Engineer for about six months way back in the day. So, a lot of the stuff that you're talking about is stuff that I had to do for my job.
AVDI:
I want to lay a little foundation if we could.
KATRINA:
Before we do that, did we actually say what the book is called?
[Laughter]
KATRINA:
I don't think we did. So, the book we're talking about is 'Explore It!' It's published by the Pragmatic Bookshelf. And the author is Elisabeth Hendrickson.
ELISABETH:
Thank you, Katrina. [Laughter]
CHUCK:
Oh, was that why you're on the show?
ELISABETH:
Yeah. Actually, either that or because Josh is a great person.
CHUCK:
A little of both maybe.
ELISABETH:
Okay.
KATRINA:
Sorry, Avdi. I cut you off.
AVDI:
No problem. That was probably good to address. I thought we could lay a little bit of foundation here because a lot of this book addresses a role which, in my observation, a lot of programmers or some programmers these days might not actually be that familiar with. And that is the role of a software tester, which is something that some people do full time. Could you fill us in a little bit on what that is?
ELISABETH:
Sure. [Crosstalk]
AVDI:
I realize you're not just addressing testers. I felt like the book mostly did and maybe that was a misperception on my part. I realize you're addressing a lot more than that. I've run into this more and more recently where a lot of people that have kind of come up in software development more recently, it's just not a role that, you know -- there isn't that context that that was a thing or is a thing.
ELISABETH:
Oh, my goodness. It's a thing. There's an entire community; in fact, multiple communities and [inaudible] fighting between the communities within the software testing community. Software tester is, for many organizations, a specialist role, which is why I started reacting when you said this book is all about a role because actually, there's a whole discussion we could have there that we're going to skip for right now. [Laughs] Because basically, it means I failed if people think this is really only about the tester role as opposed to the testing activities.
AVDI:
Let me put it this way. I felt like the book assumed familiarity with the fact that that was a thing. And what I've observed is a lot of -- especially if you got your start in a small startup or something like that, testing for you might just be synonymous with like writing some unit tests and that's it.
ELISABETH:
[Sighs] And that makes me so sad, which is why I wrote the book.
AVDI:
That’s what I thought it would be good, to get a little bit of that historical background in.
ELISABETH:
Okay. If we want to go back to way, way, deep dark history back in 1978, this guy Glenford Myers wrote a book called 'The Art of Software Testing'. And there have been a wide variety of books published specifically about software testing since then. But 1978 really starts to peg the point when the industry started thinking, "Maybe we should have this specialist thing." And then, during the -- I joined the industry in the late 80's. And in the early 90's, it was already a thing to say that you cannot have the testers and the developers on the same team because it's like the fox watching the hen house. So, you have to have your testers be part of your quality assurance group, and quality assurance is way more than just testing. And they should report all the way up the food chain to a separate executive just to make sure that we keep things completely separate. And thank goodness, starting more around the late 90's, there was a general realization that there was something fundamentally broken with that model. And the whole Agile community and the Agile movement has helped tremendously to bring things back together again. I consider the separation of QA from software development to be one of the worst wrong turns that our industry has made because it led to incredibly long feedback cycles. And ironically, way worse quality than we would otherwise have had… Anyway, it was just that. [Laughs]
JOSH:
Wait. Elisabeth, are you saying that an adversarial relationship with people trying to construct the same product is not helpful?
ELISABETH:
Shockingly, yes I am. [Laughter]
JOSH:
I had a similar kind of reaction to the one Avdi did, reading the book and saying, "Hey, this is a really interesting discussion of this role that gets underplayed a lot in the Ruby community." And I wanted to ask, you talk about checking versus exploring as different ways of testing software. And Rails has this amazing amount of support for doing TDD and automated testing. We call it automated testing. It seems like you would call it automated checking or regression testing or -- I don't want to put words in your mouth, but something like that. [Laughter]
JOSH:
And there's just like no real infrastructure or explicit support for doing exploratory testing within a Rails project. I think from a lot of developers’ point of view, the unit testing that we do in Rails as a matter of course is obvious but the other part isn't. So, I think it’s just off our radar mostly.
ELISABETH:
Yeah. I find that with a lot of developers who practice test-driven development, which by the way is totally awesome. And I practice test-driven development as a developer but it only gets to one side of the equation. It creates a mechanism by which I can check that the things that I thought were true about my software are in fact still true. But if there were stuff that I completely forgot, those pesky errors of omission. Like I didn’t even think about the fact that I should be doing error checking on a particular field that I really need to have a given format because somewhere else, I've got a method that’s going to do some processing on that and assumed that that field wasn’t empty or assumed that it had a particular format like a phone number or something. If I totally forget to write that code, then obviously, I'm not going to have any tests that check the behavior of that non-existent code. And that’s where exploration comes in.
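[For illustration, a minimal Ruby sketch of the kind of omission being described; the Contact class, phone field, and area_code method are invented, not from the book. The processing method quietly assumes a well-formed phone number, and because the validation was never written, no test exists to catch the bad case.]

    # Hypothetical example of an error of omission.
    class Contact
      attr_accessor :phone   # expected to look like "415-555-0100"

      # Written assuming phone is present and dash-separated;
      # the validation that would guarantee that was never added.
      def area_code
        phone.split('-').first
      end
    end

    Contact.new.area_code
    # => NoMethodError: undefined method `split' for nil
    # TDD never flags this, because the forgotten validation has no tests at all.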
AVDI:
Can you just give us like a summary of what you mean by exploratory testing?
ELISABETH:
Sure. It is a practice of simultaneously designing little tests and running those little experiments to learn how the software behaves and using your observations about how the software behaves to design your next little tests all to steer towards the risks in your software. So, rather than confirming that it does what you intended it to do, discover ways in which it violates expectations or does really bad things or [inaudible].
AVDI:
I want to tell you about my initial reaction to your book. And it’s going to sound mean.
ELISABETH:
[Laughs]
AVDI:
It is not intended that way.
ELISABETH:
Thank you for telegraphing ahead.
AVDI:
[Laughs] There's actually a point here. My sort of visceral reaction, as I got into it, was this sounds really boring. And that’s kind of the problem, isn't it? That I, as a developer, looking at this idea of exploring for the risk cases and edge cases and the things that you didn’t think about, I might be alone in this but I suspect it strikes a lot of us as tedious. I guess, just kind of a leading thing because that’s kind of a problem, isn't it? I mean, it’s not good that we look at that part of software development with such a jaded eye.
ELISABETH:
So, my first question then is that was your first visceral reaction. But having finished the book now, do you still think it’s boring?
AVDI:
There's a part of me that does. And I'm not saying it’s a boring book.
ELISABETH:
Right.
AVDI:
I'm saying that it’s the idea -- as a developer, I want to build features. And when I have one that I think is done, I want to get on to the next one. I hate the idea of going in and writing notes about, “Okay, I should check this with this value, this value, this value, and this value.” And, “I should try yanking the plug out while I'm importing a dataset,” and all that stuff. Maybe, it’s just me. It feels very tedious to me.
ELISABETH:
Well, you're not alone in that and that is one of the reasons I think why our industry -- there are two reasons I think why our industry made that horrible wrong turn in the early 90's. One reason was because there was certainly a quality problem in a lot of organizations and they were trying to address the quality problem by separating responsibility for gauging or for assessing quality. So, there was a good intention behind that. But as somebody who has run quality assurance departments in that separate, siloed org model, I can also tell you that there was a fair amount of the programmers not wanting to do the testing and wanting to find somebody else who would do all of the dirty work they didn't want to do.
AVDI:
Right.
ELISABETH:
You're totally not alone in that. And I might never be able to get you fired up about how fun it can be to explore your software, even your own software, to discover the hidden implications in design decisions that you made or discover side effects of the particular framework that you're building on top of. Personally, I find that kind of discovery really fun because I think it’s fascinating when I can be surprised by something I wrote.
AVDI:
I’ll hand the mic over to you in just a second, Katrina. I'm glad that we have you on the show because I think one thing that a book has a hard time getting across is that kind of excitement and that kind of passion for it. Again, I think this is more of a character flaw. And if anything, I think I'm just sort of trying to stand in for maybe other programmers as well who have that same feeling towards it. As this talk goes on, I hope that you will continue to talk about the pleasure and the joy of testing as you experience it because I think that is probably the key to making this more widespread.
KATRINA:
I think there are several things at play here. One of the things that I've noticed over the past few years is that there are a lot of developers who seem to kind of disdain the testing or tester role as something that requires less skill or less intelligence. I think that one of the things that your book addressed, Elisabeth, is that this is entirely untrue. Like it takes a very, very deep, wide skill set with an incredible amount of nuance. And it's a skill set that's very different from what developers typically do. So, I think that maybe a lot of developers just are unable to see what is required. I've been thinking a lot about this because I don't find it easy to test my own software. And one of the things that I think I'm missing is I don't have this blood-thirstiness that a lot of good testers have. [Laughter]
KATRINA:
I'm so freaking optimistic all the time about how easy it’s going to be, how well it’s going to work. And testers have this amazing ability to sniff out the weaknesses and the interesting and weird and edge cases and the craziness and just all of the things that don’t fit together. I think it just requires certain -- of course, skill set, something that you can develop. But also this, I don’t know what else to call it. Blood-thirstiness seems to cover it pretty well for me. [Laughter]
CHUCK:
Yeah. I think the term I always use is malicious creativity.
JOSH:
I think that blood-thirsty is the best adjective for describing Elisabeth I've ever heard. [Laughter]
KATRINA:
Only in a fluffy bunny kind of blood-thirsty.
[Laughter] [Crosstalk]
JOSH:
[Singing] [Laughter]
CHUCK:
Anyway, I want to jump in here really quickly because Avdi was talking about how tedious it was. I remember when I started doing TDD; I thought it was really tedious. And then, most of it came down to 'I just didn't know how to proceed' because I hadn't developed the skill set that it requires. Having been a tester as well and doing that professionally for six months, in those cases, we weren't writing scripts because our boss didn't want us to waste time automating our test suites.
ELISABETH:
[Laughs] Sorry.
CHUCK:
No, no, no. You can laugh. That’s exactly what he said, “I don’t want you wasting time writing scripts.” Anyway, I mean, it was all exploratory. Everything we did was exploratory. We’d write down the steps to reproduce the bugs that we found. But ultimately, it was something that you have to learn to do. And if you don’t develop those skills, then it’s really hard. And it’s tedious in the same way TDD is because you don’t know how to proceed, you don’t know what the next step is, you don’t know how to handle the bumps or wrinkles in the landscape that you find. “Okay, I found a bug. Now what? Do I file it? Do I keep exploring this path?” Or, “Have I tapped it out and I need to go and essentially create another charter,” which was a concept I really love, by the way, from the book. And so, it’s tough, it’s interesting. It’s a skill set that’s really, I think, important for us to develop not just in the sense of finding problems in our software but just understanding what it does.
AVDI:
Can you tell us a story of sitting down with a developer and having them grok, maybe start out a little skeptical and then having them grok the pleasure of exploratory testing?
ELISABETH:
Sure, I will tell you my favorite story of that, that's also in the book, by the way. I was sitting down with a developer who was working with me on a project actually, at Pivotal Labs. And I have been given permission by the guy who was the client for the project to talk about it. The project was Bring Light and it's an awesome website for donating to charities. And its whole concept is that you're not just donating to a charity. There's a specific project that you're donating to. So, it's a great cause. I was working on that project. I was the worst programmer on the project by far but the best explorer. And we had this really nice mix of diverse skill sets on the project. And so, I was pairing. I had been pairing all morning with one of the programmers on the project and we had built the feature and we checked in and we had like half an hour before lunch or something. We looked at each other and said, "Well, what are we going to do?" Because we didn't really want to start a brand new story right before lunch and then lose all the context over lunch. So, I said, "You know, I've been wondering about some of the side effects of some of the decisions that we've been making." This was back before Rails actually checked any of your inputs for the possibility of code injection. I said, "Why don't we just spend half an hour just doing some exploration?" And he kind of looked at me funny. And I said, "Here, let me show you what I mean." And I brought up the website and started entering values that would represent JavaScript injection attacks, like sticking a value into a profile field for a user on the site of script alert 'hi' and saving that. And then, because this thing had social networking features and you can have friends, new people showed up on the home page of the website, like 'so and so just signed up'. We go to the home page of the website and it's alerting 'hi'. [Laughs] He looks at me kind of funny, my pair, and he said, "But why would anyone ever do that?" And it was so cute, like it didn't even occur to him that this represented a severe security vulnerability. So, we finished the exploration session and then, I was thinking we would write a bunch of automated tests to check the behavior further and to find other examples of that. And the thing that was awesome about pairing with a really good developer on this was he didn't see the test automation from the same perspective that I did. I was just going to go straight through the website and do it the brute-force way. And he said, "No, no, no. Let's not do that. Why don't we put all of this bad actor data in our seed data for the test suite and then run the whole test suite so that if there are vulnerabilities related to the things that we just found, we'll have a way of seeing them wherever they exist in the system." And we did that right after lunch. Within an hour, we had found and fixed probably like 40 or 50 instances of problems with code injection attacks. He was skeptical about the exploration. He was skeptical about the information that the exploration was revealing. And when I explained to him the security vulnerability, he then got right on board and figured out a better way to explore than what we'd been doing.
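[A rough sketch of the seed data idea from the story, with hypothetical model and file names; this is not the Bring Light code. The point is to plant a known-bad value throughout the test data so that any page rendering it unescaped trips an assertion somewhere in the existing suite. Modern Rails escapes view output by default; the story predates that.]

    # Hypothetical test seed data: every record carries the injection payload.
    XSS_PAYLOAD = "<script>alert('hi')</script>"
    User.create!(name: XSS_PAYLOAD, bio: XSS_PAYLOAD)
    Project.create!(title: XSS_PAYLOAD, description: XSS_PAYLOAD)

    # Then any integration test that renders seeded data can assert:
    assert_no_match(/<script>alert/, response.body)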
JOSH:
That is awesome. I love that story, Elisabeth. A couple of weeks ago, we had Will Reed on the show and we were talking about Agile and Extreme Programming. The thing that I said there was that Agile is really a response to not being able to predict the future.
ELISABETH:
Right.
JOSH:
If we had perfect knowledge of the future, we wouldn’t need to go to all of these lengths to always be correcting what we’re doing because we don’t know where we’re going exactly. And I think that the story you related was great. It fits so well with the agile process. At one level you could say, “Well obviously, the requirements for that software were insufficient because they didn’t…
ELISABETH:
[Laughs]
JOSH:
Yeah. It’s like they didn’t address the problem that any site that has user-contributed content has to deal with security and injection issues.
ELISABETH:
Right.
JOSH:
And you just think that the product owner wouldn’t make sure that that requirement was in there.
AVDI:
This is jumping forward a little bit but that was one of my favorite sections of the book, was when you talked about the power and importance of exploratory testing in the requirements gathering and design process.
JOSH:
I'm pretty sure that the part of the book that I will remember most is how to use -- that whole finesse move of, "Oh, let's have a test planning meeting." [Laughter]
AVDI:
You should tell that story.
JOSH:
That’s just great.
ELISABETH:
Oh, that story. Well, the company in that story actually is now dead, and for good reasons. [Chuckles]
JOSH:
[Laughs] Oh, no!
ELISABETH:
The story indicates a company that was really dysfunctional beyond dysfunctional. So, I was working for a company where we had implemented the same feature. We were going on three times implementing the same exact feature. And each time the specification had come into the development team because we were doing waterfall and so, the product managers were writing giant MRDs. They would spend weeks and weeks on these MRDs and then throw them over the wall to the development, the programmers. And then, the programmers would spend weeks and weeks and weeks implementing, not doing anything even remotely like continuous integration. They would only integrate stuff at the very end. It’s entirely possible that there were programmers who were checking in code that had never actually compiled. I don’t have evidence for that. But there were enough units of that. And it would take weeks to integrate it all together and then finally, they would throw it over the wall back to QA. So, this is the context. This is the environment that we were in. So, the specification had come through and I had started working on figuring out how we were going to test this thing. And so, I had gone to the programmer who was working on it to say, “Hey, I've got the specification. Let’s talk about how we’re going to test it.” And the programmer was telling me all about how it was implemented. I'm looking at the spec and talking to the programmer and thinking, “These two things don’t really match up.” So, I went to the product manager who had written the specification and I said, “So, this feature, exactly what are you expecting? I've got the specification. I just talked to the developer. What are you expecting to have happen?” And he starts walking through the scenarios that he wanted to use the feature for. And of course, the scenarios are not in the specification. A specification is a very literal ‘this is exactly what I want the software to do’ with no why at all. By the way, I'm going to invoke Dude’s Law here. Oh, God! I'm blanking on his name, that’s really embarrassing - David Hussman. David Hussman has Dude’s Law and it’s a ratio that I should totally be looking up right now instead of trying to remember it. Apparently, I'm not sufficiently caffeinated but basically, the less why you have, the worse your stuff is or the worse value you’ve got. So, there's no why in the specification. But he starts walking through the why. He wants to be running this A/B test and he’s got an entire set of things that the minute he gets this feature, he’s going to be able to run through. And I'm thinking about what the programmer just told me he’s implementing. He’s not going to be able to do any of that, like none of it at all. And remember, we’re on implementation attempt number three. So, it’s not like we haven't tried to do this before and had these conversations before. So, recognizing that there's a huge discrepancy, I say to the product manager, “Hey, so I just talked to the guy who is writing this software and I think that there is a disconnect between what you're expecting to be able to do and what the software is actually going to do.” And the product manager cut me off at that point and said, “No, no, no. I just talked to him yesterday, it’s awesome. Everything is going to be great. It’s going to work fantastic. I'm so excited. As soon as your department finishes testing, he hands it off to me. 
I can't wait to use this thing.” And I'm thinking, “This is not going to work.” So, I go back to the programmer and I'm like, “So, I just talked to the product manager and great conversation. However, I think there's a disconnect between what you're implementing and what he actually needs.” And the developer said, “No, no, no. You don’t understand. I had a meeting yesterday with the product manager. We went through everything and he says a lot of stuff but he doesn’t actually need that. What he really needs is this thing that I'm writing for him.” By the way, he didn’t invoke the words ‘simplest thing that could possibly work’ because that wasn’t a thing at the time. But he might as well have and that would be the simplest thing that couldn’t possibly work.
CHUCK:
[Laughs]
ELISABETH:
But he was trying to tell me that, "No, no, no. He doesn't need the complicated stuff. He just needs this simple thing that I'm going to give him." So, I tried to call a meeting to discuss the requirements. And I sent the meeting request to both of them with the subject of 'requirements' or something. I get back two rejections in Microsoft Outlook saying, "No, no, no. We don't need to have this meeting. We already had this meeting without you. We know what we're doing. You just need to get [inaudible]." I was feeling a little bit frustrated at this point. But I decided to try a ninja move. And clearly, the issue was I did not know what was going on so I will invite these two people to a meeting about test planning. So, I did. I resubmitted the meeting invitation with the title 'Test Planning' and an explanation in the body of it saying, "Fine. I'm willing to believe that I have no idea what's going on. Tell you what, why don't you guys clue me in. Let's all get together in the conference room." So, about eight or ten of us assembled total, including multiple representatives from programming and a couple of product managers and me. And I started the meeting with, "Hey, product manager, you had given me a whole bunch of scenarios for what you wanted and what you're going to use it for. Let's walk through one of those scenarios." And he walks through, "Okay. So, I'm going to create two pages." And this was back before A/B testing was a thing also. So, this was a feature to do something like A/B testing in a particular context and not really exactly a website. But it's close enough to say A/B testing. So, he walks through and says, "Okay. I'm going to create content that looks like this for one test and then content that looks like that for this other thing. And then, I want to be able to measure the statistics about this, da…da…da…, user action, blah…blah…blah…and compare the two." And so, we walked through the scenario. And I turned to Mr. Programmer guy and I say, "So, how do I actually test that?" And there is dead silence in the room. And you see the programmer shuffle his feet and look down. You see him just deflate. And he looks at the product manager and he says, "You can't." And the product manager says, "What?!"
[Laughter]
ELISABETH:
At that point, my work was done. And I left the room to let them sort it out. [Chuckles]
CHUCK:
That’s funny.
JOSH:
That’s kind of like the equivalent of, “Mom, Dad said it was okay.” [Laughter]
AVDI:
That’s kind of a fascinating example of like you’re exploring the software that doesn’t exist yet, right?
ELISABETH:
Exactly.
KATRINA:
It’s also really interesting to note that this exploratory testing is not a technical thing. When earlier, Josh was saying that Rails doesn’t seem to set the stage for exploratory testings because no software or framework or technical thing sets the stage for exploratory testing. It’s a person thing.
It’s a people or a process thing.
ELISABETH:
Well, there’s totally that. I will suggest though that there is a technical component to it in the sense that if your software is not testable, by which I mean, it does not offer the possibility of getting visibility into its inner workings, nor does it give you any control, then it’s really, really hard to do exploratory testing. So, you can identify all these things that you could vary theoretically, but you have no mechanism by which to vary it. And that’s why I was going to actually mention about Rails that with the Rails console, you’ve got infinite -- no matter what it is that you’re building or how much you expose or don’t expose on the presentation layer in the views, you can totally get in there and start exploring the interactions between your models or between your controllers and your models because you’ve got the Rails console. So, you can do anything. Rails is actually great for exploratory testing.
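[A hypothetical Rails console session in the spirit of what's being described; the Order and Discount models and their attributes are invented. The idea is simply to poke at the models directly with values the UI would never send and see what actually happens.]

    $ rails console
    >> order = Order.create!(quantity: 1, unit_price: 10)
    >> order.update!(quantity: -5)            # does anything object to a negative quantity?
    >> order.reload.total                     # did that produce a negative total?
    >> Discount.new(percent: 150).valid?      # can a discount exceed 100%?
    >> Order.where("quantity < 0").count      # how much of this is already in the data?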
JOSH:
I’m going to suggest that Avdi and Elisabeth do a pairing session of doing some exploratory testing in the Rails console sometime.
AVDI:
That would be awesome.
ELISABETH:
That would be so much fun.
AVDI:
That sort of leads into one of the things I want to -- I’ve got a lot of things I want to talk about. [Laughter]
AVDI:
But it sort of leads into one of them, which is like I keep thinking, like Katrina said, I’m so optimistic about my software and I’m so sort of automatically easy on it. I click a button and I know what’s going on in the background and I know it takes a little bit of time. So, I would never dream of trying to hit refresh or back or something in the middle of hitting that button because I’m like, “Okay. Now, it’s doing its thing and I’m going to wait until it’s done its thing.” And then it’ll be done. And then I’ll check the outcome. Do I need to go and take a walk and take some deep breaths and get into character and put on a different hat?
ELISABETH:
Yes.
[Laughter]
ELISABETH:
Personas.
AVDI:
I almost feel like I need somebody next to me who isn’t involved in writing my software just because it’s so hard to get into that critical frame of mind.
ELISABETH:
Well, I totally see that. It can totally help to have somebody sitting next to you who thinks differently than you do, which is why I strongly advocate exploratory tester and programmer pairings because I think that that can work incredibly well, as my experience on Bring Light showed. But there is an entire chapter in the book about personas and that's the other thing that you could do. If you don't happen to have an exploratory tester person handy, you can always put on another hat. It's really about exercising your empathy and picturing yourself in a completely different role. So, you are that impatient, type A, inventory manager person standing in a warehouse and you are being asked, "Do we have at least 72 widgets in stock? Because we've got a big order from ACME Corp?" And you have to look it up right away and you're late for a meeting and you're a type A, so being late for meetings is so not okay and you're just stressed to the max and the stupid website won't respond because you're on a flaky wireless. If you channel that while you're using your software, you're more likely to find the things that users are going to actually experience rather than thinking, "Okay. Well, I know that the controller is going to go make the AJAX request and da…da…da…and I'll just wait until everything's back," because you're going to be channeling that, "Holy crap. I got to get to this meeting but I've got to look up this number and da…da…da…" Makes sense?
AVDI:
It does. Although you realize you're making a controversial statement there when you say that programmers should have empathy.
[Laughter]
ELISABETH:
How is that controversial? You can’t pair without empathy. You have to have empathy just to do pair programming.
AVDI:
Yeah. Well, there are also a lot of programmers out there who think that pairing is crazy.
JOSH:
I want to shift gears just a little bit. One of the things about the book that’s so great is all the stories. As a sidebar here, Elisabeth, I have to ask, you have this amazing collection of details of little anecdotes or reminiscences about particular bugs in projects from ten or 20 years ago. Have you been writing that stuff down for years and years just so that you would have good fodder for a book eventually?
ELISABETH:
Well, yes and.
[Laughter]
ELISABETH:
So yes, I’ve sort of been writing this stuff down. People ask me how long did it take me to write the book and it really depends on when you start counting from. Because I had a class that I used to teach back in 2000 called Bug Hunting. And I’ve had presentation material and class material where I’d captured that. So, it’s not like I’ve been keeping these little private journals of bugs I have known for years and years and years and then only now have chosen to reveal them. I actually have a lot of material that I had written down and published, just not in book form, about this stuff. So yes, I had written a lot of stuff down. But the other thing about good explorers is they tend to have this -- James Bach refers to it as ‘you’re a mental pack rat’. You remember every bizarre thing that you’ve ever seen because that provides you with an entire inventory of things that you might consider trying in your next context if you see similar potential behavior patterns. So, I also have that weird mental pack rat syndrome where I tend to make friends with bugs and I want to understand them deeply and understand how they got into the system and understand how I can prevent them next time.
JOSH:
Before you squash them. [Chuckles]
ELISABETH:
Right, and then I squash them like the bugs that they are.
JOSH:
[Laughter] That’s great. Okay. So, that was actually a sidebar. But thank you. [Laughter] You’re funny. My favorite story from ‘Surely You’re Joking, Mr. Feynman’ is the one that you included a reference to, which is the one where he prevents the Manhattan Project from exploding by asking a stupid question.
ELISABETH:
[Laughs]
JOSH:
I loved your perspective on that story. The thing that I loved about that story, when I read the essay about that in ‘Surely You’re Joking’, the point to that that I took away was don’t be afraid to look stupid by asking dumb questions.
ELISABETH:
Right.
JOSH:
And the bit of context right before that story was he was brand new. It was his first day on the project. He had no idea what was going on. He didn’t even know how to read one of these blueprints. So, he figured he would just start exploring the space by asking a question that he was sure would make him look like an idiot. And the value in that question was huge. If it had gone much further, it would have been a big catastrophe or something like that. At least, it would have been very expensive to fix, like our Bay Bridge here in San Francisco is trying not to be.
KATRINA:
I’d like to take this in a slightly different direction. One of the things I really appreciated about the book was that you provide heuristics for how to deal with very subtle things, like timing issues or what seems to be non-deterministic behavior, things that happen once and never again or never when you’re watching it. And I thought that was extremely helpful. I cannot count the number of times I have had a system that seems buggy but I have no idea how to approach it. I thought that it was very helpful to read about how you would approach that type of situation.
ELISABETH:
Thank you.
KATRINA:
I probably should have included some sort of question.
[Laughter]
ELISABETH:
That would prompt me to talk more. But as you’ve discovered, it’s actually harder to get me to shut up than to get me to talk.
[Laughter]
KATRINA:
Okay. So, talking about a little bit of random stuff, I want to bring up something that James mentioned. Are you familiar with fuzz testing?
ELISABETH:
The way that I would see fuzz testing is that it's taking a whole bunch of random inputs into a system to see how it responds to them. And I would not consider myself an expert in that. There are people who actually just study that and there was a program at, I think it was Carnegie Mellon, called the Ballista Project where they took a whole bunch of different implementations of Unix and started throwing bad inputs at them just to see when they would fall over dead. And that was kind of fun to watch the results from those experiments. But this is a really long-winded way of saying 'yes'.
KATRINA:
Do you find that this is a part of exploratory testing or complementary to it?
ELISABETH:
It totally could be. I would consider that an example of identifying a variable. So, one of the chapters is all about how to identify variables and as a hint, it's not that thing where you say int foo, but it's the things that you can vary. So, you've identified a thing that you could vary and the kinds of inputs that it takes. And now, you're taking a whole bunch of possible values and throwing them against that variable to see what the behavior of the system is. And so, I think that it's a great example of using test automation to support your exploration.
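[A tiny Ruby sketch of using throwaway automation to support exploration: once you've identified something you can vary, throw a pile of hostile values at it and watch for surprises. The parse_quantity method here is hypothetical, standing in for whatever input-handling code is being explored.]

    candidates = ["", "0", "-1", "9" * 100, "NaN", "1e308",
                  "'; DROP TABLE orders;--", "\u0000", nil]

    candidates.each do |input|
      begin
        puts "#{input.inspect} => #{parse_quantity(input).inspect}"
      rescue => e
        puts "#{input.inspect} => raised #{e.class}: #{e.message}"
      end
    end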
JOSH:
Hey, can we talk about tools a little bit?
ELISABETH:
Sure.
JOSH:
So, you mentioned the Rails console or I guess irb or pry if you're not doing a Rails project. I think that's kind of like the -- I won't call it the nuclear option, but it's like the big gun, right?
ELISABETH:
[Laughs]
JOSH:
You can do just anything in there. So, I think it’s an awesome tool. What other kinds of tools are useful for doing this kind of thing?
ELISABETH:
Oh, wow. Well, since I’ve started doing a lot more programming in the last several years than I had been doing when I was focused exclusively on testing, it’s harder for me to think about the tools because they’re just part of my programmer toolbox. Like you said, you’ve got irb; anything, any kind of console thing where you can get into the guts of the system. I use, if I’m doing web development, I’ll use Firebug or Chrome’s Developer Mode to see exactly what’s on the page and to be able to change values, and then JavaScript Console and stuff. That’s the kind of thing that when I was much more in that isolated and siloed role of tester, I kind of viewed as my like black magic tricks that, “Look, gray box testing. I can sneak stuff in.” And now that I do a lot more development, they’re kind of not magic anymore. They’re just like how I do my job. So, it’s harder for me to think about them and start listing them off. But other examples would be, back in the day when I was doing a lot of Windows-y stuff, Fiddler is a great HTTP proxy so that you could get in there and start sending your POST requests. But then of course, my other half of the brain goes, “But wait, it’s just a POST request. I can write a Ruby script for that.” So, you start seeing a theme here that the tools that I used to think were cool to mention were all tools that were really developer debugging tools that I was conscripting for testing purposes. So, any debugger-like thing you use as a developer is going to enable you to do really cool exploration.
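["It's just a POST request. I can write a Ruby script for that" might look something like this minimal sketch using the standard library's Net::HTTP; the endpoint and parameters are placeholders. The point is to send requests the UI would never send.]

    require 'net/http'
    require 'uri'

    uri = URI('http://localhost:3000/profiles')   # placeholder endpoint
    # Extra fields, oversized values -- things no form on the site would submit.
    response = Net::HTTP.post_form(uri, 'name' => 'A' * 10_000, 'admin' => 'true')
    puts response.code
    puts response.body[0, 200]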
JOSH:
Okay. That’s cool. What about things like mocking or test doubles? Do they get used differently or more or less than in TDD?
ELISABETH:
Honestly, I don't use mocking when I'm thinking about exploring. I use mocking when I want to express an expectation. I express the expectation that, "Hey, this call should be made once and only once." But now that you're making me think about it, I could express that expectation and then have a test like if I was to create that mock, so I'm mocking out the external service -- let's take the notion of always and nevers. Your system always has rules that represent always and nevers. If you're doing an accounting system, there should be no way to create or destroy money, kind of one of the fundamental things that should be just true, sort of like gravity is true. And if you ever find any behavior in your system that violates that expectation, it's potentially an incredibly serious bug. So, it occurs to me, now that you're suggesting it, that if I were to think about always and nevers of the system and put in place mocks in appropriate places and then write just little throwaway tests. Like I'm not going to check these in but I want to explore the behavior of the system at a code level. I've expressed my expectation in my mocks. This thing should be called once and only once. So, there should be no mechanism by which I can make this particular external call get called multiple times. Let's say I'm calling out to a payment gateway and there just should be no circumstance under which a user gets double charged. Then I would be creating the exploratory tests inside things that otherwise would look like unit tests, and they actually might look really honking ugly unit tests. Like I said, never going to check them in. But the whole idea is to see if I can exercise some aspect of the system or the API or the collaborating classes within the system to cause that mock to suddenly say, "Hey, you tried to call this thing twice." Does that make sense?
JOSH:
Yes. [Laughs]
ELISABETH:
You give me a tool, I use it. [Laughter]
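[A rough RSpec-flavored sketch of the throwaway "always and never" exploration described above, with a hypothetical Checkout class and payment gateway; the mock encodes the invariant (charge exactly once) and the example tries to provoke a violation.]

    # Throwaway exploratory spec -- not meant to be checked in.
    RSpec.describe "double-charge invariant" do
      it "never charges the gateway twice, even on a double submit" do
        gateway = double("PaymentGateway")
        expect(gateway).to receive(:charge).once       # the 'once and only once' rule

        checkout = Checkout.new(gateway: gateway)      # hypothetical class under exploration
        checkout.submit
        checkout.submit                                # simulate an impatient double click
      end
    end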
AVDI:
One of the most interesting tools that you mentioned for me was when you mentioned characterizing a system that was new to you and one of the tools you used was to set triggers on a database table.
ELISABETH:
Oh, yeah.
AVDI:
To find out if certain interactions touch that table. At that moment, I felt like such a complete amateur because I realized I don’t know how to do that.
ELISABETH:
[Laughs]
AVDI:
That would be incredibly important, especially when I’m diving into a system that I’m not familiar with.
ELISABETH:
Honestly, I’m not sure if I remember how to do that anymore because it’s not the done thing anymore. This was back in the day of client-server computing, before all of this fancy new-fangled web stuff. But I also had the advantage of -- I had previously worked at Sybase a long time ago. And so, I kind of knew my way around a relational database. But again, it comes back to that idea that whatever you’ve got available to you that’s going to create visibility, seize it and use it. It’s not always necessarily easy to see what you have that creates visibility, but something like a trigger that’s going to enable you to hook in and cause something else to happen so that you can observe where things happen. It’s just another example.
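[The trigger trick predates most Rails apps, but a rough modern equivalent is to subscribe to Active Record's SQL notifications and log anything that touches the table you care about; the table name here is a placeholder. Different technique, same goal: cheap visibility into whether an interaction hits that table.]

    # Throwaway visibility hack for a Rails console session or an initializer:
    ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
      if payload[:sql] =~ /\borders\b/i
        puts "[orders touched] #{payload[:sql]}"
      end
    end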
AVDI:
You mentioned listening for funny noises coming out of the computer. [Laughter]
AVDI:
It’s great, because that’s actually part of my, I guess unconscious relationship with my computer, not necessarily for my own software. Sometimes for my own software, but sometimes for other stuff. It’s like, “Hmm, the hard drive is crunching really hard right now. I bet I know what process has gotten out of hand.” But are there any funny noises, equivalents to funny noises for web applications? Since a lot of our listeners are developing Rails applications.
CHUCK:
I just want to jump in here real quick and say Avdi, if you have funny noises coming out of your computer, take the modem out of it. [Laughter]
JOSH:
Or just close the YouTube window.
[Laughter]
ELISABETH:
It’s funny you mention web applications and funny noises. Firefox, for some reason, makes my computer just a little bit slow and Google Hangouts on Firefox make it sound like a jet engine taking off. My little Air is trying to take flight. So, I would say that listen for your fan going off because it indicates that your CPU is crunching really, really hard.
JOSH:
[Chuckles] Okay. Another poor segue here, but the day to day project pace and exploratory testing. By the way, do you call it XT for exploratory testing?
ELISABETH:
ET.
JOSH:
ET. Okay.
ELISABETH:
ET, yes.
JOSH:
Okay.
ELISABETH:
And by the way, that’s a thing. I didn’t make that up. It’s been called ET for quite a while and there are some other folks in the community, James Bach and Cem Kaner are two in particular, also John Bach, James’s brother. They’ve done an awful lot with exploratory testing. And Cem Kaner coined the term and they’re the ones who called it ET. So, coming all the way back to the beginning of this conversation with, “Hey, so this is a thing?” It’s a thing. [Laughs]
JOSH:
Yeah. By the way, the easy button story that you tell in the book, I actually just saw that video recently. It was hilarious. It was great. [Chuckles]
AVDI:
The book introduced me to that video and I immediately tweeted it and we’ll put that in the show notes.
JOSH:
That’s probably where I saw it, was in your tweet stream.
AVDI:
Probably.
JOSH:
Okay. So, exploratory testing in a project. As a Pivot, my day to day existence was I lived in Pivotal Tracker and I'd look at what story was top in the current iteration, I'd click start, and I'd work on it. Eventually, it would get done. And if there were things like provisioning servers or whatever, we would do them as chores, not as feature stories. Do you actually put stories in the backlog to work on in Tracker?
ELISABETH:
No.
JOSH:
Well, why not? [Chuckles]
ELISABETH:
Well, there’s a very good reason. Let’s say you’re implementing a story. So, it’s going to be a tiny increment of value, of capability. The question then is when do you consider it done? Is it done when you’ve written a bunch of tests? You’re doing test-driven development of course so you write your little tests, you write your small amount of code. Write your little test, write your small amount of code, run all of the tests, check in, continue the story development, very typical developer workflow. But in Pivotal Tracker, when do you decide to click finish so that it will go into the deliver state? And for me, I’m not done until not only have I expressed my expectations and seen that I have created code that meets those expectations, but I’ve also actually looked at its behavior from that outside point of view. I’ve actually tried to use it. I’ve considered some possibilities and ways to vary how I use it. And I consider that part of being done with the story. So, I don’t want to separate the notion of the exploring from the notion of the creation by creating a separate story for exploratory testing every single story that we’re working on from a development standpoint because that’s just another example of separating the exploring from the creation. Even if it’s all in the same Tracker, it’s too easy to say, “Hey. Well you know, we don’t really have time for this so we’re going to prioritize those stories later.” And then, it doesn’t matter if you’re a collocated team using Pivotal Tracker, you still are having a ‘throw it over the wall’ moment when you go, “Crap! I hope this actually works for what the customer wanted.” So, I consider it part of done.
KATRINA:
I realize how deficient I am when I’m listening to you because often I’ll test drive a feature and I won’t even realize that I haven’t even tried opening my browser. I won’t notice that it doesn’t actually work because I test drove it. I wrote all the tests. If all the tests are passing, I’m done.
ELISABETH:
So Katrina, I have had the great joy to work alongside you for a week and I am here to tell you that there is nothing even remotely deficient about you. So, stop that.
CHUCK:
[Laughs] I have to say, I love getting those bug reports back where all your specs are passing and so you check it in, and then you get the bug report back that says, "But it doesn't work."
ELISABETH:
[Laughs]
AVDI:
I have totally done the Knuth thing of basically saying I have proved this to be correct, but I haven’t actually tried it. [Laughter]
CHUCK:
Do you have any other cues that prompt you to go on an exploration of your code, other than just 'I finished this feature and I want to make sure that when I click stuff, it does what it's supposed to'?
ELISABETH:
The one other cue is when I realize that I'm reaching a certain level of uncertainty about how things play together. I might have a set of features where I'm sure at this point that they do what we intended, but that emergent property of the overall system. Like I've implemented shopping cart and I've implemented discounts, but I'm feeling a little uncertain about how discounts and shopping cart all work together. That's the one time when I actually might create a chore in Pivotal Tracker to say, "Hey, let's explore the combination of these things and let's set aside an actual chunk of time to do that that is not dedicated to a specific story." But the cue, coming back to your actual question, the cue is I feel uncertain about something. Because if I ever am thinking, "Wow, I don't actually know what's going to happen if I do this and this and this," the cure for 'I don't know' is some exploration.
JOSH:
Okay. Do we have any more notes from James to take a look at before we -- it seems like we’re getting towards the end of things.
CHUCK:
Yeah, we’re pretty close to our hour too.
JOSH:
Okay. Oh, so state tables.
ELISABETH:
Yeah.
AVDI:
Love that section.
JOSH:
Yeah, that was actually probably the most powerful technical insight that I got.
ELISABETH:
Sweet.
JOSH:
Showing the value of a different representation for the same information was eye-opening. We’ve all gotten so used to looking at state diagrams as circles and arcs and labels and saying, “Oh well, this is a great description of what the state machine does.” But transforming it into a table view, or basically a sparse matrix representation of the state diagram, makes it so much easier to see what the machine is not doing.
ELISABETH:
Right.
JOSH:
And just the value of that for being able to explore and understand if your code is behaving the way you expect was just, I’m really looking forward to having that as a tool in the future.
ELISABETH:
Sweet.
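[A small Ruby sketch of the table idea, with a made-up order workflow: enumerate every state/event pair, and the blanks in the table are exactly the unanswered questions worth exploring.]

    STATES = [:cart, :paid, :shipped]
    EVENTS = [:pay, :ship, :cancel]

    # Known transitions: state => { event => next state }
    TRANSITIONS = {
      cart:    { pay: :paid },
      paid:    { ship: :shipped, cancel: :cart },
      shipped: {}
    }

    STATES.product(EVENTS).each do |state, event|
      next_state = TRANSITIONS.fetch(state, {})[event]
      puts format("%-8s %-7s %s", state, event, next_state || "??? (unspecified -- explore this)")
    end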
JOSH:
There’s a lot of stuff like that in the book. That was the one that really stood out for me. But I liked how the book was written. I felt like it was really accessible to beginners and that it laid a really good foundation for all of this stuff. But it also seemed like there was enough subtlety to it that there would be fodder for people who had a fair amount of experience at this too.
ELISABETH:
Thank you. I was hoping to do that. So, I’m really glad to hear that you felt that it did.
JOSH:
Yeah. Well, it definitely works for me. I have a question though about odd little anecdotes, which you keep coming back to. So, timing sensitive testing is something that I think drives us all slightly insane. And I can’t tell you how many difficult software issues I’ve had relating to information being entered close to midnight.
[Laughter]
ELISABETH:
Oh, yeah.
JOSH:
Or on the last day of the month, or on the 31st day of the month.
CHUCK:
She also put time zones in there, which is always fun.
JOSH:
Yeah. So Elisabeth, a long time ago at Caltech, I was talking to someone and he used the phrase ‘phase of the moon error’.
ELISABETH:
Right.
CHUCK:
[Laughter] Nice.
JOSH:
And I said, "What the hell is a phase of the moon error?" So, he explained it as something that happens infrequently, but with some sort of apparent regularity, and it's sort of mysterious why it would happen that often. And then, he told me the story of what the original phase of the moon error was which I found somewhat unbelievable. But I was wondering if you'd heard a story about that as well. The story that he told me was that way back when on mainframes and timeshare systems, we would have an essentially dumb terminal talking to the mainframe. People got sophisticated there. It's like we have ZShell and all that now that has a lot of bells and whistles for providing extra information about your operating environment in the shell, the same thing on the mainframe. And one of the things they had was the bottom line of the display was always your status line, like we have the menu bar at the top of our screens now. And it would show the time and whether you had any mail in your inbox waiting to be read. But one of the extensions for this on a system showed the phase of the moon, so it would tell you whether the moon was new or half or full. And somebody thought it would be cute to extend that functionality to include saying when the moon was gibbous. But because of the way the software was written, they didn't actually have access to the source code for that software anymore, so they were just patching the object code. It turned out that the field for that string was only four characters long and they were poking a seven-character string into it. So, at the moment that the moon changed from full to gibbous, the system would crash.
ELISABETH:
Oh, that’s great!
[Laughter]
ELISABETH:
No, I didn’t know that bug, but now I’ll remember that forever.
JOSH:
I have no idea if that’s true or just some apocryphal story. Now, that’s something for you to keep in the back of your brain as you talk to people and see if you can find out whether that’s true.
CHUCK:
That is so awesome.
ELISABETH:
Nice.
JOSH:
So, that’s it on phase of the moon for me.
[Chuckles]
ELISABETH:
That’s awesome.
CHUCK:
Alright. Well, we’re getting pretty close to the end of our time. Are there any other aspects of this that we haven’t covered that we ought to?
JOSH:
What about training and education and community outreach, that kind of thing. There have been tons of talks at conferences about TDD and BDD, et cetera. Where do people go to get more information about this? Your book is a great starting point. Is there some culture or community where we can learn about this stuff more?
ELISABETH:
Absolutely. And one of the places that you could go, the Association for Software Testing holds their CAST conference every year, the Conference of the Association for Software Testing, and there is usually a fair amount of exploratory content there. We've also had people from that community who have given presentations at the Agile conference. So, the exploratory testing community has really been reaching out a lot more to the broader software development community in the last several years. Perhaps what you're suggesting is that even more would be a good idea for that. Another conference that I would recommend that is a testing-centric conference is the Pacific Northwest Software Quality Conference, PNSQC. It's a regional conference, but they attract speakers from around the world and their content is always fantastic. It's run entirely by volunteers. So, those are a few places that you could go if you actually wanted to hang with people who do exploratory testing all the time.
JOSH:
Okay. Probably something we could learn from. Okay, that’s it for me. Anybody else, last words? Avdi? Katrina? Elisabeth?
AVDI:
I think I got most of my comments out of my system.
[Laughter]
ELISABETH:
Avdi, I have to tell you, you weren’t as mean as I was expecting, based on your first telegraphing of, “I’m going to be mean now.”
AVDI:
[Laughs]
ELISABETH:
So, I think you get the [inaudible]. [Crosstalk]
AVDI:
I will endeavor to be more mean next time.
JOSH:
Yeah, Elisabeth, for Avdi that was being mean.
ELISABETH:
Oh, okay. [Chuckles]
CHUCK:
Alright. Well, let’s go and get to the picks. Josh, you want to start us off?
JOSH:
I knew you were going to pick me first. [Chuckles] I could just tell.
CHUCK:
Yeah. Well, your picture’s the one that’s up on top right in front of my face. [Laughter]
JOSH:
It’s always like when you intro me first, you give me the picks first. Okay. So, I have a great pick. My picks are not going to be very exciting this week, except to me. So, I got the new MacBook Air 13” laptop. It was time for an upgrade. I’d had my old laptop about three years. I love how every time I buy a new computer from Apple, it’s far and away the best computer I’ve ever owned. They keep one-upping themselves. I don’t know how they do that. Well, I guess being excellent helps. The thing that they talk about with the MacBook Air is the battery life. And I plugged it in and turned it on and it had 12 hours of battery life. [Chuckles] Twelve hours and 23 minutes, to be exact. And it seems to consistently get around 10 hours even if I’m full Wi-Fi, Bluetooth, playing videos mode. That’s pretty awesome. And then, I got some new headphones for my iPhone because I lost my old ones. I’m a big fan of cheap headphones because I tend to lose them. So, I don’t like spending $100 or $200 on headphones. I always go for the cheap ones. And the ones I found this time that I’m really liking are from Sony. They’re the MDR-EX100IPs. And they’re cheap. They’re like $30 or $40 depending on where you buy them. The sound quality on them is good. They have a very comfortable fit. But most importantly for me these days is they have a little L-shaped connector rather than the one that pokes straight in. I think the durability of that in my pocket will be much, much better. I like the L-shaped connectors better. And I’ve been really liking the sound quality and apparently the microphone’s pretty good for phone calls. Working great for me. So, that’s it for me this week. Thanks.
CHUCK:
Awesome. Katrina, what are your picks?
KATRINA:
I’ve got three today. The first is lineman.js, which is a set of tools to help you build fat client JavaScript apps. It’s all about all of the fiddly stuff that you do around developing with JavaScript. So, it’s the testing in the test servers and the mocking servers and building assets and all of this. And none of it requires Ruby, which means that it’s very, very fast. It’s just a really lovely little set of tools. It doesn’t do any of the MVC stuff. It leaves that to Angular and Ember and all these other things. It lets you wrap whatever it is you actually want to use and just does the tooling around it. It’s very nice. You should absolutely check it out. The second is Grid-It, which is a little thing that helps you organize all of the little pieces of hardware that you carry around in your life, like the little cables and phones and iPads and whatever it is. It’s extraordinary. I’m just going to link to it. Go check it out. And the third is Hugh Howey, the science fiction author. He’s written a bunch of novels set in a post-apocalyptic earth. It’s excellent if you need to procrastinate. I read eight of them this past week. His Wool Omnibus and his Shift Omnibus in the same saga, very good, very enjoyable.
I will link to it here.
CHUCK:
Nice. Avdi, what are your picks?
AVDI:
My picks today are people. Last week, I was frantically getting a talk together and I realized at the last minute that I’d really like to do some audience participation, which involved having a web app. Knowing that I didn’t have time to put that app together, I basically threw the requirement out on the Ruby Rogues Parley mailing list and asked for minions. And folks really just jumped in and made it happen. By the time I gave my talk, it was a fully working web app. It had some bells and whistles that were not even necessary but were awesome. So, the people that really did the most work to make that happen are, in no particular order, Robert Jackson, Jesse House, Jonathan Cairns, Benjamin Fleischer and Michael Moen. Thank you so much for making that app happen in like, I don’t know, three or four days.
CHUCK:
Yeah, it was pretty crazy. I kept seeing the commit messages and stuff come in on my iPhone.
AVDI:
Yeah.
CHUCK:
Awesome. Alright. My picks. My first pick, I’ve been playing with Ember.js a little bit just trying to get my head around it and figure out everything it does. So, the first pick that I have, both of these are done by Ryan Florence, which actually, he’s a JavaScript developer that’s local here. The first one is ember-tools. We actually did a JavaScript Jabber episode on ember-tools. Feel free to go check that out. In fact, that should be out on Friday, so keep an eye out for that on JavaScriptJabber.com. The other one is he’s putting together videos teaching people how to use Ember. You can get those at Ember101.com. So, those are my picks. Elisabeth, what are your picks?
ELISABETH:
My picks are conference-related. I hope it’s okay that I mention a conference I’m involved with, FlowCon. It’s a one-day conference in San Francisco this Fall. It’s a small conference. This is the first year it’s being held and it’s all about connecting value all the way through the software development life-cycle and having a continuous flow of value. We’ve got talks. In fact, Katrina, you’re going to be there, right?
KATRINA:
I totally am.
ELISABETH:
Yey! Katrina is speaking. And we’re going to have talks on everything from very technical things related to continuous flow through design and data and scaling and so pretty excited about that. For a first year conference, I think it’s going to be awesome. And then related to that, in the theme of conferences, I’m super excited about a JavaScript framework for creating presentations, because those who know me and my history with creating presentations know that I have a lovehate relationship with pretty much all presentation software. I kind of need it so I have to kind of love it. But at the same time, none of it is amenable to putting things in the source control. And there have been various attempts over the years. But the one that I’m kind of excited about right now because it looks elegant and gorgeous and also works with markdown is reveal.js. So, I’ll put the link in the link section for that.
CHUCK:
Alright. Awesome. Yeah, FlowCon, that sounds like a really interesting concept for a conference. Alright. So, our next Book Club is 'Understanding Computation: From Simple Machines to Impossible Programs' by Tom Stuart. And he'll be joining us for an episode to discuss the book on August 14th. We have a discount code. So, if you want to buy it, you can get it off of O'Reilly's website. And the discount code is Ruby Rogues. It'll give you 50% off the eBook, 40% off the print version. There is no discount for the combined eBook bundle. So, you're probably going to wind up getting one or the other. Thanks to O'Reilly for giving us a discount. Thanks to Tom for being willing to come on the show. Also, you can get information on the book at ComputationBook.com. Anyway, let's go ahead and wrap up the show. Thanks for writing the book. Thanks for coming on the show.
ELISABETH:
Thank you so much.
CHUCK:
Alright. Well, we’ll catch everybody next week.