CHUCK:
What is here? What is now? What is the meaning of life?
JAIM:
That depends on what the definition of “is” is. [Chuckles]
MIKE:
No, it only depends on what the definition of the “is” was, actually.
CHUCK:
Or “was is” was.
JAIM:
Welcome to the iPhilosopher show!
MIKE:
Please don’t listen to us.
[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York and L.A. bid on iOS developers, providing them with salary and equity upfront. The average iOS developer gets an average of 5-15 introductory offers and an average salary offer of $130,000/year. Users can either accept an offer and go right into interviewing with a company or deny them without any continuing obligations. It’s totally free for users, and when you're hired they also give you a $2,000 signing bonus as a thank you for using them. But if you use the iPhreaks link, you’ll get a $4,000 bonus instead. Finally, if you're not looking for a job but know someone who is, you can refer them on Hired and get a $1,337 bonus as thanks after the job. Go sign up at Hired.com/iphreaks]
CHUCK:
Hey everybody and welcome to episode 116 of the iPhreaks Show. This week on our panel we have Mike Ash.
MIKE:
Hello, from far-fetched Virginia.
CHUCK:
Jaim Zuber.
JAIM:
Hello, from Minneapolis.
CHUCK:
I’m Charles Max Wood from Devchat.tv. And this week we have a special guest; that’s John Reid.
JOHN:
Hello, from Palo Alto!
CHUCK:
Wow, Palo Alto. So we were talking before the show; you said you work for Microsoft, but you work in Apple’s backyard, huh?
JOHN:
Yeah, a Microsoftie working on iOS apps – there are such people.
CHUCK:
That’s kind of part of the thing that got both Microsoft and Apple off the ground, if I remember back in the day.
JOHN:
Ancient frenemies.
CHUCK:
That’s right. Yeah, a lot of history there. So do you want to introduce yourself?
JOHN:
Alright! Let’s see – I’ve been doing iOS for about five years, but before that I was doing Objective-C to do Mac development. And before that I was doing TDD.
So I’ve been doing test driven development since about 2001, so that long pre-dates my involvement with Mac or iOS or Objective-C, but it’s what I like to do now.
CHUCK:
Nice. I’m a big proponent of TDD. And I’ve done training in several companies mainly in Ruby for TDD. So I showed them how to TDD their Ruby or Rails apps.
JOHN:
Uh-hm.
CHUCK:
I’m curious, what tools do you use for testing in TDD?
JOHN:
So I’m kind of old school in that I use XCTest, Apple’s testing framework. Part of that is by necessity, in that I’m usually on teams that are either not doing much testing or not doing TDD, and they’re already using XCTest, or not using anything.
And so rather than cause cognitive overload and say, “Well, let’s use this framework and that framework,” I want to ease people in through the biggest door which is Apple’s testing framework, so that’s mainly what I use.
With XCTest, then I like to use my own testing frameworks to help me, which are OCHamcrest for matching, and OCMockito for mocking.
CHUCK:
So you wrote those then–.
JOHN:
Yeah.
CHUCK:
OCHamcrest and OCMockito?
JOHN:
Uh-hm.
CHUCK:
Now what do you mean by matching?
JOHN:
Matching is the ability to say whether – well it’s basically a predicate system. In the context of testing to be able to say, “Is this the same as that,” for only the parts I care about.
So, you could use equality testing for a lot of things, and just use Apple’s XCTAssertEqualObjects, but a lot of the time that’s overkill. What you want to test is actually a part of the object. For example, instead of comparing entire strings where the prefix of the string is the only part you care about, and you don’t really care about the tail, it’s easy to have a matcher to say, does the string start with this prefix, and I don’t care about the rest.
It’s even more important when it comes to aggregates though, so that you can say for everything in this collection, does this string prefix apply? So the matchers are composable, and that’s where it gets crazy and powerful.
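[For listeners following along, a minimal sketch of the kind of composable assertion John is describing, assuming OCHamcrest’s assertThat, everyItem, and startsWith matchers; the test name and data are made up:

    #define HC_SHORTHAND                 // older OCHamcrest releases need this before the import
    #import <OCHamcrest/OCHamcrest.h>
    #import <XCTest/XCTest.h>

    @interface PrefixMatchingTests : XCTestCase
    @end

    @implementation PrefixMatchingTests

    - (void)testEveryFilenameStartsWithExpectedPrefix
    {
        // Hypothetical data; any NSArray of strings works here.
        NSArray *filenames = @[ @"img_001.png", @"img_002.png", @"img_003.png" ];

        // Composed matcher: every item in the collection must start with the prefix.
        // On failure, OCHamcrest reports which item did not match, not just "false".
        assertThat(filenames, everyItem(startsWith(@"img_")));
    }

    @end
]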
CHUCK:
Oh man, I’d love something like that. [Chuckles]
MIKE:
It’s the idea that when you get an assertion failure, the error is more clear about what’s going wrong and where the data came from than if you just wrote the code manually to loop through and check for prefixes and things like that.
JOHN:
Yes, exactly. It tries to be much more precise to say you were looking over this collection but this particular item failed this test. [Crosstalk]
MIKE:
Versus just a normal assertTrue, where it would just say, you know, “I expected true and I got false,” and good luck.
JOHN:
Yup.
CHUCK:
Right, well, and especially if you build it in. This is the problem with putting it in a loop: a lot of the testing frameworks that I’ve used will tell you, “On this object I expected this and got this,” but it’d be nice to just be able to put that into one assertion or one matcher on the collection. Because my code looks like, “Loop over this collection and make these five assertions” – or, you know, “use these five matchers,” effectively, depending on your style of testing – but it’s not pretty and it’s not always clear what’s going on there. And the loop confuses things even more, so yeah, collection matchers; I love the idea.
JOHN:
Yeah, one reason I like it is to get rid of loops and conditionals in test code –.
CHUCK:
Yep.
JOHN:
And just have the test flow through straight.
Another thing I like – kind of getting back to the better diagnostics that matchers give you – is that if something fails, in the worst-case scenario you just get, you know, “this failed, now you’d better go find out why.” And that means debugging through the test. I’d rather just get that information straight up.
CHUCK:
Yeah. I can tell you that in the tools I’ve used – again, I do mostly Ruby – it’ll tell you, “I have this object and I expected this result to be different from this other result,” but not always. And if there’s an error – like if there’s some place that hits a condition three objects into my collection and it actually raises an exception – then you just get a backtrace.
And so you have no idea that it was the third or fourth or tenth thing in your collection.
JAIM:
So we’re talking about matchers. So, you’ve got two libraries we talked about – OCMockito, and we talked about Hamcrest. We’re talking about OCHamcrest here, right?
JOHN:
That’s right.
JAIM:
Okay. And you could use these matchers along with XCTest?
JOHN:
Yeah. It should work with pretty much any – maybe “any” is too bold a claim – most well-known testing frameworks should be able to pick up OCHamcrest and start using it.
MIKE:
So let’s say this – I mean, this sounds great and I want to go use it right away and I’ve never heard of it before. What’s the five second version of how to get started?
JOHN:
The easiest way to get it is with CocoaPods; you just add OCHamcrest to your test target. If you’re not using CocoaPods, then search for OCHamcrest and get the latest release – I’ve got packaged binaries, or you can build it yourself.
Anyway, my page on GitHub – if you just search for it, the Read Me on GitHub gets you started.
Another thing about OCHamcrest is it’s not just a collection of pre-built matchers, although those are useful. It’s also a framework for writing your own and extending it. So it basically turns your tests into more DSL-like expressions.
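[A sketch of the “write your own matcher” idea John mentions, assuming OCHamcrest’s HCBaseMatcher base class and HCDescription protocol; the IsValidEmail class, the rule it checks, and the factory function are invented for illustration:

    #import <OCHamcrest/OCHamcrest.h>

    @interface IsValidEmail : HCBaseMatcher
    @end

    @implementation IsValidEmail

    - (BOOL)matches:(id)item
    {
        // Deliberately simple predicate: a string containing exactly one "@".
        if (![item isKindOfClass:[NSString class]])
            return NO;
        return [[item componentsSeparatedByString:@"@"] count] == 2;
    }

    - (void)describeTo:(id <HCDescription>)description
    {
        // Used in the failure message, e.g. "Expected a valid email address, but was ..."
        [description appendText:@"a valid email address"];
    }

    @end

    // Factory function so tests read like a little DSL:
    //     assertThat(user.email, isValidEmail());
    static id isValidEmail(void)
    {
        return [[IsValidEmail alloc] init];
    }
]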
MIKE:
I just pulled up the Read Me here; it looks very comprehensive. That should be a great place to get started then.
Sometimes these things can be a little mysterious until you’re initiated into them, so it’s always great to see.
JAIM:
Yeah, OCHamcrest is a really cool and powerful tool. One of the patterns I see very often when a dev starts doing testing in iOS is that they use the one XCTest assertion that they know, whether it’s “true” or “equals”, and they write all their tests with that.
They’ll do whatever weird “equals” and “return true” stuff like that. But that breaks down, especially when you talk about what we were talking about earlier with collections. It’s hard to do clean collection assertions – pattern-matching like that.
When you get into those [inaudible] kind of cases, OCHamcrest is actually very powerful. So, great tool. I give it a thumbs up; I’ve used it in the past and I’ve had pretty good results with it.
JOHN:
Cool.
CHUCK:
What I’m curious about is also the partial matching. So let’s say that you have an object and you want to test that it meets certain criteria; how do you do that without, again, having five or six assertions or five or six matchers in [inaudible] things?
JOHN:
What you can do is basically logically combine matchers with ands and ors.
CHUCK:
Oh, gotcha.
JOHN:
Yeah. [Crosstalk]
CHUCK:
So effectively then it’s – I expect it to have this attribute with this value, and this attribute with this value, and this attribute with this value.
JOHN:
You could do that, although now that’s starting to potentially smell like you’re maybe testing too much in a single test.
CHUCK:
Uh-hm.
JOHN:
But assuming that all of those things are actually focused around a single truth, then yeah go for it.
CHUCK:
Yeah, and you can also build your own matchers that assert all of those things?
JOHN:
Yeah, especially for your own classes where you want to poke in and do something special.
CHUCK:
Right. So I want to test that this is a valid user, and so inside of that matcher it says, “Do they have a username? Do they have a password? Is it hashed? Active check –,” whatever.
JOHN:
Right. You could hide all those details from the test essentially. Then the test reads more logically, you know – is this a valid user?
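[A sketch of hiding those details behind one expressive assertion, assuming OCHamcrest’s allOf, hasProperty, notNilValue, and equalTo matchers; the user property names are hypothetical:

    #define HC_SHORTHAND
    #import <OCHamcrest/OCHamcrest.h>

    // Combine several property checks into a single "valid user" matcher.
    // (allOf is nil-terminated in the OCHamcrest releases of that era.)
    static id validUser(void)
    {
        return allOf(hasProperty(@"username", notNilValue()),
                     hasProperty(@"passwordHash", notNilValue()),
                     hasProperty(@"active", equalTo(@YES)),
                     nil);
    }

    // In a test, the intent then reads in one line:
    //     assertThat(user, validUser());
]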
CHUCK:
Gotcha.
I’m going to veer a little bit more into OCMockito and mocking and stubbing. This has always been a debate that I’m happy to have with people because I always learn something and there’s always interesting tradeoffs that we talk about. But when is it appropriate to mock things out and when isn’t it?
JOHN:
[Chuckles] Well, you never want to mock the thing you’re testing [crosstalk].
CHUCK:
Yes.
JOHN:
Right? That happens.
JAIM:
Never say never. [Laughter] There’s a lot of code where that’s the only choice you have.
JOHN:
I suppose.
MIKE:
Well, if you can mock the thing you’re testing extensively enough then you don’t have to actually implement it, so that would be a way, right?
CHUCK:
Whoo! That’s right.
JAIM:
It’s perfect.
JOHN:
Yeah, but I’m happy to not use mocks. In fact, I tell people to avoid mocks as much as possible.
I've gotten into situations – this happens to everyone who discovers mocks and gets really mock-happy: “Now I’m going to mock all the things!” But I think you're setting yourself up for some testing heartache there, because it just starts to get crazy.
I try to mock only the immediate interactors – the things that a particular class is talking to – if possible. But even there, because things are often hidden in properties of properties, you might need to jam a mock in deeper down.
In general – I’ll credit this show for coming up with the word that I want to use and say in public – use ponzos when you can.
CHUCK:
Ponzos. [Chuckles]
JAIM:
Plain old NSObjects.
JOHN:
Plain old NSObjects.
JAIM:
[Inaudible] survived to the forefront. First time I ever heard it was with him.
CHUCK:
Yup. So yeah, when I’m talking to people about mocking, usually I’m telling them, “If you don’t own it, then you're probably okay mocking it.” But it still depends on how you're using it and what your test is actually trying to assert is true.
But then, the other thing people get into is, “Okay, well I want to test this in isolation,” and so then they use the mocks to basically create an object with the interface they expect. And the problem is if that interface changes – usually that’s the most common breakage due to mocks – you get a false positive.
But you know, anything like that – yeah, it’s much more convenient to use a plain old NSObject, or the actual type of object you're going to use, if you can.
JOHN:
Right.
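[A sketch of the “ponzo” idea – a plain old NSObject standing in for a collaborator instead of a framework-generated mock. The protocol and class names are hypothetical:

    #import <Foundation/Foundation.h>

    @protocol GreetingService <NSObject>
    - (NSString *)greetingForName:(NSString *)name;
    @end

    // Hand-rolled fake that lives in the test target.
    @interface FakeGreetingService : NSObject <GreetingService>
    @property (nonatomic, copy) NSString *lastRequestedName;   // lets a test inspect what was asked
    @end

    @implementation FakeGreetingService

    - (NSString *)greetingForName:(NSString *)name
    {
        self.lastRequestedName = name;
        return @"Hello, stub!";   // canned answer, no mocking framework involved
    }

    @end
]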
CHUCK:
And then I also see people sometimes mock things out because whatever they're talking to isn’t very performant, but you really have to be careful with that, because yeah you want to test that things are getting done, but by mocking that out, you're creating a place where your test can give you a false positive.
JOHN:
That’s where a lot of times people have trouble getting into the test mentality – people ask me questions like, “How do I test this specific code?” And the answer is, “Oh, I don’t like that kind of code.” [Chuckles]
CHUCK:
I was waiting for you to say “rewrite it”. [Laughter]
JOHN:
If something is hard to test, that’s an indicator; that is a valuable form of feedback from the test itself, not just a pass/fail result but how do I feel about this, and how difficult is this to wire up?
CHUCK:
Well, and you also have to break that down: is it hard to test because I don’t have a tool that’s specifically attuned to the situation? Or is it hard to test because it’s hard for me to look at it and know how to explain in code what it’s supposed to do?
JOHN:
Uh-hm.
CHUCK:
And if you can’t explain that explicitly, in code – what it’s supposed to do – then it probably needs some attention.
JOHN:
Listen to your tests.
CHUCK:
Yup. I’m also curious, what does your TDD process actually look like? So let’s say you're going to add a feature to an iOS app; now, you mentioned you work for Microsoft – I’m assuming it’s alright if we mention you work on Skype?
JOHN:
Yup.
CHUCK:
So let’s say that you wanted to add a feature to Skype where, every time somebody typed the word “whistle” their phone would whistle at them. How would you TDD something like that? Like how would you approach that with TDD?
JOHN:
So I am still kind of an inside out type of person. I’m trying to retrain myself right now to be a little more outside in, but I typically like to build up my infrastructure and test those things so that I know that each component I've created is a good, solid and controllable Lego block, and continue to assemble those Lego blocks into something bigger and that usually works for me.
Sometimes it doesn’t; sometimes I’ll end up in a situation where, “Oh shoot,” I’ve apparently gone down a bad trail somewhere. But that’s very unusual. More often than not, I’ve assembled the building blocks, put them together, and then actually hooked it up in the UI and tried it, and it is not that unusual to have it work the first time out of the gate.
JAIM:
Yeah, it’s definitely a good point: if you get into this workflow where you're writing tests before you're actually running the code or running the app, you can get into this flow where things work the first time you run them. And I've gotten into that workflow a number of times where I could just write the tests, they pass, run the app and, yup, it worked; okay, I’m good.
JOHN:
It’s still thrilling though.
JAIM:
Yeah. [Crosstalk]
JOHN:
Oh gosh, it’s like, “Yes!” [Chuckles]
JAIM:
Celebration.
CHUCK:
Yeah.
JOHN:
For me, TDD is like completely gamifying the development process. It’s all about giving me that endorphin rush.
CHUCK:
[Chuckles] Red, challenge accepted! Green, challenge completed!
JAIM:
Apple’s still quite a bit behind in their environment, at least with Xcode, in giving you that red/green. When I did .NET stuff and I was learning TDD, they had auto test runners where you can just code and it’ll run the tests behind the scenes. So actually, after you write the code, the [inaudible] turns green without you running or doing anything. That hasn’t happened in iOS-land yet.
I wish they would work on it because it’s a really cool workflow, and it definitely hits those reward centers that keep us motivated. So you write failing tests, you write passing tests, and you get green lights just by writing the code.
JOHN:
That is cool. I think I've seen somebody hook up a system like that, but I've never seen it in person using XCode’s command line tools in the background.
JAIM:
Someday, my dreams will come true.
MIKE:
Let’s hope it’s sandboxed so that I don’t make a typo and blow away all my documents.
CHUCK:
[Chuckles] I’m curious, does Microsoft or Skype mandate that there be tests or that you use TDD, or is that your own personal preference?
JOHN:
I have always used TDD in environments that are either hostile or just “we don’t care”. Here at Microsoft, I can’t speak for the larger – I mean it’s a huge company –.
CHUCK:
Oh, sure.
JOHN:
So many people in it. And I know there are TDD fanatics elsewhere. In iOS-land I see folks write a lot of unit tests, but I can’t really tell – it doesn’t look like TDD to me. But then I’m only looking at the tests after they're done, so I can't really see what’s going on as people write them.
JAIM:
So when we’re talking about TDD, how is it different from just writing the tests?
JOHN:
So, it comes back, for me, to the classic three steps – red, green, re-factor. And red meaning you start by writing a failing test to express something that you want to add to your code. Green, you make that – basically, you implement that in the quickest way you know how, just to get it so that the test passes. It goes from red to green, and in doing that what you're doing is verifying the plumbing that the test and the system under test are actually affecting each other. That is, if I change code and the system under test, the light on the test changes – red-green, red-green – you can toggle it back and forth. And once you have that in place, then the real goal, to me, of TDD is the third step – maybe the most neglected step; to me it’s the most powerful step – and that is refactoring. Because once you get the code working however you want it to in the cheapest, quickest way, then you step back and say, “Now, how do I make that better? How do I improve that? How do I change the design of the code, and also of the test, to be more expressive and simple?”
And then you do that again and again – those three steps, over and over and over.
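[A sketch of one turn of that cycle, assuming XCTest; the TemperatureConverter class does not exist yet, which is the point of the “red” step:

    #import <XCTest/XCTest.h>
    #import "TemperatureConverter.h"   // hypothetical; creating it is part of going green

    @interface TemperatureConverterTests : XCTestCase
    @end

    @implementation TemperatureConverterTests

    - (void)testFreezingPointOfWaterInFahrenheit
    {
        TemperatureConverter *converter = [[TemperatureConverter alloc] init];

        // Red: this fails (or does not even compile) until the method is written.
        XCTAssertEqual([converter fahrenheitFromCelsius:0], (NSInteger)32);

        // Green: implement fahrenheitFromCelsius: in the quickest way that passes.
        // Refactor: with the light green, improve both the code and the test.
    }

    @end
]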
CHUCK:
Do you ever try writing code and then testing it afterward, just to see how it comes out differently?
JOHN:
I don’t do TDD for everything; I try to, but there are some cases where I just really don’t know what I want to do yet – you can’t specify something if you don’t really have a specification in mind. If it’s more like, “I’ll know it when I see it,” well then I’ll hack on something.
I usually add tests around existing legacy code that I want to change, just to make sure I don’t break it.
MIKE:
So, what are some examples of iOS code that doesn’t really adapt well to TDD?
JOHN:
Animation. Although even there, that could just be because I’m still getting familiar with animation myself. Skype obviously has audio/video components; there too, I’m learning. So when you're in a situation where you're relying on Apple’s frameworks to do stuff and to call you back in a certain way, if you don’t know what those interaction patterns are, then you can't really TDD yet. But once you learn those patterns – the simplest would be working with table views – once you figure out, “Oh, Apple calls this delegate method back to get the number of rows and so forth,” then you can reverse the process and start to specify those things upfront. But if you're not really familiar with how Apple’s going to talk to you, then you’d better step back and do some exploratory work first.
So yeah, animation, video, audio – those things I think don’t lend themselves well to TDD. Other things that do that might surprise people are other parts of view controllers like table views for example; you can certainly TDD those things.
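[A sketch of test-driving the table view pattern John describes – calling the data source method directly, the same way UIKit would. The ContactsDataSource class and its initializer are hypothetical:

    #import <XCTest/XCTest.h>
    #import <UIKit/UIKit.h>
    #import "ContactsDataSource.h"   // hypothetical class under test

    @interface ContactsDataSourceTests : XCTestCase
    @end

    @implementation ContactsDataSourceTests

    - (void)testNumberOfRowsMatchesNumberOfContacts
    {
        ContactsDataSource *dataSource =
            [[ContactsDataSource alloc] initWithContacts:@[ @"Ada", @"Grace" ]];
        UITableView *tableView = [[UITableView alloc] init];

        // Invoke the UITableViewDataSource method ourselves instead of waiting for UIKit.
        NSInteger rows = [dataSource tableView:tableView numberOfRowsInSection:0];

        XCTAssertEqual(rows, (NSInteger)2);
    }

    @end
]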
MIKE:
I’d definitely like to hear you elaborate on user interface testing in general, because that’s something I've always had trouble figuring out. You mentioned animations; to me, the problems there apply to user interfaces in general, but it sounds like you are more optimistic about that side. I’d love to hear what your approach is. Thoughts on that?
JOHN:
Yeah. My general approach with the interface is that I love to test interactions and logic – not so much the visual behavior. Although even that you can capture; for example, rendering something, making sure that your custom drawing is correct. That’s not something I TDD, but once I’m happy with the code, I’ll lock it down with a snapshot test case; that then lets me re-factor that code.
But in terms of other interaction-type stuff – again, if it’s something for which the patterns of interaction between your code and Apple’s code are very clear and well-known – this calls that and that calls this – then once you test-drive that code, that is, you use tests to drive the creation of that code, it should just work. I haven’t really written any UI-level tests yet; I've always relied on the guts of unit testing.
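[The transcript doesn’t name a snapshot library; one widely used option at the time was Facebook’s FBSnapshotTestCase, so this sketch assumes it, with a hypothetical BadgeView doing the custom drawing:

    #import <FBSnapshotTestCase/FBSnapshotTestCase.h>
    #import "BadgeView.h"   // hypothetical custom-drawing view

    @interface BadgeViewSnapshotTests : FBSnapshotTestCase
    @end

    @implementation BadgeViewSnapshotTests

    - (void)setUp
    {
        [super setUp];
        self.recordMode = NO;   // set to YES once to record the reference image, then back to NO
    }

    - (void)testBadgeRendering
    {
        BadgeView *view = [[BadgeView alloc] initWithFrame:CGRectMake(0, 0, 120, 44)];
        FBSnapshotVerifyView(view, nil);   // compares the rendered view against the reference image
    }

    @end

    // The library's setup also expects a reference-image directory to be configured;
    // see its README for the project settings it needs.
]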
JAIM:
When we talk about UI tests, what are you talking about?
JOHN:
I’m talking about actually driving UI elements through KIF or Frank, or now, Apple is providing the ability to do the same thing in XCTest.
JAIM:
Okay. So actually clicking on a button – whether that’s with those tools, the JavaScript framework that Xcode provides, or iOS code.
JOHN:
Yeah, I’ve worked [crosstalk]. Yeah, actually tap this button. People, I think, want badly, desperately, to write such tests because they're easy to imagine and they're easy to understand. The problem is that anything [inaudible] the UI, because of animation transitions and so forth – it’s slow. It’s just really, really slow and fragile.
So, I’m not saying don’t write those – you certainly need a few of those to guarantee that things are hooked up correctly. Instead, the way I write tests for view controllers is, for example, to take a button and say, “Well, first, does this button exist?” And then, what is the action that’s hooked up to a tap on this button? And then, rather than synthesize some sort of actual tap event and send it to the button, I’ll just call the action method and say, “Well, there’s no need to test Apple’s ability to invoke a method on a button – I trust that that works – let’s just invoke the method as if the button were tapped.”
JAIM:
So you're actually calling, in code, the method that we actually wrote in the app – the one that would be called via the nib.
JOHN:
Right.
JAIM:
But you're also testing that the button was created; you can test that. You can also test that there’s an action that calls the method you want to test. So you're testing all the things that would happen if you actually clicked on the button –.
JOHN:
Yeah.
JAIM:
But there are different parts.
JOHN:
Yeah. So basically it’s a test in three parts – does the button exist; does it have this associated action; call the action.
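[A sketch of that three-part test, assuming a storyboard-based view controller; the SettingsViewController class, the saveButton outlet, and the saveTapped: action are all hypothetical:

    #import <XCTest/XCTest.h>
    #import <UIKit/UIKit.h>
    #import "SettingsViewController.h"   // hypothetical class under test

    @interface SettingsViewControllerTests : XCTestCase
    @end

    @implementation SettingsViewControllerTests

    - (void)testSaveButtonIsWiredToSaveAction
    {
        UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Main" bundle:nil];
        SettingsViewController *sut =
            [storyboard instantiateViewControllerWithIdentifier:@"Settings"];
        [sut loadViewIfNeeded];   // connects the outlets (on older SDKs, touching sut.view does the same)

        // 1. Does the button exist?
        XCTAssertNotNil(sut.saveButton);

        // 2. Is the action hooked up for a tap?
        NSArray *actions = [sut.saveButton actionsForTarget:sut
                                            forControlEvent:UIControlEventTouchUpInside];
        XCTAssertTrue([actions containsObject:@"saveTapped:"]);

        // 3. Call the action directly, trusting UIKit to deliver real taps.
        [sut saveTapped:sut.saveButton];
        // ...then assert on whatever observable effect saveTapped: should have.
    }

    @end
]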
CHUCK:
Do you do any end-to-end testing? I know that I talk to a lot of people who do this for the web, and I basically tell them more or less what you said – that it’s just for what’s absolutely critical, the happy path, the “we’re not going to get paid if this doesn’t work” stuff. You know, I encourage them to do end-to-end testing for that.
But it seems like if you’re testing delegates and things like that where it’s, “Okay, well we need to get the data from the UI table view. So we’re going to make the request and make sure we get the right data back.” You know, do you test that all the way down to the data store? Or do you mock that part out and just make it really fast?
JOHN:
So, dealing with the data store – to me, that’s a separate responsibility, and so I would essentially create a fake. When we’re dealing with Core Data, for example, I would create an in-memory data store.
CHUCK:
Right.
JOHN:
And say, “Let’s just use this thing,” because it’ll be faster.
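[A minimal sketch of that in-memory Core Data setup, so data store tests never touch disk; the helper name is made up:

    #import <CoreData/CoreData.h>

    static NSManagedObjectContext *InMemoryContext(void)
    {
        // nil merges the model(s) from the main bundle (the host app, when tests are hosted).
        NSManagedObjectModel *model = [NSManagedObjectModel mergedModelFromBundles:nil];
        NSPersistentStoreCoordinator *coordinator =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

        NSError *error = nil;
        [coordinator addPersistentStoreWithType:NSInMemoryStoreType
                                  configuration:nil
                                            URL:nil
                                        options:nil
                                          error:&error];
        NSCAssert(error == nil, @"Failed to create in-memory store: %@", error);

        NSManagedObjectContext *context = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSMainQueueConcurrencyType];
        context.persistentStoreCoordinator = coordinator;
        return context;
    }
]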
For dealing with networking calls, I will put some actual calls to the actual service in a separate target, not in the usual unit testing target for a couple of reasons. Making a call to an actual service is slow, and I like to be able to do TDD from wherever I am, including on a plane without WiFi – I’m cheap.
So, if I can turn off all my network connections and run most of my tests – all of my unit test – then I’m happy. But I will still have some tests again in a separate target to confirm network communication to the real server.
That is – I’ve found it’s more insurance against changes on the server side. Even if you're working at a company where the server part is owned by the same company and you think, “Oh, they're my buddies; they wouldn’t change anything out from under me without notifying me, right?” Haha.
CHUCK:
Yeah, been there, felt that pain. But are there instances where you want to test the whole stack?
JOHN:
I've left that to other people. I think that’s where other kinds of acceptance testing come in and that’s not where I’ve played.
CHUCK:
Okay.
JAIM:
I think that’s pretty solid – at least having your slower tests, your network tests, separated out in every project. Because if the tests are slow, they're not going to get run. If your devs don’t have confidence that they can run the tests efficiently, they're not going to run them. And if you have tests sitting around that no one’s running, they break, no one knows about it, and it’ll be a useless test base.
So it makes sense that whatever tests you're running frequently, you can run frequently, and they don’t take 30 seconds – however long. You write your test, okay, it passed – you can move on. Because if you don’t have that, your whole workflow falls apart. [Crosstalk]
MIKE:
I would actually argue that if you end up with a bunch of broken tests because you haven’t been running them for a long time, then it’s even worse than useless. Because if the tests were just useless then you could just move on, start over, whatever. But if you have tests that are actively breaking, it becomes an impediment to adding new ones, because then you’ll have to fix all this other junk, too, to make things work again.
JOHN:
Yeah.
MIKE:
And you definitely don’t want your project to fall into that kind of state.
CHUCK:
Yeah. And I think fast tests really help with that just because then there is no, or very little, cost to running them frequently.
JOHN:
Yup. And frequently is a key because with TDD, the three step cycle is sometimes very tight and quick, like you might go through all three steps in a minute.
CHUCK:
Yeah. I had a client that I did some training for. They would run their tests against basically their backend, and it would break because the backend would eventually crash during a test run; you could almost guarantee that it would.
And so they quit running their tests, but they were required to write them, so they kept writing more tests, and you’d get a whole bunch of basically false failures because it was throwing up an error that said, “I tried to talk to the network and it crashed.” So we had a long discussion about how to make the tests basically mock out that layer, so that it was as if they were hitting the data layer.
And then for the longer-running tests – because you do want the integration with the backend system – I highly encouraged them to use continuous integration. That way they could put those up there and, of course, they still had to solve their issues with the backend to make it more stable and more reliable. But in the meantime, people could continue to write tests that would reliably say everything works, and they could run them quickly. And then for the other tests, like I said, the answer was CI, and I encouraged them also to put that up where people could check on it.
And so then if there was some discontinuity between the APIs on the backend and the requests from the front-end system, they could see those and be able to fix them quickly.
JOHN:
Yeah. Your CI system can run a much larger and longer suite of tests than you would want to when you’re doing TDD. Or even if you're not doing TDD – if you're just working yourself and running the unit tests often – the CI can run those and all the others that are too slow, and catch things fairly quickly, at least, you know, hopefully on a particular commit.
CHUCK:
Yeah, but ultimately, for me, this is about communication and collaboration. The fact that you have somebody else’s assumptions, or your own assumptions, codified where you can check against them frequently – without actually having to go ask somebody, “Does this break your part of the system?” – is very important.
And then having it easy to access for the things that aren’t so quick to check; it just makes a lot of sense. And then you can have your CI system yell and scream and cry to people over their e-mail if somebody broke something.
JOHN:
The great thing about building up a good suite – a useful suite of tests is the ability to sleep at night, to live without fear.
My early days with coding before I got into unit testing, I was on a team with a bunch of people who had been given this legacy glob of code and everyone was afraid to touch it, because you never knew what was going to break, and that’s the worst kind of development to have.
CHUCK:
Oh yeah.
JOHN:
Whereas, with a solid suite of tests, you're basically given the freedom to change anything that has been tested.
CHUCK:
So what if somebody hands you that big ball of untested code – what would you recommend people do?
JOHN:
Ah, so we’re going to get into one of my picks, but if I could sneakily advance it – run out and buy Working Effectively with Legacy Code. It’s the key that unlocked things for me. But basically, the idea is to try to box things off, to find the places where you can cut. And if you have this glob – this gnarly mess of code – find a way to isolate part of it.
That part of it may be only the new code that you’re going to write, and you're just not going to touch the other stuff. But at least get the code that you're going to add under test.
Now, that may mean that you're going to write tests as if you were the rest of the code calling back into your code, but at least now you're creating sort of a very small API for a very small part of the code, and you're going to make that sane, and that can gradually spread.
It gets harder the bigger the chunks are, and so the challenge is to find ways to isolate things behind walls as much as possible.
CHUCK:
Yeah.
JAIM:
I can definitely attest for that book. It’s very useful.
A couple of years ago I worked with a client who had a code base that was 20 to 30 years old. So it had – there was tons of stuff built up and that book was invaluable to getting things working, getting code that we could test, and there’s a lot of really cool techniques for it.
And ironically most of the devs that worked there had a copy of that book, but apparently never used it. [Crosstalk]
JOHN:
Maybe they never read it.
JAIM:
Yeah, it was a monitor stand for them, actually. [Chuckles] I’m not sure why but –.
CHUCK:
Helps slow down the book case.
JOHN:
Oh. I know why. It was because some manager said, “Hey, everybody get this book,” and they all did.
JAIM:
That is true.
CHUCK:
It’s possible. I’ve seen that. [Crosstalk]
JAIM:
So John did you ever –.
MIKE:
Did he pay for them or did he make everyone buy them themselves?
JOHN:
[Chuckles]
JAIM:
You have to buy it yourself and not read it.
But John, the other library you talked about was OCMockito.
JOHN:
Yeah.
JAIM:
And so the default tool everyone uses, because it’s been around forever – I was surprised how old it is – is OCMock. What does OCMockito bring to the table?
JOHN:
So, yeah. OCMock is the granddaddy of them all, and I used it and contributed to it. And I started to feel some pain with it, especially around how, when a verification would fail, it throws an exception. I think for Mac testing that was okay, because the Mac testing framework would catch it and report it. But for iOS testing, it would just crash and that was the end of the story – something like that.
Now that probably changed in the past few years, but still I wanted something that would more precisely report with more than an exception what expectation was not met. And I got tipped off by an online friend as I was starting to look for alternatives, and he said, “Look at this thing, Mockito. It approaches mocking in a different way.”
So OCMock was written in the classical mocking style from when mock object frameworks were first invented – basically out of jMock – which is a style where you set up your expectations first, then execute your code, and then you tell the mock objects, “verify your expectations.”
Mockito flips that around and says, “No, let’s just set up your mock so that it exists; it’s ready, it’s in play when your code is run.” Run your code, now what the mocks are doing is recording the calls that were made to it. Now let’s go back and query the mocks and say, “These are my expectations.”
So your expectation comes as an assertion at the end rather than during the setup at the beginning. It makes for tests that read better. OCMock did come out with a major new version that brings the new style of mocking to the fore, so that you end up with verifications at the end, but that’s a pretty new development.
The other thing I wanted with OCMockito was to make the Hamcrest matchers first-class citizens for testing what arguments were sent to a particular method – rather than having them be very strict and just use equality, to be able to have a matcher that says, “Well, as long as it satisfies these predicates, it’s good.”
So by not throwing an exception, OCMockito actually identifies the line where the verification fails, so that you can just click on it in whatever IDE you're using and then go straight to that part of the test.
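[A sketch of that flow, assuming OCMockito’s mockProtocol/verify shorthand and an OCHamcrest matcher as the argument; the Logger protocol and the Greeter class are hypothetical:

    #define HC_SHORTHAND                    // older releases need these defines before the imports
    #import <OCHamcrest/OCHamcrest.h>
    #define MOCKITO_SHORTHAND
    #import <OCMockito/OCMockito.h>
    #import <XCTest/XCTest.h>
    #import "Greeter.h"                     // hypothetical class under test

    @protocol Logger <NSObject>
    - (void)logMessage:(NSString *)message;
    @end

    @interface GreeterTests : XCTestCase
    @end

    @implementation GreeterTests

    - (void)testGreeterLogsAGreeting
    {
        id<Logger> logger = mockProtocol(@protocol(Logger));
        Greeter *greeter = [[Greeter alloc] initWithLogger:logger];

        [greeter greetName:@"World"];

        // Verification happens after the code runs; the argument is checked with a
        // Hamcrest matcher instead of strict equality. On failure, OCMockito flags
        // this exact line rather than throwing an exception.
        [verify(logger) logMessage:startsWith(@"Hello")];
    }

    @end
]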
JAIM:
Okay. So the benefits of OCMockito are, one, there’s better integration with OCHamcrest. And the second one is that you write the verification afterward. So you run the code and then you verify it – did this happen, did this happen, did this happen – versus setting it all up front –.
JOHN:
Yup.
JAIM:
Which is the old way of doing it. But doing it afterwards definitely does make your tests clearer. Like, okay, “Did this happen, did this happen?” It’s closer to how we do it mentally.
One feature I remember being missing from OCMockito – and this goes back to what I mentioned about never mocking the class that you're trying to test – is partial mocks. Are those still not in OCMockito?
JOHN:
No, they are not in OCMockito, and so if you really want partial mocks – people ask for them – I say, “Yeah, maybe you should use OCMock instead.”
I took a crack at partial mocking and found out that it’s harder than I thought. OCMock uses a very clever system, which I may borrow someday, of dynamically subclassing. So when you create a partial mock, it actually on the fly uses the Objective-C runtime and says, “Let’s create a subclass of the object, and then replace certain things on it.”
That matches what you would do with a hand-rolled mock and that’s pretty cool. But if you feel the need for a partial mock, that could be a smell.
JAIM:
Well, it definitely is a smell, but it’s also very easy to write a view controller that is not testable. And so when you encounter one that people haven’t actually been writing tests for, it’s not that rare that you have to at least start stubbing things out with partial mocks.
JOHN:
Yeah.
JAIM:
I ran across that just for pragmatic purposes, like this is the only way I can test this right now.
JOHN:
Yeah. And that’s where I fall back on the biggest technique I got out of the Michael Feathers book, which is subclass and override – which is basically partial mocking. You take the thing that you want to test, and if it’s making calls to something that you really don’t want to have happen during a test, then you override those bits and say, “Well, let’s not do that.”
And so when I need that control usually with legacy code, I’ll fall back on doing it manually.
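[A sketch of subclass-and-override as John describes it: a testing subclass that overrides the one call you don’t want to happen. The OrderSubmitter class and its methods are hypothetical:

    #import <Foundation/Foundation.h>
    #import "OrderSubmitter.h"   // hypothetical production class; submitOrder calls postToServer

    // Testing subclass lives only in the test target.
    @interface TestableOrderSubmitter : OrderSubmitter
    @property (nonatomic) BOOL didPostToServer;
    @end

    @implementation TestableOrderSubmitter

    - (void)postToServer
    {
        // Override: record the call instead of hitting the real network.
        self.didPostToServer = YES;
    }

    @end

    // A test can then call -submitOrder on a TestableOrderSubmitter and assert
    // that didPostToServer is YES, without any network traffic.
]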
JAIM:
That makes sense. So speaking of subclass and override, did you look into doing any Mockito-type things with Swift?
JOHN:
I have not. I leave Swift to the brave young folks out there, because I’ve seen, up until recently, a bit too much pain around the tooling to make me want to invest in Swift – especially because OCMock and OCMockito rely on introspection to make a dynamic fake of something. And Apple, as far as I know, hasn’t yet provided introspection for Swift, so you have to fall back on old-school stuff.
CHUCK:
So was it a total no-go then in Swift?
JOHN:
I don’t know, and that’s where I would ask Brian Gesiak, one of the main authors of the Quick framework – he’d be a good person to get on this show.
JAIM:
I mean, I've tried to do some stuff with OCMock in Swift, because with a purely Swift object, you're not going to get the runtime stuff that you need to do mocking. But we’re still dealing with a ton of things that are derived from NSObject, which have all that in there.
So in theory, we could stub things out, and if they're derived from NSObject, we could mock them and do all sorts of crazy things with them. I’ve had very little success trying to get OCMock working with Swift. I’ve tried, but generally, if I’m doing testing in Swift, I’m doing subclass-and-override all the time.
And the benefit is that Swift is much less intrusive for doing that. You can put it inside a class; you don’t have to worry about creating a header file and everything else to make that happen. So generally it’s a lot easier to do subclass-and-override, but I was hoping someone would figure out how to mock dynamic stuff in Swift. So far no luck; if anyone’s figured that out, let me know.
CHUCK:
So why don’t we go back to one thing that you said earlier, John, and that is that you typically approach your TDD from the inside out, and that you would like to do more from the outside in.
What do you see as the tradeoffs between the two?
JOHN:
I think with an outside-in approach, you're – I don’t know. It’s just that I haven’t done it that much yet, so I can't speak from personal experience. Usually I work inside out, and then I’ll add an outside-in test at the end. Like I said, I’m trying to maybe do more acceptance TDD – ATDD – by starting from the outside in. But I think one of the reasons I've not done ATDD is because I don’t want a failing test hanging around while I’m trying to create something; I suppose it would go in a separate target for acceptance tests. But I need things to be green before I can re-factor.
So, from an inside out perspective, like I said, I think it depends on strong design skills to be able to come up with those Lego building blocks that are actually going to work when you snap them together. And maybe if you're not as certain of what those blocks should be, outside in may be more valuable.
CHUCK:
Yeah. I've done outside-in, and the thing that I liked about it is that it tends to drive what the next thing is that I need to build. So I start at the very outer layer, and then it’s, “Okay, well, I need this information, or I need this side effect, or I need this other thing,” and so I work my way through it. Then I’m going to call on this object to do whatever, and so that’s the next thing that I TDD and get a unit test on.
But yeah, I can see that it stays red for a while, unless you start mocking and then pulling mocks in and out, and that can be painful. So yeah.
JOHN:
I think I’d probably keep it in a separate target so that I can have my failing acceptance tests stay red in one place, but have a nice green state from my TDD.
CHUCK:
Yup. Yeah, then when you get to, “Okay, what do I do now?” Then you go run that acceptance test and it tells you which piece you're missing next.
JOHN:
Uh-hm. Yeah, I think that’s something I’m going to work on, getting better at.
CHUCK:
Well this is fun. We’ve got into some fun areas, exploring different areas of testing. Are there things that you just plain don’t test?
JOHN:
Like I said, animation I think is the type of thing where the quality of it has to be felt whether it’s good or not.
CHUCK:
Any other thoughts that you have, John? Things that we should talk about or didn’t bring up?
JOHN:
Well, I’ll just get on my little soapbox for a second and urge people to try TDD. There is, to me, strange resistance in the developer community as a whole – a lot of skeptics. Maybe you’ve been told by management to do TDD because it’s going to make everything wonderful, and when you do it you discover it’s not easy and it doesn’t necessarily make everything wonderful, and so you say, “this sucks.” I would urge people to have another look.
CHUCK:
Yeah, totally.
JOHN:
That’s how I live; that’s how I sleep. Honestly I've been doing TDD for 14 years now, and it’s the thing that’s changed my programming more than anything else.
JAIM:
Yeah, it’s worth restating that. Learning to code this way is hard [crosstalk]. You start doing it and you’re like, “This makes no sense; how do you do this?” And you figure it out. And it takes a long time, so you have to put the work in to get the value out. If you just do a little bit you're not going to get that much out of it.
That’s just how it is, but that gets glossed over by the evangelists saying, “Oh, this will make everything perfect,” and it won’t. It just adds new headaches – but things you can learn. At the end of the road, things are better: your code is better, you can keep adding features – all things that are [inaudible] in respect to us.
JOHN:
But you have to get over that hump.
JAIM:
It’s true.
CHUCK:
Alright, well let’s go ahead and do some picks. Mike, do you want to start us off with picks?
MIKE:
Sure. I keep convincing myself that I have done this one before, but I searched around, I’m pretty sure I haven’t; maybe I discussed it.
Anyway, wit.ai is one of my favorite tools out there. I've never used it for anything practical, but it’s just so cool that I always like to introduce people to it. What it is, basically, is Siri in a box, and it’s got all the tools that you need to build a natural language response system. It will take speech or text, and you train it by example. So you’ve got a list of things that you want it to identify, and you just give it sentences.
Like you would say “turn on the lights” or “start the car” or “what’s the temperature today” or something like that. Then you manually categorize some of those examples, and it learns on its own from there. And it spits back easy-to-parse JSON telling you what it found, so it makes it really easy to take natural language and turn it into actions that your programs can take. And it’s a lot of fun to mess around with and see what you can do.
CHUCK:
That sounds really cool.
MIKE:
Yeah, it’s totally free to get started with. They have paid plans if you want fancy stuff but the free stuff would take you a long way.
CHUCK:
Alright. Jaim, do you have some picks for us?
JAIM:
Sure! I’ve got some picks. And I do the same picks every time we talk about testing. One is a screencast by our guest today, which has been around for two and a half years – seems like it was much longer ago – but it’s a great tutorial to get started with UI testing.
We talked earlier about testing that your nib is wired [inaudible], that your view controller’s set up. I remember thinking, I can’t even test that; I did not know how. And John has a screencast that gives just a really simple overview of how to do that, and that kind of turned the light on for me and allowed me to move forward and figure out new ways to test.
So I’m going to pick the UIViewController TDD screencast. I haven’t seen it in a while but I imagine it’s still good.
CHUCK:
Alright –.
JAIM:
Oh, I got one more. One more.
CHUCK:
Okay.
JAIM:
I also, every time, talk about –.
CHUCK:
You paused.
JAIM:
I know. You pause, you die; that’s how it goes in the podcast world.
I grabbed this excellent book by Graham Lee – Test-Driven iOS Development – which helps you go through how you test a view controller, a table view controller, or anything like that. So those are great resources if you're trying to figure out how to start writing tests. So those are my picks.
JOHN:
Great pick there.
CHUCK:
I’ve got a couple of picks. So on Saturday, my sister got married. She’s the eighth of ten kids. While we were doing that I wound up doing video and a lot of other things at her wedding. And I don’t have those high-end cameras that a lot of people use, so I was just using an iPhone. And I found a few tools that I wound up using for some of the stuff.
The first one is just a little clip that uses the same standard-size screw that tripods and things like that use. So I found a little clip that will mount on there and hold your phone in place, so you can put it on a tripod – or on my second pick, which is a little handheld camera stabilizer. It has a couple of counterweights on it so you can basically balance it out for your phone or your camera. And you can put fairly good-sized cameras on there and have it balance out.
But then as you hold on to the handle and you move it around, basically it stays level. And so you can move it up and down or to the side. You can get good panning shots with it and stuff like that. So I really liked shooting with that as well.
So I’ll put links to both of those in the show notes. And then if you're interested in that kind of video or videography with a phone or even with a nice high-end camera, then check those out.
John, do you have some picks for us?
JOHN:
Well, I’ve already blown one pick which was Michael Feathers’ Working Effectively with Legacy Code, which basically taught me how to do Martin Fowler’s refactoring stuff with existing code. So that’s one.
My other two picks are things that have already been mentioned previously on this show, but I’ll say them again to put in my vote. AppCode is a great IDE made by the same folks who make IntelliJ, so it brings all sorts of refactoring power, plus code analysis – it shows typos in your CamelCased variable names. Yeah, that’s awesome. [Chuckles]
So I still use Xcode for certain things, but for editing code, AppCode is my friend.
And my last pick is the Clean Code video series by Uncle Bob – Bob Martin – where he talks about all sorts of things including TDD. But in order to do TDD well I think you need to have a good grasp on design, and a lot of the show is about design principles. So that’s at cleancoders.com.
CHUCK:
Awesome. Well thanks for coming, John, it was really fun to talk about testing.
JOHN:
Yeah! Glad that this finally worked out. I’ve been wanting to be on this show for a while.
CHUCK:
Man, you make us sound famous or something. Alright, we’re going to wrap up the show
[Crosstalk].
JAIM:
We’re kind of a big deal. [Laughter]
CHUCK:
I didn’t want to say anything. Alright, well thanks for coming and we’ll catch everyone next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit cachefly.com to learn more]
[Would you like to join a conversation with the iPhreaks and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at iphreaksshow.com/forum]