PETE:
So what are we talking about this Tuesday morning?
CHUCK:
I'm not sure, but I think we should write a test for it first. [Intro Music]
CHUCK:
Hey everybody and welcome to Episode 3 of iPhreaks! This week on our panel we have Pete Hodgson.
PETE:
Hello from Butte Lake! [Ben laughs]
CHUCK:
Ben Scheirman...Butte Lake...
BEN:
Very well done. Very well done. [Laughs]
CHUCK:
Ben Scheirman.
BEN:
Hello from Houston!
CHUCK:
We also have Rod Schmidt.
ROD:
Hello from Salt Lake City!
CHUCK:
Sorry, Rod. I was looking at Pete's picture and I was like "No, I already said Pete". [Laughter]
BEN:
Yeah, for those who didn't get the joke we were looking at the transcription from last episode, or from episode 1. And --
CHUCK:
Did that get fixed?
PETE:
It got fixed, yeah.
BEN:
Okay. So originally, Pete said he's from Berkeley and it came through as Butte Lake, which I thought was hilarious.
PETE:
I was pretty -- I was looking for the transcript -- it's pretty hilarious how much my accent has caused issues. Whatever poor person or machine is doing that transcription is definitely challenged by my accent.
CHUCK:
We're really sorry to the transcriptionist.
PETE:
Yeah. [Laughter]
CHUCK:
We will pick our panelists more carefully next time.
PETE:
Oh! It's my fault, huh? [Laughter]
CHUCK:
Anyway...And you can tell I had to ask if it got fixed because I just asked Mandy to do it and assumed it was done.
PETE:
Yeah. No, she fixed it. She fixed it very very quickly.
CHUCK:
Yeah.
PETE:
And I'm used to that, living in the [inaudible]. When you call automated voice systems, they often don't work with a British accent, so I have to put on like a stupid American accent when I'm...operator! [Laughter]
PETE:
Reservations. [Laughter]
CHUCK:
It's funny, too, because a lot of times on those automated systems, they have somebody with a British accent, or a fake British accent, doing the speaking.
BEN:
Yeah.
PETE:
Yeah. But they didn't understand British. Siri didn't understand British English for a very long time because, if you lived in the US, you couldn't get British Siri to work with American information. So if I wanted to know anything about America, like where I live, I'd have to use the American version of Siri, but she couldn't understand my pronunciation. [Laughter]
CHUCK:
So is the British Siri more polite?
PETE:
The British Siri sounds ridiculous! Like the British Siri sounds like a serious Etonian toff, like "How can I help you?!" [Laughter]
PETE:
I'm your butler! [Laughter]
CHUCK:
Alright. Well, should we start talking about "Testing iOS Apps"?
PETE:
I think we should just spend the whole episode talking about my accent and Siri.
CHUCK:
Your accent is going to provide us with plenty of material over the next while.
PETE:
Yeah. Let's talk about testing instead, then.
BEN:
Why doesn't anybody do it?
PETE:
That's a good question! Actually, this is good timing for me because I'm giving a talk about testing with Kiwi -- which we'll get into, I'm sure, at some point during the show -- on Thursday, so in a couple of days' time; it should be, I guess, after the podcast comes out, at CocoaConf in San Jose. So I guess this is top of my mind anyway, which is convenient. Yeah, why don't people do testing in iOS? I guess I have a biased kind of sample set, because I talk to a bunch of ThoughtWorks-type agile folks who are fans of testing. And a lot of us are actually coming from the Ruby community, which is famously obsessed with testing. But I don't actually know -- do you guys have a sense of how many people in the iOS community test? No one really seems to; very few people test. What's --
BEN:
Yeah. In my experience, very few people do. I feel like, in general, people will agree that it's a good idea, but where the rubber hits the road, it's definitely more difficult than it is in Ruby to effectively test something. So, I don't know if you've ever looked at the tools: when you do File > New Project, there's a checkbox which says "Include Unit Tests", which is kind of a nod from Apple that people should pay attention to it. But that's about all they give us. Once you have the unit test project, it uses SenTestingKit, which is about as arcane a testing tool as I've ever used. And, I don't know if you've run into it -- you can test things, right? You can say, "I have a calculator, I'm going to add 2 and 7 and I assert that the value is 9". But if you do something like add two items to an array and assert that the count is 2, you actually have to cast the result of the array's count method to an NSInteger, because that returns an NSUInteger, an unsigned integer. Stuff like that, it's just like "Oh God, I don't want to be doing that"; the assertion macros are just really painful to write. Once you get beyond that first level of testing a calculator, you're like "This is not realistic".
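[Editor's note: a minimal sketch of the cast Ben describes, assuming an OCUnit/SenTestingKit test target; the test class name is hypothetical.]

```objc
#import <SenTestingKit/SenTestingKit.h>

@interface ArrayCountTests : SenTestCase
@end

@implementation ArrayCountTests

- (void)testAddingTwoItems {
    NSMutableArray *items = [NSMutableArray array];
    [items addObject:@"first"];
    [items addObject:@"second"];

    // -count returns NSUInteger. STAssertEquals compares type encodings,
    // so comparing it against a signed literal 2 fails unless both sides
    // are cast to the same type.
    STAssertEquals((NSInteger)[items count], (NSInteger)2, @"expected two items");
}

@end
```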
PETE:
I would say there's more friction, initial friction, in getting started than there is in a more dynamic language like Ruby. But the thing that I don't get is, compared to a language like Java or C#, I actually think that Objective C lends itself to unit testing more than those languages, because Objective C is actually pretty dynamic. It's got that Smalltalk heritage where you can just send an arbitrary message to an arbitrary object, and you can kind of change stuff at runtime and do things like that. So I actually think, in some ways, Objective C is a pretty good environment for doing unit testing, but I don't think culturally it's there. I definitely don't think Apple have a culture of testing; I kind of infer that from the fact that every time they release a new version of Xcode, they break the ability to run tests on the command-line -- almost every single time.
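[Editor's note: a small illustration of the runtime dynamism Pete mentions, using Apple's Objective-C runtime API; this is the kind of trick mocking and stubbing libraries build on, not code from any particular library.]

```objc
#import <objc/runtime.h>

// Swap two instance methods' implementations at runtime. After this call,
// sending `original` to an instance of `cls` runs the replacement's code.
static void SwapImplementations(Class cls, SEL original, SEL replacement) {
    Method originalMethod = class_getInstanceMethod(cls, original);
    Method replacementMethod = class_getInstanceMethod(cls, replacement);
    method_exchangeImplementations(originalMethod, replacementMethod);
}
```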
ROD:
Yeah, I was going to ask that. I've never heard of Apple having any kind of formal unit testing procedures that they go through or whatever.
PETE:
Well, actually, I guess we should take a step back for a second, and I'll channel Josh Susser and say "We should kind of get some definitions -- " [Laughter]
PETE:
I've been fantasizing about doing that for a while...
CHUCK:
Is it Josh you were fantasizing about?
PETE:
Yeah, yeah. There's different types of testing and I think, especially for people who haven't done a lot of testing, they kind of get a little bit mixed together in people's heads. But the two main divisions: there's low-level unit testing, which is kind of what we've been talking about just now; and then there's high-level acceptance tests or functional tests -- they have different names -- which are testing your code at a higher level.
CHUCK:
And sometimes you wind up with integration tests, which test two or more things at the same time.
PETE:
Yeah, it's kind of convenient to describe these two types, but actually, I think it's more of a spectrum, like you're saying: from very low-level testing of an individual method, to very very high-level testing of your entire application plus all of the services it depends on -- testing the entire world, pretending to be a user. I think most people use the word 'unit test' as kind of a loose term. Sometimes, people just mean any kind of automated test. I don't know; I think Apple don't do much toward unit testing, but they do a fair amount with high-level automated tests. They have this UI Automation tool that they ship with Instruments, and they have people whose business cards say things like "Test Automation Manager" inside of Apple. So they definitely take it quite seriously internally, but I'm not sure that they -- I know for a fact they don't use the same tools internally for that high-level UI Automation, and I don't think they actually use UI Automation internally for automated testing, for example.
CHUCK:
That's interesting.
BEN:
Also, you can't just have the one tool, right?
PETE:
Yup!
BEN:
The granularity really matters. If you try to write your whole test suite using something like UI Automation, you're going to be in for a really painful time. As you decide to change your UI a bit, or if you just rearrange things a little bit, or if your app is actually broken, you won't be able to pinpoint the area where it's broken, or half your tests are going to fail; that feedback loop -- the ability to pinpoint failures and know exactly what's broken -- is really important. It also has nothing to do with driving your design, except for the fact that some high-level pieces will be testable. So, you'll be able to set that world up in a way that you can get it into some state and test against that, but it just doesn't negate the need for a lot of fine-grained unit tests.
PETE:
Yeah. There's this thing called "The Testing Pyramid", and I'll try and find a link to a good article about it and put it in the show notes. But the idea is that, at the base of the pyramid, you have this broad, wide foundation of focused unit tests -- very small, focused tests. In the middle, you have the kind of integration tests that Chuck was talking about. And then at the tippy-top, the cherry on the top, are the acceptance tests, the high-level tests that you would use something like UI Automation for. But really, if you don't build on that foundation of unit tests, then you're not going to get any value out of the other stuff. It's really the unit tests that provide the most bang for your buck in terms of feedback, which I think is the main thing that testing gives you.
CHUCK:
Yeah. I think it's interesting, too, that in my experience anyway, the cost of writing a unit test is usually much much lower than writing an integration test or an acceptance test.
PETE:
Yup, definitely. The cost of maintaining a unit test suite is way less than the cost of maintaining an acceptance testing suite.
CHUCK:
Yeah. And, like you said, there's a lot of value in having those unit tests because they'll tell you exactly which piece of code is broken.
PETE:
Yup!
CHUCK:
The other pieces though, are important for just overall knowing that things work the way they should. I mean when it comes right down to it, there are points of failure that are hard to test or hard to isolate. So, having something at the top-level that goes all the way down the stack and all the way back up is something that's really really nice just for knowing that your application works.
PETE:
Yup! And particularly, if you're a fan of mocking and stubbing things, then you need some kind of high-level tests to make sure that all the different parts of your application are actually talking to each other properly. Because if you're faking out the world, your tests won't give you feedback when the world changes. Let's say your app uses the Twitter API: if you're faking out all of your interactions with the Twitter API, and then Twitter update their API, and you don't have a test that actually hits the real-world Twitter API, then you won't know that things are broken until you start getting 1-star reviews in the App Store. That's kind of a crappy example because Twitter aren't going to change their API without telling anyone, but --
BEN:
Actually, they do [laughs].
PETE:
Oh, okay [laughs].
BEN:
Yeah, they do. I heard about that on Core Intuition a couple episodes back, where they just up and changed something for a bunch of apps that had worked a long time, and the developers didn't know about it until the users complained.
PETE:
Yup!
BEN:
But yeah, your point is completely valid. I really like the tool VCR in Ruby, where you can say "make an outgoing request as long as I don't have a canned response already". And then it will check into your repository a cassette that has the response status code, the response headers, and the body. The subsequent requests are then fast because you have that response, but at any point in time you can just, say once a month, delete all of your cassettes and have it run against the real APIs again.
PETE:
Yeah! And there are similar tools -- I've seen at least one or two tools similar to VCR for Objective C. Especially if you're using AFNetworking -- I don't remember the name, I'll have to look it up, but it's like AFNetworking VCR or something; it's probably got some cute name like that. But yeah, there's a bunch of tools out there. And the other alternative to faking that stuff out inside of your process is to set up a fake server that pretends to be Twitter and that is under your control. On a lot of ThoughtWorks projects we end up doing that, because a lot of our work is integrating with big enterprise systems that have all of these services all over the place. So a lot of times, we end up building a fake version of the backend services that we integrate with, so that we can test the way that our stuff interacts with all of those backend systems.
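[Editor's note: a minimal sketch of one way to serve canned responses in-process without a third-party library, using Apple's NSURLProtocol hook; the class name, host check, and fixture file are hypothetical.]

```objc
#import <Foundation/Foundation.h>

// Intercepts requests to the Twitter API and returns a stored response
// instead of hitting the real network -- a hand-rolled, VCR-flavored fake.
@interface CannedTwitterProtocol : NSURLProtocol
@end

@implementation CannedTwitterProtocol

+ (BOOL)canInitWithRequest:(NSURLRequest *)request {
    return [request.URL.host isEqualToString:@"api.twitter.com"];
}

+ (NSURLRequest *)canonicalRequestForRequest:(NSURLRequest *)request {
    return request;
}

- (void)startLoading {
    NSData *body = [NSData dataWithContentsOfFile:@"canned_timeline.json"];
    NSHTTPURLResponse *response =
        [[NSHTTPURLResponse alloc] initWithURL:self.request.URL
                                    statusCode:200
                                   HTTPVersion:@"HTTP/1.1"
                                  headerFields:@{@"Content-Type": @"application/json"}];
    [self.client URLProtocol:self
          didReceiveResponse:response
          cacheStoragePolicy:NSURLCacheStorageNotAllowed];
    [self.client URLProtocol:self didLoadData:body];
    [self.client URLProtocolDidFinishLoading:self];
}

- (void)stopLoading {}

@end

// In a test's setup: [NSURLProtocol registerClass:[CannedTwitterProtocol class]];
```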
CHUCK:
I want to change tactics a little bit; we've kind of been talking about some of the mechanics of testing. For people who are kind of on the fence, or don't believe that there is enough value in testing: why do we test our code? Why is it worth it?
PETE:
My take is: the first thing that should sell you on unit testing -- those other things are things you get after the fact, as you start doing it more -- the first piece of value that you get from your unit tests is feedback as you're making changes. So when I write software, I hit ⌘B all the time to make sure that my code compiles. I do that all the time because if I forget a square bracket or leave off a semicolon or whatever, I want to get feedback straight away. If it was 5 seconds since the last time I hit ⌘B, and I've introduced a bug during those 5 seconds, it's pretty easy for me to figure out where in the code that bug is. And in this case, the bug is just a forgotten semicolon or whatever. I think all of us have that habit now of frequently compiling our application to get feedback from the compiler on the syntax of our code. And then, especially since Apple introduced the new LLVM and Clang toolchain, the compiler started to get really really smart and actually knows about the semantics of your code as well. So, it will say "Hey, I think you need to autorelease this object", or "you're sending a message to an object that doesn't know how to respond to that message". So, that's a higher level of feedback, and again, it's really really useful. When I think of the name of a method and type it wrong, I want to know straight away that I've done that so that I can fix it straight away, rather than waiting a day and then having a day's worth of changes and not knowing which change caused the error.
CHUCK:
So basically, what you're saying is you like the tight feedback loop just for the sense that you're in the same headspace as the code when it tells you it's broken.
PETE:
I think of it as search space. If my code stopped working and the last time it was working was a day ago, I've got a day's worth of changes to search through. If my code stopped working and the last time it was working was two minutes ago, then it's easy for me to think about which files I have changed in the last two minutes -- which things could possibly cause this breakage. And to me, unit testing is the next level up after that. It's like a better compiler; it's like compiler++. It's not just checking the syntax of my code, it's also checking that my code actually does what I expect it to do. Once you've got that feedback and that safety net underneath you, you start being able to take code that you know is working, and you have this kind of courage to improve it immensely, because you know that you can hit ⌘B and ⌘U to build and test stuff and get feedback that you haven't broken anything. I kind of describe it as this safety net that allows you to do amazing acrobatics. So, a developer or a team that has a good safety net of testing underneath them has the courage to continually improve the quality of their software, rather than have it kind of degrade slowly over time.
CHUCK:
Yeah. The only other thing that I would really add is that a lot of times, there are things that other developers do in the code. And so, if they can assert that their code works the way that they expect, it's not just about improving with refactoring, but improving by adding new features. So if I add a new feature and I touch something that somebody else had in there -- because I think I understand the code and I think I understand what it's supposed to do -- if I'm wrong, it'll tell me.
PETE:
Yup! So the tests kind of become a form of communication amongst different people on the team. Actually, let's be honest, most of the time they're communicating with future you. It's past you saying "Hey, future you, you did this wrong".
CHUCK:
Yeah. I was going to say, sometimes people put really dumb stuff in the code so you have to go fix it. And usually, those people are me a week ago.
PETE:
Yup!
BEN:
I can't count how many times I've run git blame on a line of code. [Laughter]
BEN:
Who wrote this? And then I see my name and then I just quietly don't answer. For me, the testing -- I totally agree with Pete. You get that feedback and protection against future breakages, and it really enables refactoring rather than giving you that fear of "I want to make this change, but it's too risky; it's too big of a leap from one lilypad to the next", and so you need a way to tell whether the app is broken. The other aspect of this that I'll mention, just because I've been working on a multi-year iOS project: we have some really good testers. They are actually just poking around on the app, doing things that no normal human being probably would do, trying to find ways to break the application. And when something just plain outright doesn't work -- like you click on a button and it doesn't do what we thought it would do, or it crashes -- I don't want them spending their time on that; that is just a complete, utter waste of human talent. They should be finding the much trickier issues, the things that are very hard for us to test in an automated fashion, like starting to play some music and getting on an elevator, things like that; that's where I really see the value in having a good QA team. But I don't want them having to spend their time on things that we can catch ourselves, where we can write a test once and continue to catch that behavior and make sure it's working; because the alternative is to test the same thing every single time you're going to go to the App Store, and that gets really old really fast.
PETE:
And there's organizations that do that, larger organizations that can [inaudible] money at the problem; they just have a bunch of manual testers testing the same thing over and over again. But if you're not a company with more money than God, then you should be thinking about how you use your resources. And using the huge smart brain of a QA person to just follow a script is a total waste of a smart brain; if a computer can do it for you, then why would you have a human do it? Humans are not really good at doing the same thing over and over again.
CHUCK:
I also want to ask you guys, do any of you practice TDD?
PETE:
Never heard of it. [Laughter]
BEN:
I'm definitely in that camp in the Ruby community. I don't think I'm very good at it, but I continually try because I feel like it makes me a better developer. I've done it a few times, just in isolated practice, on iOS, and it is certainly a lot harder to do. But the reason why I keep coming back to it is, when I abandon that mindset -- I'm never comfortable not writing tests -- so when I abandon TDD and I go off on this sort of coding spree, like "I know what I want to do, I just want to get there as fast as I can", that's kind of rewarding, but that code then is untested and I don't feel safe about it. And when I go back and try to test it, it's almost always harder to go back and test that thing; and the longer I go, the more anxiety I have about my own codebase breaking. And so, sometimes I will stop and say "Okay, if I really do this in a test-driven way, how would I structure these classes differently?" -- like making something configurable via... I don't know if I should take a side step and say: there's a thing called Dependency Injection, which you'd typically have to do in C# or Java. To say "I'm going to talk to this API", I have this API client and I want to inject a fake version of it so I can test. You would typically take it as a constructor arg and pass in something that looks like that API class, but is something different. You don't necessarily have to do that in Objective C. You could just make that a factory method on your class and then stub it in your test to return something else. So, you can provide these hooks, and it doesn't necessarily mean that you have to invert your entire design. And if you're doing it test-first, those trade-offs come to the surface more quickly. But I can't say that I do TDD on iOS, just because I find it a little bit too difficult.
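[Editor's note: a minimal sketch of the two approaches Ben contrasts; the class names (APIClient, ProfileViewController) and the Kiwi stub call at the end are hypothetical illustrations, not code from the project discussed.]

```objc
#import <UIKit/UIKit.h>

@class APIClient;

// 1. Constructor injection, Java/C# style: the dependency is passed in,
//    so a test can pass a fake.
@interface ProfileViewController : UIViewController
- (id)initWithAPIClient:(APIClient *)client;
@end

// 2. The more Objective-C-flavored alternative Ben describes: a factory
//    method that production code leaves alone and a test replaces.
@interface ProfileViewController (Dependencies)
- (APIClient *)apiClient; // default implementation returns the real client
@end

// In a Kiwi test, stub the factory method instead of inverting the design:
//   [controller stub:@selector(apiClient) andReturn:fakeClient];
```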
PETE:
I think there's a spectrum there as well, from religious, dogmatic TDD, through to, like you were saying, thinking about how you would test this and letting that help drive your design; and then doing tests alongside, so you write some code then write some tests -- you're not doing test-first, you're doing test-at-the-same-time or something like that. And then there's test-at-some-point-after-you've-written-the-code, and then there's don't-test-at-all. I think, even if you're not doing dogmatic TDD, even if you're just thinking as you're writing the code, "When I go to test this, how am I going to test it?", it can still help drive your design, which is what Chuck was [inaudible]. TDD, writing a test before you write your code: one of the big values there is it can actually help drive your design to a better place, because you're forced to compose your code or write your code in a way that's decoupled and kind of isolated. And responsibilities tend to be in the right place because, basically, you're writing all of your code from the point of view of the client consuming that code, which tends to help you write code that's focused and has clear separations of concerns.
BEN:
Yes. And speaking of that, UIViewControllers are not the place to put your entire application. That's part of the reason why I find it's difficult to test, especially when you're working on a project where either you're working on a team, or you're working with people who aren't exposed to this philosophy of design, of separating out concerns. Almost every sample app you find, or guidance from Apple and things like this, you'll find code that deals with CoreLocation in your ViewController; you'll find networking code directly in your ViewController. Some of that stuff is okay, I guess, if you have a very simple interface to what you expect to get out of these frameworks like CoreLocation, but it makes it very difficult to test that thing, because you depend on this thing that's kind of a living, breathing beast that has callbacks and can fail, and you have to simulate that in your environment. If you can extract that type of stuff out into a much simpler, much narrower API -- like "give me the location, and here's a block that will tell you what the location is, or an error if it failed" -- if you can extract something out that's that simple, then it becomes easier to test, and you have one thing that you can snip out in a test and isolate, rather than seven delegate methods.
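[Editor's note: a sketch of the narrow wrapper Ben describes; the LocationProvider name and block signature are hypothetical.]

```objc
#import <CoreLocation/CoreLocation.h>

typedef void (^LocationHandler)(CLLocation *location, NSError *error);

// Hides CLLocationManager's multi-method delegate dance behind one call.
@interface LocationProvider : NSObject <CLLocationManagerDelegate>
- (void)currentLocation:(LocationHandler)handler;
@end

// A ViewController that depends on this one method is easy to test: a fake
// LocationProvider can immediately invoke the block with a fixed location
// (or an error), with no delegate callbacks to simulate.
```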
PETE:
So I would argue that that is exactly why people should test-drive their ViewController code, which is pretty extreme. And if someone listening to this podcast hasn't done that much unit testing: please don't start by trying to test-drive [Laughter] your ViewController code, because you're going to tell all your friends how awful unit testing is and how it doesn't work. But I've been on projects, and I've definitely talked to other ThoughtWorks folks who've done that -- test-driving not just ViewController code, but actually test-driving your nibs and how your View hierarchy is constructed. I'm not sure that test-driving the building of your View hierarchy actually makes sense, but test-driving how your ViewControllers interact with the rest of the universe -- it doesn't force you, but it's a design pressure that pushes you towards doing a lot of the stuff Ben was talking about: not lumping everything into delegate methods, and extracting out responsibilities. It's hard to test stuff in the ViewController, and that's because the ViewController should be the interface between the presentation layer, which is hard to test, and the rest of your system. If you try and put the entire logic of your system into that interface, then it's hard to test. And that's the test giving you good design feedback that your code is in the wrong place. So I think you should be testing ViewControllers to force you to write better code.
CHUCK:
I also want to just jump in here and let people know that the skill of testing and the skill of test-driving your code are both things that take practice. So if it's hard to begin with, it takes practice to figure out when it's hard because you're doing things in a way that makes it hard, and when it's hard just because you don't yet have the habits or the practice.
PETE:
I think unit testing is the hardest thing I've ever had to learn in software. I've been doing it for probably going on 10 years and I still find it really hard; I find it harder than anything else I've had to learn in software. So I think people shouldn't be disheartened when they find it hard and it feels like a struggle and it takes a long time; it does take a long time, but it's worth it.
BEN:
It's rewarding.
PETE:
Yup!
CHUCK:
Yup! Absolutely!
ROD:
I think what Pete said about the UI being hard to test is the answer to the question "Why does no one test on iOS and Mac?"
PETE:
Yeah.
ROD:
Because historically, there's so much focus on the UI in iOS applications and Mac applications. And, like you said, UI testing is hard, so that's why no one does it. [crosstalk]
BEN:
These are smart-client applications; UI testing is different from testing single-threaded web server code, or some library components and things like that, which have clear inputs and outputs that are so easy to test. You just stub out the things you don't want to talk to, you make some assertions, whatever. That stuff you can learn; you just pair with somebody who has done it before and they'll be able to get you right along. But the things that become difficult are: I'm going to call this method, passing a block for when it's going to call me back, and then you need to make sure that the block was called with its specific arguments. That testing paradigm is much, much more complicated than testing inputs and outputs. Right?
PETE:
Yeah. I think both of those things are true. And I also think that maybe Objective C isn't the place to hone these skills or to first learn them. Or, let me rephrase that: I think writing an iPhone app is quite a hard place to learn some of these things. But I actually think people would be surprised, if they do test-driven development on their iPhone app, how much stuff isn't ViewController code, or how much stuff isn't presentation-layer code. It feels like it is when it's all in one place, but once you actually pull it out, it turns out a lot of it is application logic that you can test in isolation.
ROD:
Right.
BEN:
So, do we want to get into some recommendations? Because I made the claim that I really hate SenTestingKit, and we've talked about mocks and stubs and how to isolate things, but we haven't dug into that yet.
CHUCK:
Yeah, I was going to ask what tools you guys use and what tools you recommend? And maybe, we should get started by talking about the tools that are provided by Apple.
BEN:
So the SenTestingKit tools: it's another target that you have in your project; it's a test target. When you hit ⌘U, it will run the tests associated with the project, and it has no way -- or no easy way -- of isolating which tests to run; it just runs them all. And it will, kind of like a build failure, jump straight to the line that caused the problem and tell you what the problem is. SenTestingKit is also called "OCUnit", but in the Xcode UI, you'll almost always see it referred to as "SenTestingKit". There's no easy way to run these tests from the command-line; there's multiple ways you can do it, and all of them have some drawbacks. My current method, which is not that great, is to remove the Test Host flag in your build settings, which will prevent the UI from popping up when you run your tests. But as such, you don't get to run any kind of UIKit-related code in your tests. If you're just testing logic for components that are decoupled from UIKit, then that's one way you can do it and get output on the command-line.
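[Editor's note: a minimal sketch of the kind of Xcode 4-era command-line invocation this workaround enables, assuming a logic-test target with its Test Host setting cleared; the target name is hypothetical.]

```sh
# Build the test target against the simulator SDK; TEST_AFTER_BUILD=YES
# asks Xcode's RunUnitTests script to run the OCUnit tests as part of
# the build, dumping results to the console.
xcodebuild -target LogicTests -sdk iphonesimulator \
    -configuration Debug TEST_AFTER_BUILD=YES clean build
```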
PETE:
And there are other ways; there are ways that you can run -- so, there's these two types of tests that Ben was just talking about: "application tests" and "logic tests", I think those are the two terms that Apple uses. Logic tests don't interact with UIKit, so they don't actually need to run in the simulator; they can run as regular processes, outside of the context of the simulator. Application tests are kind of more full-stack; they involve the UI and they have to be run on the simulator, which makes them a little bit harder to run in the context of the command-line. But there are ways to do it. My very very smart -- much smarter than me -- colleague, Stew Gleadow, has got some good information on this, which I'll have to dig up, because he spends a lot of time trying to get things to work in this area. And I think Eloy Duran, the guy that maintains CocoaPods, has a way of doing this so that you can run your tests on Travis, which is a whole other topic: Continuous Integration.
BEN:
My script is based on his, and I took his -- it was much more difficult to adapt, I guess, to other projects, because he was bypassing all of the built-in... So, there's a RunUnitTests shell script that's built into your Xcode installation. That makes all kinds of assumptions, and it also has a warning that says "Unit tests are not supported from the command-line", or something like that, which is completely false. You can change that warning to just let it continue, and then it will actually work from the command-line. And so, I think what he did is just take the pieces out of that which run OCUnit directly -- or OCTest (I forget the binary's name). But anyway, there's definitely a way to do this. The point is that Apple should be providing us this stuff in a much more easy-to-use fashion. Instead of hacking their scripts or bypassing them entirely, it would certainly be nice if they just had a supported way of running tests from the command-line.
PETE:
Absolutely.
ROD:
Right.
PETE:
And I think it's a cultural thing. Actually, I'm pretty convinced that it's one guy at Apple on the Xcode team, a very smart engineer, who's been maintaining this script [Laughter] -- I mean, it is just one guy. And if you search for any of the internals of that script, his LiveJournal is the one that pops up, from like 2006. It frustrates me a little bit; I have to stop myself from starting to rant about this stuff. But yeah, it is definitely true that Apple just don't seem to really see the value in running this stuff as part of a Continuous Integration pipeline.
CHUCK:
So, since you brought up continuous integration, is there a good way to run tests in continuous integration?
BEN:
Yeah. I think the command-line mode is the first hurdle; you have to figure out "Okay, how do I run these from a script?" And once you solve that problem, then you're golden. It's just that, with each version of Xcode, like you were saying, they may change something which makes them not run anymore. It's difficult to find the output of your tests, because the build generates gobs of general output and you kind of have to scroll up and find the results. The script I'm currently using runs through Ruby, and it uses this 'colored' gem to color each line of output. Basically, it just does a regex on each line, and if it thinks it's a test failure, it colors the line red. And so then I use that to really quickly jump to what the failure is, and I just run that from the command-line.
PETE:
There's other kinds of scripts and tools out there to do stuff like "code coverage". Code coverage is basically trying to find out how much of your application code is executed as part of a test run, so that you have some kind of rough idea of which parts of the application are covered by tests and which bits aren't. And there's ways that you can hook into the test run to get back code coverage. It's not super well-documented, particularly because a lot of the tools that worked for GCC stopped working when everyone moved over to LLVM, the new compiler toolchain. But you can get stuff like code coverage, and actually, if you're coming from a Ruby background, the quality of those metrics is probably better than in a lot of Ruby applications. But I think, [inaudible] people who are getting started: the ability to run this stuff on the command-line is a great first step. I encourage people to not get hung up on tools like a Continuous Integration server. Your first step is just to get this feedback in some way. And if getting that feedback means making sure you run all of your unit tests from the command-line before you check in your code, then that can provide most of the value that a CI server provides without having to set up a CI server. You don't have to do all of this stuff at once; you can ease your way into the brave new world of continuous integration.
CHUCK:
Yeah. The other thing you can do is set up a "git hook". Since Mac OS is based on BSD, the Unix kind of stuff, most of these command-line tools return an exit code -- a zero if they run properly, and if there's some error while running, a nonzero code like one or something. And so, because of that, you can set up a git hook, and when you try to commit or try to push, it can give you feedback then. I've done it before; it's kind of painful if the tests take more than a couple of seconds. But that is one thing you can do, so it just won't let you push until you're getting green across the board.
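[Editor's note: a minimal sketch of the hook Chuck describes; run_tests.sh stands in for whatever script builds and runs your suite from the command line, so both names here are hypothetical.]

```sh
#!/bin/sh
# .git/hooks/pre-commit -- must be executable (chmod +x).
# Runs the test suite before every commit; a nonzero exit status
# from the test run aborts the commit.
./run_tests.sh
exit $?
```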
BEN:
Right. So, there's some other tools out there. One of them is "GHUnit", which aims to be a complete replacement: you don't use the SenTestingKit stuff, for all the reasons we've talked about. The way they did it is, you have a library for doing assertions -- this thing should be equal to that -- and they also have their own test runner, which runs in the iPhone simulator; it runs through all the tests and gives you green or red output, which is kind of nice when you're looking at it. But ultimately, there's some value in staying on the Xcode train, as long as they keep this thing working. And I use the tool called "Kiwi"; it maps closely to [inaudible] back in Ruby. So, you have what are called "describe" blocks, and "context" and "beforeEach" blocks, and "it" blocks, and things like these -- all block-based. It allows me to set up an environment and say "When I'm in this state, then I expect these things to happen". And Kiwi includes pretty powerful mocking and stubbing support, and more of a fluent interface, or fluent (what's the word I'm thinking?) language, for asserting things. So instead of saying "STAssertEquals this value, that value", you actually have methods that say "my string should equal this other string", and it reads a little bit better than the assertive syntax of the STAssert macros.
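[Editor's note: a minimal Kiwi spec sketch showing the block structure Ben describes; the Calculator class under test is hypothetical.]

```objc
#import <Kiwi/Kiwi.h>
#import "Calculator.h" // hypothetical class under test

SPEC_BEGIN(CalculatorSpec)

describe(@"Calculator", ^{
    __block Calculator *calculator;

    beforeEach(^{
        calculator = [[Calculator alloc] init];
    });

    context(@"when adding", ^{
        it(@"returns the sum of two numbers", ^{
            // Scalars are boxed with theValue() so 'should' can compare them.
            [[theValue([calculator add:2 to:7]) should] equal:theValue(9)];
        });
    });
});

SPEC_END
```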
PETE:
There was also another very similar tool that actually predates Kiwi, called "Cedar".
BEN:
Cedar
PETE:
Yeah, I think Adam Milligan is the guy, from Pivotal. Nowadays, I encourage people to use Kiwi because it's a little bit easier to set up -- [crosstalk]
BEN:
It's also built on top of SenTestingKit. All of the examples that you create in Kiwi are, under the hood, powered by SenTestingKit. So, you hit ⌘U in Xcode and they just run, and it jumps straight to the line where the problem was. That's the type of stuff that you're not going to get out of something like GHUnit or Cedar. In general, Kiwi is just --
PETE:
I think that's really really valuable actually. I think it's worth it.
BEN:
Yup.
PETE:
Like I was saying, I think it's all about feedback, and just getting that feedback in your IDE is a really compelling advantage of building on top of the Apple stuff.
CHUCK:
So effectively what you're saying is that in Kiwi, you're calling something like joe.should == fred, and it translates that to an STAssert or something like that...
BEN:
It's close. If you squint --
PETE:
It's Objective C-ish... [Laughter]
PETE:
It's Objective C, so it's like: pick a random punctuation character, or possibly a white space -- that's actually my pet peeve about these internal DSLs built in Objective C; Objective C is a horrible language for DSLs. It's a great language in lots of other ways, but yeah, if you squint hard enough it does kind of [inaudible].
BEN:
One thing you can do in Objective C is attach a method onto any object just by creating an NSObject category. Once you include that category, you can call the method as if it existed on the object; "should" is one of those things. So, anything you have -- a date, a string, whatever -- you can call "should" on it, and then you have to close the square brackets because you then need to send a message to whatever "should" returns, which is some sort of expectation receiver.
CHUCK:
Okay.
BEN:
I don't know... if you've never designed a DSL, maybe none of these steps are going to make any sense, which is kind of why it's hard to write if you don't understand what's going on. But anyway, "should" will return something like an expectation receiver, and then you say it should be equal to, or should be less than, or greater than, or after this date, or before this date; and the implementations of those matchers will raise a specific exception if they're not met. Where that breaks down is, if you have something like a count, which comes back as an integer, and you want to call "should" on that: you can't do that in Objective C because it's just a scalar value; it's a value on the stack, there's no methods on it. So, you have to wrap it. There is a macro called "theValue" that you can use: pass in your value, whatever it is; it could be a float, a double, or an integer. And then that wraps it in an NSValue, which can receive all of these dynamic selectors. So say, the value 5 should be equal to the value 6, or whatever, and then that would fail.
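[Editor's note: a short sketch of the object-versus-scalar distinction Ben explains, assuming Kiwi; these lines would live inside an 'it' block like the spec sketch above.]

```objc
NSString *name = @"Chuck";
NSArray *items = @[@"a", @"b"];

// Objects can receive 'should' directly via the NSObject category.
[[name should] equal:@"Chuck"];

// Scalars like NSUInteger live on the stack, so they're boxed with
// theValue(), which wraps them in an NSValue the matchers understand.
[[theValue([items count]) should] equal:theValue(2u)];
```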
CHUCK:
Ahh! Got it!
BEN:
Yes. It's kind of painful.
PETE:
Yeah, that's my theory with things like Kiwi and these internal DSLs: if you actually have to think about how the DSL is implemented in order to use it, then I question the value of the DSL. Because, yes, it might be easier to read, but then you're stumbling over "Oh, wait! So, this thing returns a message receiver, so now I need to... Oh, wait! This is a primitive value, so I need to box it with theValue". I think it's great; it's awesome magic, but it also maybe makes it a little bit intimidating for people who are just getting started, because the --
BEN:
I totally agree with that.
PETE:
You're not just learning concepts in testing and concepts in mocking; you're also learning the deep dark magic of Objective C, with method swizzling and categories, and stuff like that. It's kind of learning everything at once --
BEN:
One of my gripes about plain assertion syntax, and this is in any language: they'll say "assertEqual" and then there's the parameters to it; two of them are values, and one of them is a description. My gripe there is the description; it should be optional. Based on what was passed in, it should be able to say "I expected 5, but I got 6", or something like that. But the order of those parameters -- which one is the expected value and which is the actual value -- matters. When you do it wrong, it'll say "Expected 6, but I got 5", and that changes how you're going to debug the test, or fix the problem, or whatever it is. You need to really understand what the message is telling you. I find that I get confused about which value is which when I'm doing the assertion style, but that's probably because I've been using RSpec for so long that I prefer the wordier, punctuation-heavy version.
PETE:
I like it, too. I definitely prefer using Kiwi to using unit testing style stuff, but it just...yeah, go ahead.
ROD:
If you don't like the Objective C syntax of Kiwi, an option to consider is RubyMotion. You can test Objective C code in RubyMotion and integrate Objective C code into RubyMotion. And they have an RSpec-like testing framework called "Bacon", and they had an article on how to do that.
PETE:
That sounds really interesting.
ROD:
That's an alternative to consider.
BEN:
Sounds very tasty.
CHUCK:
That is interesting...and I love bacon. [Ben laughs]
PETE:
That's another contribution from Eloy Duran, actually, I think.
BEN:
Yup.
PETE:
To write Bacon...
ROD:
Yeah.
CHUCK:
So, we've kind of talked about a lot of these tools for unit testing. I know that Pete has written a testing framework that goes a little beyond that, called "Frank".
PETE:
Yup.
CHUCK:
Do you want to talk about Frank for a minute?
PETE:
Sure! So...
BEN:
You have 3 minutes! Go! [Laughter]
PETE:
How long do we have? So, Frank is one of that class of tools that -- when I was talking earlier about the spectrum from unit testing to acceptance testing -- Frank is kind of more acceptance testing. If you're familiar with the web testing world, it's pretty similar to "Selenium", or something like that. It drives your full application using the UI, so it simulates user interactions rather than calling methods on classes; you're actually simulating the user. You drive the UI and then make assertions about the state of the UI. A trivial example would be, you want to check your login validation stuff works. In order to do that, you say "Type my username into the username field; type my password into the password field; tap the login button", and then "I should be on the homepage", or "I should see a message saying 'Invalid password'", or whatever. So, it's that really super high-level thinking about things from the point of view of the user, rather than thinking about implementation. And Frank is one of these tools -- I think it's a pretty good one; obviously, I'm pretty biased there. So Frank is one option; the official Apple tool is UI Automation. And there's a bunch of tools that extend or build upon UI Automation in different ways, so there's... Jeez! I can't remember the names of all of them. There's a thing called "Zucchini", and there's a thing called... I can't remember the name, but it adds some nice JavaScript, and I think maybe CoffeeScript, tools to UI Automation. I'll have to remember the name, though.
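[Editor's note: a sketch of what Pete's login-validation example might look like as a Frank-style Cucumber scenario; the exact step phrasing varies by project and by the step definitions in use, so treat the wording here as hypothetical.]

```gherkin
Feature: Login validation

  Scenario: Rejecting an invalid password
    When I type "chuck" into the "Username" field
    And I type "wrong-password" into the "Password" field
    And I touch the "Log In" button
    Then I should see "Invalid password"
```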
BEN:
I know what you're talking about. I'm drawing a blank as well.
PETE:
It's so funny! This is the second time this has happened to me when I've been describing these tools; I blank on this name. Anyway, there's a bunch of other things, like a new tool called "Appium", which implements the WebDriver wire protocol -- WebDriver is the new name for Selenium, basically -- so you can use a lot of the infrastructure you use for testing a web application to drive UI Automation; Appium is an interesting one. Of these different tools that [inaudible], there's "Bwoken", which is from Bendyworks. That, I think, is one of the ones that builds on top of UI Automation, if I remember correctly.
BEN:
Yes.
PETE:
And then on the other side of it, there's things that kind of reimplement UI Automation; Frank is one of those. There's a tool from Square called "KIF", where you write your tests in Objective C -- your acceptance tests in Objective C, the high-level ones. Oh, and there's "Calabash", which is a tool that is very very similar to Frank. It was kind of based on the same architecture as Frank, and it also has an Android implementation, too. So yeah, there's a whole ecosystem out there.
BEN:
I spent a fair amount of time with KIF, and I really liked it, the end result. We have a Mac mini that runs Jenkins; it runs all our tests when we check in code. And KIF would actually launch the simulator and actually tap on stuff, like Frank does. And over time, it actually helped us beef up our accessibility, because that's how it gets its hooks into your app -- that's how it knows what buttons to tap and --
PETE:
That's pretty much -- almost all of these tools have that same property, and they help you improve your accessibility.
BEN:
I found it really difficult to maintain this over time, though, and it would kind of break for interesting reasons, to the point where the team stopped having faith in the validity of a test failure, which is a bad thing to have in a test suite. You want to make sure that when it breaks, everybody is looking at it. So, I ended up slowly but surely turning those off. And I have grand plans for introducing something like Frank in the future -- something that we will pay closer attention to, so that when tests fail, people actually care.
PETE:
That's like the golden rule with these high-level acceptance tests: lots of tests that fail are worse than no tests at all. I honestly believe that they're worse than not having any tests at all. So, I think it would actually be better to just delete all of the tests apart from the ones that are passing, and then focus all of your energy into keeping those valuable tests passing, rather than trying to add more tests that sometimes fail. This is a lesson that a lot of teams learn over time. When you first start using these tools, it feels like you're getting so much coverage so easily that you kind of go to town and write a bunch of tests. And then, 6 months later, you're really feeling the pain of maintaining these things, and people aren't paying attention when they break. At the point that you're not paying attention when this thing breaks, you've lost all the value of the feedback. So yeah, I think it's really really important to have a small set of valuable tests rather than a big set of kind-of-valuable tests. The other thing that irritates me a little bit about KIF is, they reimplemented everything from scratch. So, they're not just doing the high-level touch automation and introspection stuff; they also wrote their own test runner and all of the stuff that Kiwi does -- they reimplemented it. I recently wrote a blog post, which I'll have linked, showing how you do the exact same kind of testing you do with KIF, but instead using Kiwi plus a library called "PublicAutomation", which is kind of the underlying core of how Frank works. And that library just uses Apple's private framework -- Apple have this private framework called "UIAutomation", which is how UI Automation, the tool, actually does touches into [inaudible]. And me, plus a couple of smart folks on the Frank mailing list, figured out how to kind of unofficially get our hooks into this private framework; and I wrote this library called "PublicAutomation", which is a thin wrapper over this private framework. You can use that to do the same stuff that KIF does, but you're using Apple's own library to do it rather than redoing it from scratch, and you can use kind of best-of-breed... I hate that phrase... [Laughter]
PETE:
You can use the popular tools out there, like Kiwi, to do the organizing of your tests and reporting on your tests and integrating into CI. I think that's a better approach -- the Unix philosophy of small tools that are focused on one thing, rather than a big tool that tries to reimplement everything from scratch. Like, why should the KIF guys have to maintain a test runner when the Kiwi test runner works fine?
BEN:
Yeah. One of my big beefs with KIF is -- I'm trying to remember the specific change, but there was something that I needed, and it was causing my app to not be testable until I had this change. I go out there and look, and some guy has a pull request for it, and it was exactly the right fix, and they said "Everything looks good; we'll merge this as soon as you sign this Contributor License Agreement". [Chuck laughs]
BEN:
And he sent it to his company's lawyers, and they said no, they were not going to sign that. So he said "Sorry, I can't sign it". And so, that change is sitting there in limbo, and you can use his branch if you want, but it's not going to be included into the main project. They've probably fixed that particular blocker since then, but yeah, I see your face in the pull request list as well. [Laughter]
PETE:
The CLA thing, the Contributor License Agreement thing -- that's a hassle, that one. I've gone through a similar thing with Frank, when I actually wanted to change the license of Frank to make it more palatable to companies. I used to have some GPL code in there, so the whole thing was under quite an aggressively open-source license, which may make some companies nervous. That's actually -- well, I know because I talked with the Calabash guys -- the reason Calabash was originally written: they had looked at Frank, but that license made them scared, because they wanted to build a business around this, and you want to make your product attractive to people that are going to pay you money. But in order for me to change the license, I actually have to go and find every single person that's ever contributed a line of code and get them to agree to the change in license. So, it's tricky. I agree that it's annoying to have that friction, but as a maintainer of a project, it makes things a little bit easier when you want to do stuff that's for the benefit of the whole community. So, it's pros and cons; swings and roundabouts, I guess.
BEN:
Yeah, yeah.
CHUCK:
Yup!
PETE:
I had the same reaction when I first wanted to contribute to KIF; I go "Oh! A CLA? Really?", and then I found myself needing one myself, so [laughs]... kind of went full circle on that one...
BEN:
You guys should have seen the CLA poster-child GitHub discussion -- have you seen that? On the Discourse project, somebody had a contributors text file that said "developmer", and somebody corrected it and sent them a pull request. And they're like "Can you sign the CLA for us?" [laughs]. And the guy basically said, "Nope, I'm not signing it". And so, there's kind of a joke that it would just say "developmer" forever. [Laughter]
CHUCK:
Alright well, we're pretty much out of time. I know there's a lot more to say about testing; it's kind of a broad subject. So, we'll probably revisit it in the future, probably a little bit more focused on just one area -- maybe unit testing or continuous integration or something. But I think we've had a pretty good discussion about why and how and what tools are out there, and given people a lot of things to go check out. So, I'm really excited to get this one out and have people go and start playing with this stuff and start testing their code, because I've seen testing really make my life easier, and I think there are big payoffs for a lot of folks out there who aren't doing it; just learning a little bit will get you a lot. So, let's get into the picks! Ben, do you want to kick us off with picks?
BEN:
Sure! I've got 4. One of them is "TextExpander". I need to find more ways to use this tool, but TextExpander allows you to recognize a snippet of text and expand it into something else. One of the ones I have is ffliptable (so it's just fliptable, all one word, with an extra 'f' at the beginning), and that produces the ASCII-art flip-table thing; it's kind of fun. [Laughter]
BEN:
Things like that. If I want to write out the command symbol, the Apple command logo, I can never remember what the code is to do it. And so, I have ccmd for that, and I have oopt for option, which will expand into the actual symbol. And I have it for canned emails that I have to send all the time, so I just have a quick snippet. I find areas in my life where I can make something a little bit quicker, and I try and create a TextExpander snippet for that. The next one is "Alfred". This has been my application launcher of choice for a couple of years now. I'm a Powerpack subscriber, and I just got onto v2 of Alfred. It's got a clipboard history in there; it's got an iTunes mini player so you can quickly just play any song or album. It launches applications, and it has workflows now, so you can have it search Google or give you the weather; it's really programmable. Also, "Jenkins", which we mentioned in the show. We use that heavily at my job. And lastly, I thought I would pull an Avdi Grimm and do a booze pick. So, I've been enjoying some "Oban Scotch"! [Laughs]
CHUCK:
Nice. Alright, Pete, what are your picks?
PETE:
My first pick is actually a product of Ben's: the NSScreencast episode on Kiwi, which is this testing tool we've been talking about today, and I believe that's one of the free ones.
BEN:
Yeah, I think it is.
PETE:
That's your gateway drug to both unit testing and NSScreencast, all wrapped up in one. And I actually just watched it the other day because I was, ahem, borrowing all of his material to do a talk about Kiwi.
BEN:
Oh man, that was episode number 4, wow!
PETE:
Oh, yeah?
BEN:
That was over a year old. Yeah! You need another one.
PETE:
It's a good one! It covers exactly what you need to get started, which is great. I guess I've got a random pick: "rock climbing". I used to be into rock climbing back before I had a kid, and I just recently went rock climbing for the first time in like 3 years; and I remembered how good it is to find an activity that's totally not related to software. I realized that a lot of times, doing these kinds of things actually helps you solve your software problems. So, "rock climbing" is one of my picks! And then my last pick is an alcohol pick as well. This is only for people who are listening on the West Coast, I think, since I don't think they have distribution out on the East Coast. There's a brewery here in [inaudible] called "Speakeasy", and they have this kind of new -- I'm not sure if it's seasonal or... I've not seen it before, but it's called "Scarlett Red", and it's a rye beer. It's really really good. So, Scarlett Red, a rye, from Prohibition.
CHUCK:
But you may only be able to get it in Butte Lake, right? [Laughter]
PETE:
Yeah. Right. Maybe only available on the West Coast. And it's not from Prohibition, sorry. The beer that they're most [inaudible] for is Prohibition, but Speakeasy is the brewery; Scarlett Red is the beer.
CHUCK:
Awesome. Rod, what are your picks?
ROD:
Alright, if you want to read more about testing and Kiwi and all that stuff, there's a book called "Test-Driven iOS Development" by Graham Lee. So, that's my first pick. For my second pick, as a baseball fan and a Dodger fan, I wanted to pick the movie "42", which just opened this weekend. It's a movie about Jackie Robinson, who was the first African-American to play in Major League Baseball. Those are my picks!
CHUCK:
Awesome!
PETE:
Can I have a last minute extra pick? [Ben laughs]
CHUCK:
Fine...
PETE:
I just remembered... I can't believe I'm already not remembering stuff that I've already picked. There's a new book on UI Automation from Pragmatic Programmers by the --
BEN:
Yeah! Jonathan Penn!
PETE:
Yup! By the awesome Jonathan Penn. So, if you're looking at getting into UI Automation, I think that would be a great resource. He also touches on some other things -- he references tools like Frank -- but the focus is on using Apple's UI Automation tool.
CHUCK:
Nice. Alright so, my picks. My first pick: I do a whole lot more web development than I do mobile development, so I'm going to pick "Backbone.js", which is something that's really nice for helping you organize your code. You can use it on mobile web apps just like you can on regular web apps. I'm also going to pick something as a counterpoint to Ben's pick of Alfred: I use "LaunchBar", and I've really really liked what I've gotten from it. It does all kinds of things: you can tell it to index your music, and then you can start typing the names of songs and it'll just play them in iTunes; you can start typing an app's name to launch it; you can tell it to print stuff really big on your screen; and if you just start typing numbers, it'll actually do math in there, so you can just...
BEN:
Check, check, check! [Laughter]
CHUCK:
Yeah. I'm sure Alfred does more or less the same thing and more. But anyway, it's really super nice, and so I'm going to pick that, and I'm probably just going to stop there. So next week, we're going to be talking to Josh Abernathy about iOS and Mac and the differences between them, so I'm looking forward to that. And if you've been listening to the last three shows and have been enjoying the show, we would all really appreciate it if you would go into iTunes and give us a review. We did make it into New and Noteworthy, but it would be nice to move up a little bit in New and Noteworthy. So, thank you for listening! We'll catch you all next week!