Integration Testing - The Why and How - .NET 204
We talk to Martin Costello - a .NET developer with a QA background - about integration testing. We walk through the different types of automated testing and discuss the benefits and purpose for each type.
Special Guests:
Martin Costello
Show Notes
We talk to Martin Costello - a .NET developer with a QA background - about integration testing. We walk through the different types of automated testing and discuss the benefits and purpose for each type.
Martin introduces us to useful tools he uses to write tests within the .NET ecosystem and discusses what we should and shouldn't be testing as well as the metrics that are important when evaluating how well tested your code is.
Links
- Integration Testing Techniques for ASP.NET Core
- Reliably Testing HTTP Integrations in a .NET Application 1
- Writing Logs to xunit Test Output
- Integration testing AWS Lambda C# Functions with Lambda Test Server
- Integration Testing ASP.NET Core Resources Protected with Antiforgery Using Application Parts
- GitHub - coverlet-coverage/coverlet: Cross platform code coverage for .NET
- GitHub - martincostello/sqllocaldb: SQL LocalDB Wrapper is a .NET library providing interop with the Microsoft SQL Server LocalDB Instance API
- GitHub - justeat/httpclient-interception: A .NET Standard library for intercepting server-side HTTP dependencies
- GitHub - martincostello/xunit-logging: Logging extensions for xunit
- GitHub - martincostello/lambda-test-server: A NuGet package that provides an in-memory test server for testing AWS Lambda functions
- GitHub - martincostello/dotnet-minimal-api-integration-testing: An example of integration testing ASP.NET Core 6 Minimal hosting and actions
- Twitter: Martin Costello ( @martin_costello )
Picks
Transcript
Hello, and welcome to another episode of Adventures in .NET. I'm Shawn Clabough, your host. And with me today is cohost Wai Liu. Hey, Wai. Hey, Shawn.
Hey. How you doing? I'm not doing too bad. You know, the weekend is here, so, yeah, it's always good. Hockey season has started for me, so I'm getting out and getting some exercise and playing some hockey.
So I'm enjoying it. Nice. Yeah. Yeah. Cool.
Let's bring in our guest. Today, we welcome Martin Costello. Welcome, Martin. Hi, guys. How are you doing today?
Pretty good. Pretty good. So why don't you kinda give us the introduction so we know about you. I guess kinda how you got into development, and then how you got into .NET, and then kinda what you do currently. Sure.
So, my name is Martin Costello. I'm a senior engineer at Just Eat Takeaway.com, which is like an online food delivery marketplace system. It operates in the US, Canada, Europe, Australia, and New Zealand. Like an Uber Eats competitor type thing? Kind of similar. It's not quite the same, in that we also include what we call marketplace.
So that's like your local takeaway. They can list themselves on our platform, and they might do the delivery themselves, or you could go to the restaurant and pick up the food yourself. So it's also like a convenient way for a restaurant to not have to worry about ecommerce. They can list themselves on the platform, and we'll provide them with, like, a a device they can put in the restaurant, and the orders come through to them. And then, our consumers can sort of order on the iOS app, the Android app, the website, and then we handle the payments.
We send the order through to the restaurant, and then they do what they're good at, which is cooking the food. And then in some cases, we also provide, like, couriers to take the food from the restaurant to the customer. But in other cases, they might do that themselves because they already have their own courier drivers or something like that. So yeah, I've been in the software development industry now for about 15 years, and I started fresh out of university, where I didn't do computer science or anything like that.
I did physics, astrophysics, and it was just before, like, the credit crunch happened. So it was when graduate jobs were still a thing you could easily find in the job market when you came out of university. And I just sort of ambled into a job as a software tester. And the only coding I'd done previous to that was a little bit of Visual Basic when I was at secondary school, and then I took the noughts and crosses, tic-tac-toe game I made there and ported it into C++ for a module during my degree, and it was terrible. Like, I still have the code. And occasionally, if I fancy feeling better about myself,
I just look at it and go, wow, what rubbish that was. I would never write something like that now. I do that to my code that I wrote 6 months ago. I know, but it's a really easy way to feel better if you go back 20 years. Yep. But yeah, so that was the only real coding I'd done, and I was a software tester. And I was working on some software that was to do with address validation, which is very niche, but also very complicated. And at the company I worked for at the time, there wasn't much automation.
So it was like, write some bat files and run some files through this processor and then diff the results and things like that. And the QA lead I had at the time, he'd written a tool to try and automate all of that, and it was like a Windows Forms app written in this language I'd never heard of before called C#, and it made my life a lot easier, him writing this tool for me so I could get all the menial tasks done a bit quicker. And then after my first project, I was made the lead on - I say the lead, I was the only person working on this project. It was quite a small project that was just myself working on it, but it was a totally different product.
So this automation didn't exist. So I spoke to my QA lead at the time, and I was like, could I just, like, take this code and make it work for this other product? And he was like, sure, I'm sure you can do that, but I'm busy doing this other stuff, so you need to work it out for yourself if you wanna do it.
So I started getting into C#, and I think it was .NET Framework 2.0 at the time. That's how long ago it was. And it was, like, copying the whole project and then just renaming it - I came up with the name first - and then sort of picking my way through it, trying to make it work by looking at the API manual for the different product, and then hitting, like, lots of the beginner things with C#, where you'd go, why does this say I need an instance of this?
Why can't I just call this method? And just getting this whole mess of static and instance-based code and things like that. And then over the next 9 years - well, not 9 years, it was about 6 years of being a QA at my previous job - it was just sort of learning more and more C# as I went along, writing automation to sort of make my life easier and the lives of other people in the QA team easier. And then because I'd started to get quite technical, I got put onto a team working on one of our ecommerce products where there was only budget for 1 QA rather than 2, and they needed to be quite technical so they could write integration tests to call the API and test it worked and things like that, so they put me on it.
And, like, when you're filing bugs and stuff, you've got Jira or whatever system, and it's like, you write the bug up and then the developer looks at the bug, and then the developer might fix it when it gets to the top of the prioritization and stuff like that. And when I would find really simple bugs, I'd be like, why can't I just change that myself in the source control system and fix it? Because otherwise it will never get prioritized, something like a typo or something like that. And over time through that, they were like, Martin, you're writing so much product code and fixing the bugs in it and writing the unit tests and doing all this stuff, we're gonna make you a developer now.
We're just gonna move you from QA into development. And then I did about 18 months of being a developer at that company, and then I moved to Just Eat Takeaway.com from there. Or it used to be Just Eat, but we've joined with Takeaway.com, so we're one bigger, even longer-named company now. And that was just before .NET Core was launched, and then I sort of picked it up and started looking into it when it first came out, when it was in, like, the previews.
And then sort of from there, I've become very much a .NET Core advocate, and a .NET advocate over .NET Framework. And part of the reason for that is just, with my QA background, it's just so much easier to test. In the .NET Framework world, you're like, oh, I need to somehow configure my local tests to integrate into IIS Express or into IIS and things like that, and suddenly it would be a lot harder and not as reliable to write tests against and get a good development loop going and feedback on what you were working on. So it kind of sucked me in, as it were, from it being a lot cleaner and nicer to work with over .NET Framework, and then I've just sort of absorbed and learned about all the different parts of mainly the ASP.NET Core end of the stack, rather than, say, WPF and things like that. And most of my day-to-day job is working on either APIs or ASP.NET Core websites and infrastructure to do with those things, and also Lambda functions, also written in .NET. So mostly .NET.
Sometimes I do a bit of HTML and TypeScript or JavaScript, and I really try and keep away from CSS, because I'm one of those developers that's just, why isn't this in the middle of the screen? I can just about manage bold, italics, and changing the colors, and that's about the limit of my abilities. Yeah. So I think our main topic for today was gonna be talking about testing and things like that. So, you know, there's different types of testing: there's unit tests, there's functional tests, there's integration tests, all sorts of different types of tests.
And everybody's also got a different definition of them. So can you kind of give us the high-level view of testing and what kinds of tests there are and how you define each one of those, and then we can go into that. Sure. So it's almost like testing talk bingo. It's like, mention the test pyramid.
So the main types of test I think about in my day-to-day work are: we've got unit tests, integration tests, and then end-to-end tests. That's the daily parlance we use in my team. Where a unit test is, like, we're gonna take a class, or maybe just a method if it's something super simple - you know, classic calculator territory, please add these numbers together - and that's where we're gonna use a test framework, like, say, xunit. We're gonna put some inputs in and test for some outputs on that class. Some people think of it more as a functional unit, which you can also do with xunit, because sometimes I say to people: xunit or NUnit or MSTest?
They're not unit test frameworks. They're test frameworks. They just help you write tests. It doesn't matter what style of test it is or how much of your stack or what level of the stack it's testing. It's just a tool to help you do the testing. And then the level above that is what I term integration tests - some people might call them system tests. To me, they are using the application in the way a user would use the application and, like, treating it like a black box.
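To make the unit-test end of that concrete, here's a minimal xunit sketch of the "classic calculator territory" example Martin mentions. The Calculator class is hypothetical, just for illustration:

```csharp
using Xunit;

public class Calculator
{
    public int Add(int x, int y) => x + y;
}

public class CalculatorTests
{
    [Fact]
    public void Add_Returns_The_Sum_Of_Two_Numbers()
    {
        // Arrange: the class under test
        var calculator = new Calculator();

        // Act: call the method with known inputs
        int actual = calculator.Add(2, 3);

        // Assert: check the output
        Assert.Equal(5, actual);
    }
}
```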
So if it's, like, an API, I'd spin up the API and I would talk HTTP to that API. So it's doing GETs and POSTs to it. So for example, I don't know, the classic to-do app example: I do a POST to create a to-do item, and then I would verify that worked by calling the GET endpoint, getting the to-do back, and checking that the to-do the API says it made matches what I asked it to do. And then an end-to-end test is, like, the top of the three-layer test pyramid I've sort of been articulating, which is where you deploy all the code into your product- sorry, not production.
Not your production environment. Don't do that first. Put the code into your development environment of choice, whether that's AWS, Azure, Google Cloud, or whatever, and you do the same sort of things you would be doing with your integration tests, but with the whole deployed system and any other systems it might talk to. Whereas at the integration test layer, I would almost always mock or stub out - not mock as in Moq or Rhino Mocks mocks - anything external that's off the local machine, or that would introduce test unreliability from, like, environmental differences, so that you're just dealing with what the application does, and you can fake anything that it's talking out to externally. So one way I like to think about integration tests, though you wouldn't actually do this, especially because you wouldn't be able to search Stack Overflow for any of your coding answers:
It's almost like they're the type of test where you could pull the network cable out of your dev box, and your app that talks to other services would still work, because it's not really talking to those other services. So, I don't know, if this wasn't pandemic times and we could sit on planes with our laptops and do coding, you'd still be able to do meaningful tests above the unit test layer without having network connectivity and other things to talk to. Yeah. I think that's like chaos testing or something like that, where you just go around and start pulling wires and plugs and trying to break things. You know, I think that's a thing that Netflix really started doing, you know.
Oh, yeah. The Chaos Monkey. Yeah. It's one of those things where I've never quite reached the point where I'm actually brave enough to let that sort of thing run amok in an AWS account or in a production environment. So how do you get started with integration tests?
You know, if that's we're gonna we're gonna focus mainly on that today. So what's the main thing to know in how to get started? So the main thing that I sort of because, like, you have classical test driven development where it's like, let's write all the tests and all everything will break, and then we'll write code until the test pass. Personally, I can't quite do that because, like, if I haven't written any code yet, it's not even gonna compile, let alone the test fail. And I just it's just too it's just too far in that direction for me.
So what I usually do is, I'll have written, I don't know, finger in the air, say 80% of the logic. Like, I maybe wouldn't have nailed down the edge cases. But if I was writing, like, a CRUD app, I'd have done, like, the POST and the GET, and stood it up and done a bit of F5 debugging if you're using Visual Studio, just to see, I think I've done the minimum bit of dev here. And then at that point, I'd switch to my test project and then start looking at writing my tests. Personally, I like sort of using HttpClient and things like that to talk to it.
But that's where, well, it's tricky to just keep firing up a real server all the time as part of the test. So I think a really good thing you can use if you're doing ASP.NET Core is a NuGet package they provide, which is Microsoft.AspNetCore.Mvc.Testing, and it contains a component called WebApplicationFactory. And what that does is it sort of knows how to set up, like, the folder where all your view files are if you're doing Razor, and the dependencies and things like that, and it hooks into another component related to the Kestrel server, which is called TestServer. And what that does is it sort of runs your application in memory. So instead of it actually listening on an HTTP port, it's all hosted in memory.
And when you do networking calls, they don't really go over the network, because it uses a special kind of HttpClient that sort of plumbs the client and the server directly together in memory. And the first thing is to get that sort of setup done so that you get faster feedback and you can easily start up the application that you're trying to test. And the other benefit that WebApplicationFactory gives you is it can also give you access to your dependency injection container. So if you've got things like an Entity Framework database plugged in, or you've got configuration doing HttpClient calls off to something, where you need to, you can tweak the application's configuration and the dependency injection setup so that you can swap out dependencies.
And then that's the bit that gives you the power, so that you can go, I know you really want to talk to the SQL Server that's on this other server over here in the cloud, but instead, can you use this connection string? And then you can potentially give it, like, the connection string for a SQLite database or SQL Server LocalDB or something like that. And then suddenly you don't have to worry about, oh, I ran this test, and then the second time I ran it, my test failed because there was already some data in the database and stuff like that.
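A minimal sketch of the setup Martin describes, assuming a hypothetical to-do app whose Program class is visible to the test project and which registers an EF Core TodoContext; the endpoint paths, types, and SQLite swap are illustrative, not the exact code from the episode:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Xunit;

// Hosts the app in memory and swaps the real database registration for a
// throwaway SQLite file, so each test run starts from a known state.
public class TodoAppFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =>
        {
            // Remove the app's real DbContext registration...
            services.RemoveAll<DbContextOptions<TodoContext>>();

            // ...and point it at a SQLite database in the temp directory instead.
            string databaseFile = Path.Combine(Path.GetTempPath(), $"todos-{Guid.NewGuid()}.db");
            services.AddDbContext<TodoContext>(
                options => options.UseSqlite($"Data Source={databaseFile}"));
        });
    }
}

public class TodoApiTests : IClassFixture<TodoAppFactory>
{
    private readonly TodoAppFactory _factory;

    public TodoApiTests(TodoAppFactory factory) => _factory = factory;

    [Fact]
    public async Task Can_Create_Then_Get_A_Todo_Item()
    {
        // This client talks to the app in memory; no real network port is used.
        using HttpClient client = _factory.CreateClient();

        // Create an item, then read it back through the public API surface.
        using var created = await client.PostAsJsonAsync("/api/todos", new { text = "Buy milk" });
        created.EnsureSuccessStatusCode();

        var items = await client.GetFromJsonAsync<List<TodoItem>>("/api/todos");

        // The API should report the item we asked it to make.
        Assert.Contains(items!, item => item.Text == "Buy milk");
    }

    private sealed record TodoItem(string Id, string Text);
}
```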
It means you can relatively easily sort of clean up, set up, and tear down in between your tests. And then that gives you more reliability and repeatability in sort of validating that your code works, and it helps you get away from the old flaky tests, where you push up your change into your CI system and then it fails, and then you're like, why did that fail? I didn't even touch that.
And then you just press the rerun button until it goes green, and then you're like, yep, it's all sorted now. Do you have any advice, actually, about how long a test should run for before it becomes, like, just taking too long? Because, you know, the cleanup, that stuff does take a long time sometimes, you know.
So I genuinely think, like, the true answer would depend on your use cases and what your code really does. Because if the underlying thing you're testing takes 30 seconds, then maybe you're not gonna notice. Okay, sure, if you've got hundreds of tests, it's all gonna add up, but you're maybe not gonna notice a half-second setup and teardown on the end. Whereas if what you're testing takes a couple of milliseconds, then, yeah, you probably don't want one-second, two-second setup and teardown times around it.
I think we had this problem with some test suites at my old job, actually. Like, we set up a load of tests where we'd stubbed out the database, and then we had, like, a handful of tests, and they were really useful. And then the test suite got bigger over time. And then it was like, why are these tests so slow? Why do they take several minutes to run?
And it was because it was spending 2 seconds per test creating a database and deleting it again. So with that, we sort of changed things. Where the test wasn't sensitive to the presence of other data, we had a shared fixture that the tests would reuse, so we just paid the setup cost once.
But then if we had tests where it was important that the database was empty at the start, or had specific data in it, they'd have custom fixtures. And with our test run, we managed to shave, like, 5 minutes off it just by refactoring it a little bit. But, yeah, I think the sweet spot for me with a set of tests, whatever level of the test pyramid it's at, I'd say about a minute is my upper limit. Because once you've got past a minute, then if you wanna rerun all the tests to be confident in whatever it is you've just done locally, you start getting to the point where your mind starts to wander.
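For the shared-fixture idea Martin describes, xunit's class fixtures are the usual mechanism. A rough sketch, with the database setup and teardown left hypothetical:

```csharp
using System.Threading.Tasks;
using Xunit;

// Created once and shared by every test in the class, so the expensive
// database setup cost is only paid one time.
public sealed class SharedDatabaseFixture : IAsyncLifetime
{
    public string ConnectionString { get; private set; } = string.Empty;

    public async Task InitializeAsync()
    {
        // Hypothetical: create and seed the shared test database here.
        ConnectionString = "Data Source=shared-tests.db";
        await Task.CompletedTask;
    }

    public async Task DisposeAsync()
    {
        // Hypothetical: drop the shared test database here.
        await Task.CompletedTask;
    }
}

// Tests that tolerate data left behind by other tests can share the fixture;
// tests that need an empty or specially seeded database get their own fixture type.
public class OrderQueryTests : IClassFixture<SharedDatabaseFixture>
{
    private readonly SharedDatabaseFixture _fixture;

    public OrderQueryTests(SharedDatabaseFixture fixture) => _fixture = fixture;

    [Fact]
    public void Fixture_Is_Available_To_Every_Test_In_The_Class()
    {
        Assert.NotEmpty(_fixture.ConnectionString);
    }
}
```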
And then you're like, oh, I'll just check that email. I'll just look at this thing on Stack Overflow that I wanted to look up. I'll just look at Twitter on my phone while this runs. And then you start not being as productive anymore. So, yeah, there's this sort of sweet spot of how long is too long, but I think a factor of that is always gonna be how much feedback you get from the tests, how many tests you're running, and what your product does.
It depends on when you run the tests. Some tests, I'm guessing you run on every compile or every time you commit or whatever. Some tests you might run every time you have a pull request, and then some tests you might just run nightly, kind of thing. And the nightly ones, they can run - Oh, yeah - take an hour if they want to, you know. Yeah.
Because I think I've noticed that from doing contributions to some of the .NET repos. Like, if you do a pull request to ASP.NET Core, it takes about an hour for all the GitHub statuses to go green, because it's building it on, like, 4 OSes, and then it's testing it on 4 OSes, and then it does some other stuff, and then it all aggregates together. But then they also have a nightly run that tests it on even more operating systems. And it's just like, wow, that must just take a very long time to get feedback.
If you're a developer on the project full time and you've made quite a sweeping change, like, you're talking about over a day to know that you haven't broken something. Yep. Oh, and you've missed a silly comment or something. Yeah. Yeah. Because, like, I guess I mostly use Visual Studio over Visual Studio Code or Rider or something like that, and it has a feature now to automatically run your tests when you've compiled.
But most of the projects I work on, the suites take 1 or 2 minutes to run, and you're constantly interrupting it. I'm almost like one of those people who was, I don't know, doing their homework in Microsoft Word in, like, the late nineties, just constantly Ctrl+S, Ctrl+S, Ctrl+S, not trusting it not to lose your work, to the point that none of my keyboards have an S key - well, you can't see the S etching after a period of time. But I was just constantly interrupting the test run because I'm just typing a little bit more, so I ended up having to turn that off.
So yeah. So I sort of work up: if I'm changing a class, I'll run the unit tests related to that. And then when I think I'm done, all of the repos my team work on, we just have, like, a build script in the root of the Git repo, and I just open up a terminal and go build.ps1, enter, and then it compiles it and runs it all.
And then if that doesn't crap out, or the test coverage is rubbish or whatever, then that's when I push it up and maybe do a PR, rather than trying to get into the whole CI feedback loop of, oh, just throw it up to CI and see if it fails, and then try and work out why it failed if it did. Because it's not the most productive to try and rely on the CI system to do it, because those tests that don't work in CI but work locally are the stuff of nightmares. So you talked about integration tests with APIs. Is that the only thing you can use integration tests for, APIs? Can you do a web application or a desktop application or anything like that?
So yeah, I also use this approach for web applications. So with, let's say, a Razor Pages or MVC app, and, like, even if it's got rich JavaScript, with ASP.NET Core the trick to that is, if you use the vanilla TestServer, there's no real HTTP port there. So you could write a test using HttpClient and you could get the HTML of the page back, and then you'd be getting into gnarly HTML parsing. But you can't use a tool like Selenium with it, because there's nowhere for the browser to talk to to get the pages.
So an approach I've used with that is to sort of wrap WebApplicationFactory, so you get, like, the niceties of the DI system and finding your Razor files and things like that, and also your static content for your CSS and JavaScript, but then exposing that on a real HTTP port, because then you've got something that a browser can talk to. And now you can integration test your rich web application with, like, you need to test that if I click this button, this happens, or if I log in, this happens.
And extending it slightly to use a real HTTP server then means that you can also use it for your UI and web applications as well. And we use that extensively in my team to test the web stack part of the vertical slice of the platform that my team works on. But ultimately, when you get into web applications, you're still starting to get into the whole thing of UI testing being slower and more brittle than the lower levels of testing. So we try and focus the tests more at the headless, lower level - and headless is maybe not the right word, but not having the browser in play - because you end up having to do wait until the button is enabled, wait until this is on the screen, wait until that, and that can sort of slow your flow down.
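Martin's actual approach wraps WebApplicationFactory (see the dotnet-minimal-api-integration-testing link in the show notes), but the core idea of putting the app on a real loopback port that a browser or Selenium can reach looks roughly like this sketch; the endpoint is made up and the wiring is simplified:

```csharp
using System;
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Hosting.Server;
using Microsoft.AspNetCore.Hosting.Server.Features;
using Microsoft.Extensions.DependencyInjection;

// Bind Kestrel to port 0 so the operating system picks a free port.
var builder = WebApplication.CreateBuilder();
builder.WebHost.UseUrls("http://127.0.0.1:0");

var app = builder.Build();
app.MapGet("/", () => "Hello, browser tests!");
await app.StartAsync();

// Discover the real address the server is listening on and point a browser
// (for example, a Selenium WebDriver) at it.
string serverAddress = app.Services.GetRequiredService<IServer>()
    .Features.Get<IServerAddressesFeature>()!
    .Addresses
    .First();

Console.WriteLine($"Browse to {serverAddress}");

await app.StopAsync();
```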
But you can definitely use this sort of approach for a web application, not just, like, an API that's doing JSON or XML or whatever other flavor of markup you wanna play with this week. And I would imagine you probably can do it with a desktop application, but that's not really something I work on, so it's not something I've tried to do in anger. But I'm sure there are smart people out there who've cracked that particular nut. So what are some things you shouldn't try to do with integration tests?
You know, some gotchas, things to watch out for. So one thing that I already brought up was the whole having too much time spent in your tests' setup and teardown, because you're gonna get to the point where, if your test suites take too long, people might stop running them. And if you're not running them, they're not really giving you any value. The other trap that you can sometimes fall into, that I've definitely fallen into from time to time as well, is asserting on too much. So if you're writing a test, oh, get resource foo from the API.
And then if there's lots of edge cases in your logic, you're like, oh, and test it does that, and test it does that, and does it do this. And then suddenly you've got a test that does one GET and has, like, a hundred asserts. Isn't there, like, a purist view that you should only have one assert per test? Do you follow that belief, or - It's not a hundred, no. I can see the value in it, but I think just the one is a bit too extreme, because in my experience it leads to test suites where, like, all of the tests get shoved into base class structures and things like that.
So that when the tests run, they're actually just asserting on a property that a magic constructor somewhere set. And then if you just wanna tweak the test behavior slightly, then you need to make another class that derives from that class and tweak something, and then, oh, but I can't change that bit. And then people end up writing virtual methods to tweak the behavior, and then it just turns into this horrible hierarchy of nested classes. So I think if there were better patterns in .NET to tackle that, then I think it would work nicely.
But I think, at least in my experience, the way I've seen people try to implement that - sure, if you break one thing and one thing only in, like, a thousand tests, one test will fail and you'll just know that that's the one thing you've broken. But then you get the maintenance nightmare of actually trying to keep that test suite in shape while being easy enough to change for when your business requirements change and you need to tweak something. And it's like, oh, now this common facet of these 300 tests, all related to this base class, doesn't apply to everything anymore. And now all my tests are broken, but they're not broken broken.
It's just that I've changed the requirements, and now I've got to refactor half my test suite to account for the change in the business requirements. So I think definitely try and keep the asserts small, but at the same time, I dislike when you have, like, say, 5 test methods and they're all the same and they're just doing one slightly different assertion each. But then if you break something really fundamental, then all the tests break, but they're all broken because of something that's not really specific to that test. It's just, like, you've broken the endpoint, so all the tests break. So having a hundred tests fail doesn't add much value over having the one test failure in that scenario.
Oh, yeah. You just completely broke that endpoint. I think I see it. I've always thought that, like, unit testing and automated testing, you know, that stuff, it's probably something that can be learned.
You know? And it's not that hard. But what's really hard is actually figuring out what to test, and making sure your tests aren't so brittle that they just break for any reason and you're spending all this time maintaining them, but also making sure that your tests are actually meaningful and actually testing the right things.
And those things are very circumstantial, I guess. And it's really dependent on the project and the experience of the person writing the code. Yeah. I've seen a lot of projects that I've worked on where there are lots of tests, but none of them are really testing, like, anything.
You know? Like, a lot of them are even just there to have increased that code coverage number. You know? So yeah, it's something I've seen in the past.
It's like, oh, we've got a 100% code coverage. It's amazing. And it's like Yeah. No. No.
You've got 100% code coverage of the code you think you have to write. Yeah. But, like, if you totally missed a requirement or an edge case, it's not gonna work if you ask it to do that. Mhmm. Because the code to do it isn't there.
So it makes no sense to have any code coverage of it, because it's not there. So, yeah, code coverage is good for knowing where your blind spots are, but it's not a magic number where 100% means everything's bulletproof, everything's great, it's all the sunlit uplands, we've done our job perfectly correctly.
Yeah. And also, you mentioned tests need to be, like, easy to maintain, but it also needs to be easy to add new tests. Because if you've got, like, a more junior engineer who, say, started on your team and hasn't got as much experience, it should be relatively easy for them to look at the tests you already have and add some more for whatever changes they're making. Because if it's hard to write new tests - developers are lazy - they won't add new ones.
And then the sort of rot starts to set in over time. And then suddenly - to bring code coverage back just to make a point - it's like, oh, yeah, we had 90% code coverage 6 months ago with that test suite, and it was all great.
And now 6 months later, and now we're at 40% coverage. What happened? And then it's like, oh, yeah. We made the test suite so complicated. No one wrote new tests for the last 6 months of work, and the coverage dipped.
And now we need to, like, make a technical debt item to get back to where we were before. So yes. Do you recommend a minimum percentage of code coverage? So my anecdotal number, which I've seen bandied around a lot and which I think is not far off a good sweet spot, is something like in the region of 70%. Because depending on what project you're working on, you'll probably find that if you dug through a code coverage report, a lot of the uncovered code blocks
are, like, your error handling and your edge cases, or, like you were saying, your logging configuration and things like that. So 70, 80 - you know, the four-out-of-five-lines-of-code sort of area - you're probably, on average, hitting the major use cases of your app. And what's left is edge cases and those infrastructure bits that you can't really hit when you're doing a unit test, because, I don't know, like, Program.Main. You're probably not calling Program.Main as part of a test, because it's just gonna hang - the web server is waiting for requests and you can't stop it.
You just exclude those from code coverage, and then you can get back up to 100%. And everything you haven't written a test for, just exclude from code coverage. 100%! You got it. That's definitely one way around it.
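For the genuinely untestable infrastructure bits like Program.Main, the standard attribute is ExcludeFromCodeCoverage; a minimal sketch, with the hosting code itself elided:

```csharp
using System.Diagnostics.CodeAnalysis;

// Applied to code that only runs as real application infrastructure,
// so the coverage tool ignores it when computing the percentage.
[ExcludeFromCodeCoverage]
public static class Program
{
    public static void Main(string[] args)
    {
        // Hypothetical: build and run the web host here. A test never calls
        // this because it would just block waiting for HTTP requests.
    }
}
```

Whether to exclude anything else is a judgment call, as the joke above suggests.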
Yeah. But, yeah, I definitely think once you get to around the 70, 80 point, you're starting to get into diminishing returns, and it becomes more of an exercise in chasing a metric rather than asking, what other spots in my app am I completely missing test coverage in that should be there? But I think the tricky bit of that is sometimes you have to have the discipline to occasionally go back and actually look at a visualized report, rather than just seeing a number on a screen or a number in a report, and going, so where is it that we don't have the coverage? And actually, like, periodically going through the reports and going, ah, that whole area of the app.
Actually, we're not covering that. We're not testing that it works. So what are you using to generate these reports? So, the tool that I use regularly and we use in my team is - well, actually, we use two tools. We use coverlet to actually generate the coverage metrics in the first place.
That's an open source library, and it plugs in really easily to the dotnet test SDK, and we use the MSBuild integration it has. So you just sort of add a NuGet package, and you can configure a few MSBuild properties to say, like, what namespaces or classes or attributes to exclude from code coverage. And you can also give it a minimum code coverage, and it will fail your build if the coverage doesn't meet that level, which is handy to stop you, or let you know, if you've started going down, like, a drift of - Mhmm - you were at 80, and you're being a bit sloppy, and you're trending down.
It will hit a level and then you'll just fail the build. Or, like, occasionally it leads to, well, I'm only adding this tiny thing and it doesn't make any sense to write any tests, so I'm just gonna change the number. But yeah, so we use that to actually generate the metrics, and then there's another open source library called ReportGenerator, and that can generate HTML coverage reports for you from a variety of code coverage output formats - TRX files, I think it's TRX files. And you can then get yourself an HTML report, and that is of your code files.
So it can show you your code file, however your code's structured. And then if you've covered a line, it's in green, and if it's not covered, it's in red. And if you've got branches and you've hit one branch and not the other branch of the code, like an if-else statement or a ternary, then it will shade it in yellow. So then you can go through those reports and you see the code as you would see it in an IDE, but with all the color highlighting on it, and then it's really easy to spot if you've got, like, a massive hole in your test coverage. And also, you can sort of browse it hierarchically, so it shows you the metrics.
You can see the metrics at an assembly level, or it can go down to namespace, class, method. So you can look at it, say, at the namespace level and go, oh, why is this namespace 20% when all the others are 80, say, and then drill into it, and then drill into the classes, and then go, oh, actually, it's this one class. And, oh yeah, actually, we've got this huge class that's got hundreds and hundreds of lines of code in it and we're not testing it, and it's dragging the number down. And then you can sort of, within your team, make a decision: is it okay that this class is missing all of this test coverage, or have we got a hole in our testing here that we need to address?
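Roughly what that tooling setup looks like on the command line, assuming the coverlet.msbuild package has already been added to the test project; the threshold and paths here are illustrative:

```
# Collect coverage with coverlet's MSBuild integration and fail the build
# if line coverage drops below 70%.
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:Threshold=70

# Turn the coverage file into a browsable HTML report with ReportGenerator.
dotnet tool install --global dotnet-reportgenerator-globaltool
reportgenerator -reports:"**/coverage.cobertura.xml" -targetdir:"coverage-report"
```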
Can you go a little bit more into the setup and teardown process? You know, what do you have to do there to make sure everything's set up right for your tests? And then when your test is done, what do you have to do at the end? So in a typical application, say an API that's doing some CRUD with a database, then as part of the setup process, typically what you would do is change any configuration settings that might need to be changed for your test. So for example, if you were calling an external API, you might wanna change the URL so that if your stubbing of your HTTP calls hasn't worked for some reason, it's gonna talk to something that's not real.
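For that HTTP stubbing, the JustEat httpclient-interception library linked in the show notes is one option. A rough sketch, where the GitHub endpoint and canned response are made up for illustration:

```csharp
using System.Threading.Tasks;
using JustEat.HttpClientInterception;

public static class HttpStubExample
{
    public static async Task<string> GetStubbedOrgAsync()
    {
        // Any outbound call that has not been stubbed fails loudly instead of
        // silently hitting the real network.
        var options = new HttpClientInterceptorOptions()
            .ThrowsOnMissingRegistration();

        // Register a canned response for one specific outbound request.
        new HttpRequestInterceptionBuilder()
            .Requests()
            .ForHttps()
            .ForHost("api.github.com")
            .ForPath("orgs/justeat")
            .Responds()
            .WithJsonContent(new { id = 1516790, login = "justeat" })
            .RegisterWith(options);

        // In a real test you'd plug the options into the app's HttpClient
        // pipeline; here we just create a client directly from them.
        using var client = options.CreateHttpClient();
        return await client.GetStringAsync("https://api.github.com/orgs/justeat");
    }
}
```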
So, like, for example, if, I don't know, we were integrating with the GitHub API, I would change the test app's configuration for the URL to be, like, github.local, not github.com. And then I'd do a similar thing with email addresses, so they're not gonna actually email real people from tests and things like that. But you can change any appropriate configuration that you have in your test. Wait, on the email.
When you do your integration testing with your email server, would you literally hook it up to a real email server, or do you kinda just, like, mock it? So I don't actually work on anything in our system that does send emails. There was something at my old job where we did that, and we just did mocks. But I just remember there was a time where someone didn't set up some mocking properly, and he, like, emailed himself because he'd put his own email address in, and he was getting emails from his tests. I was just like, just make it something that's not real, so if it goes wrong, it will go into a black hole.
Yeah. Yeah. Yeah. Another thing in the setup that I'll often do is I'll reroute the logging. So, like, if you're running a test and it fails, it's useful to look at the application's logs, because they might give you hints as to why it might have failed, like, I don't know, a null reference exception somewhere in your application.
Your integration test probably isn't gonna tell you that. It's probably just gonna tell you the server returned a 500 error, an HTTP 500 error. That's not useful for debugging in and of itself. It just tells you something's failed. But if all your logs are going into a file somewhere on your local developer machine, then you're like, oh, which file did the error log go into so I can go and find it?
So what I typically do is hook into the bootstrapping of the test server and configure a logger that routes to, in the case of xunit, the xunit output. So then, if your test fails, you can just look at the xunit output, and it has all the application logs that were written during the lifetime of that test associated with the test. And then you can just open the test output and go, oh, there was a null reference exception on line 52 of the Foo class; I'll just go and dig around in there and see what it is I broke, or just stick a breakpoint at the top of the area of the app where that test fails and then run the test again and see why it fails. And in the case of doing stuff with databases, then typically it's, like, creating a new database.
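A rough sketch of that log rerouting, using Martin's xunit-logging package from the show notes together with WebApplicationFactory; the factory name and the app's Program class are assumptions for illustration:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.Logging;
using Xunit.Abstractions;

// Routes the application's ILogger output to the current test's output, so a
// failing test carries the app's logs alongside the assertion failure.
public class LoggingAppFactory : WebApplicationFactory<Program>
{
    private readonly ITestOutputHelper _outputHelper;

    public LoggingAppFactory(ITestOutputHelper outputHelper)
        => _outputHelper = outputHelper;

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // AddXUnit comes from the MartinCostello.Logging.XUnit package.
        builder.ConfigureLogging(logging => logging.AddXUnit(_outputHelper));
    }
}
```

A test class that takes ITestOutputHelper in its constructor can then pass it into the factory when creating it.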
So maybe depending on what your data access code does, on whether you're just using an ORM or whether you're doing, like, actual low-level SQL commands, you might be doing commands that maybe aren't as portable between, say, SQLite and SQL Server, so then it gets a bit trickier to do, like, redirecting the database. But for, say, simple SQL stuff, I'd set up, like, a SQLite database in, like, the temp directory on the local computer, and then run whatever commands we need to create the tables and seed any data. Isn't there, like, an in-memory database that comes with .NET Core? I'm not sure if you've used that one before.
Yes. So EF Core, they have an in-memory database provider you can use, but I treat it with caution, because I've been burned in the past where I've written a query that works in LINQ to Objects but doesn't work in LINQ to SQL - and this was pre-.NET Core - but, like, you'd have a unit test that would work perfectly well, and then when you pointed it at a real database, it would go, I don't know how to turn that into SQL, and it would just throw an exception, or it would behave differently because of in-memory projections versus SQL projections, and things would slip through the net. I think it's not completely the same as the real database. I think it's got things like, it doesn't have foreign keys or something like that, or it doesn't respect foreign key constraints and things like that. I think the whole thing is it's designed to run fast, because you want your tests to run fast.
But, yeah, it doesn't completely map to, like, a real SQL Server database. Yeah. That's where I've found the shortfalls are. I think it's, like, foreign key constraints, because - Mhmm - I think we've probably all done that bug that one time where you forget to configure unique constraints.
You try to put another ID 1 in, and with, like, the in-memory databases, because they don't check for those things, you have all these tests and they're all passing perfectly fine. You're like, yep. Let's ship it.
And then you ship it, and it immediately breaks because you've violated a foreign key constraint. So that's - Well, I'm guessing, like, you have your automated tests, but you do have real testers as well, right? Yeah. Yeah. And, like, going back to the test pyramid, with the end-to-end tests, as part of our deployment pipelines we go through, like, a QA environment and a staging environment before production.
So we try and at least reuse what the test is doing, even if it's not the same code. And we'll have, like, our integration tests self-host the app and talk to it directly on your local machine, but then we'll have our end-to-end tests that point at the deployed code in AWS, and then they'll find that sort of thing. So you've still got a safety net for that sort of oops moment, but from the whole sort of shift-left-with-testing approach, finding your bugs then is a lot more expensive than finding them on your laptop before you've opened a pull request and asked your team to review it. That's a great point, actually. I like that.
Yeah. It's a good way to frame it, to talk about the cost. Yeah. Because, like, it's not even, like, monetary cost.
It's just, like, your own productivity. It's like if you run all the tests and it's all fine, you do the PR and your teammates look at it and go, yep. This looks all fine. And then I don't know. It takes 10 minutes to build in CI, and then it takes 10 minutes to go through deployment pipeline.
And then the tests run, and it fails and it's broken. It's like, oh, great. So as well as having to work out why that's broken and then fixing it, once I've identified the fix and coded it, that's another 30, 40 minutes before I'm back at that point in my deployment process again. Okay, we've just got a few minutes left.
Are there any last-minute tips and tricks that you wanna give people on integration tests? So something I've used in the past - well, actually, not in the past, I use it now - is, a lot of people might work on something that has, like, social login, and it's like, oh, I need to log in to my app before I can actually use the app to test the business functionality that I actually wanna test. But I don't wanna have to automate logging in to GitHub or Twitter or Facebook or whatever, especially as a lot of those accounts really don't like bots, or want you to use 2FA. It's like, how do you automate 2FA? And then you get into this whole mess of things.
So something that we've looked into doing with our tests that use social login is, there are a few extensibility APIs in ASP.NET Core identity where you can hook into the point where the app redirects off to the third-party provider, and then you can send it back to itself. So instead of logging in to, say, Twitter - log in with Twitter - it sends the login request back to itself, and then you can hook into that in your test suite, and then you can just go, yep, you're logged in. And then you can really speed up your UI tests, because you don't need this step where you're trying to either go through the real login pages of an external third party, or trying to mint authentication cookies in code to just dump them in the browser to skip the process. So it's a really handy little hidden-away hook to sort of cheat the login process with tests.
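This is not necessarily Martin's exact implementation (his dotnet-minimal-api-integration-testing sample in the show notes shows his real approach), but one hedged sketch of the idea, assuming the app registered the provider with AddOAuth("GitHub", ...); the loopback URLs and stub handler are hypothetical:

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication.OAuth;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;

// Points the external login flow back at stub endpoints so UI tests never
// have to drive a real GitHub/Twitter/Facebook login page (or 2FA).
public class SelfContainedLoginFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =>
        {
            services.PostConfigure<OAuthOptions>("GitHub", options =>
            {
                // Redirect the browser-facing and back-channel endpoints to
                // routes served locally instead of the real provider.
                options.AuthorizationEndpoint = "https://localhost/test-oauth/authorize";
                options.TokenEndpoint = "https://localhost/test-oauth/token";
                options.UserInformationEndpoint = "https://localhost/test-oauth/user";

                // Swap the back channel for a stub that returns a canned token.
                options.BackchannelHttpHandler = new StubOAuthBackchannelHandler();
            });
        });
    }
}

// Minimal stand-in back channel: always returns a canned JSON access token.
public sealed class StubOAuthBackchannelHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(
                "{\"access_token\":\"test-token\",\"token_type\":\"Bearer\"}",
                Encoding.UTF8,
                "application/json"),
        };

        return Task.FromResult(response);
    }
}
```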
Good tip. Alright. Great. Great. So let's, move on to picks.
Wai, what's your pick this week? Okay. Yeah. So this week, my pick is gonna be a new TV show I've started watching. It's called Young Sheldon.
So basically, I'm not sure if you guys have seen the TV show The Big Bang Theory. It's kinda like a prequel to that. It's about Sheldon, the main character on Big Bang Theory, this super smart guy kind of thing. It's about him growing up.
So, yes, it's kind of a nice sitcom that I've been watching recently. So it's good. Great. Alright. Martin, do you have a pick for us?
So this week in the UK, at least, we got the latest series of What We Do in the Shadows, which I don't know if you guys have seen before. But it's a sitcom about vampires, and it's a spin-off from a film that Jemaine Clement and Taika Waititi made, I think, about 10 years ago. And it's about, like, what vampires really do. Like, they're not all living in Transylvania in a castle. They just live in the suburbs of Staten Island, and it's about all the things they get up to.
And we just got the third series of that in the UK, and we binge-watched it over the last 2 days, like, all 10 episodes. Just the whole lot's gone. It's just so funny. I would really recommend it. And you don't have to be, like, a horror fan to like it.
It just plays mostly off all the pop culture tropes of vampires, but in a bit of a weird and wacky way. Is it a UK show or a US show? So it's a US show. I think it's on FX in the States, but we only just got it this week on BBC iPlayer. So we got the whole series in one go, because I think it's already been on in America.
Mhmm. No, because I love UK shows, you know? UK, like, The Inbetweeners and all that stuff.
So yeah. Like, okay. I can binge I can probably watch those over and over again, like, 2 or 3 times now. So yeah. UK humor is just kind of funny.
It's awkward. So one of the characters in What We Do in the Shadows is, like, very English. So, like, there's an element of that within the show even though it's an American TV show. Alright. So my pick this week is, since hockey season has started up again and I'm playing: Seattle has a new hockey team this year.
It's the first season that they're having a hockey team, and it's called the Kraken. So my wife bought me a bunch of shirts and shorts and things like that for birthdays and Christmases and things like that. So, if you're interested in hockey, check out the Seattle Kraken. So Seattle has never had a hockey team. Well, they did a long time ago, but, yeah.
They've been without one for a long time. And then a few years ago, they got approved for one, and they finally started playing this season. So if I get a chance, you know, maybe if COVID gets better, I'll be able to go over and check out a game. That's something I've always wanted to do.
I don't know why I didn't do it the last time I went to America, but I'd like to actually see an NHL game. So maybe one day. No hockey in Australia? Oh, we've got, like, one ice rink in my city. That's about it.
So no, not cold enough. Yeah. We don't have ice hockey either. Here, hockey means a totally different game, with wooden sticks and grass. Yeah.
Alright. Cool. Alright, Martin. If our listeners have questions and they wanna get in touch with you, what's the best way to reach out? So if anyone's got any questions they wanna send my way, the best place would be to at me on Twitter.
I can be found at @martin_costello. Awesome. Thanks for coming on the show today, Martin. It was great to have you. No problem.
It's nice to speak to you. Something different to do while being stuck inside. I agree. I agree. Mhmm.
Great. Thanks, everybody, and we'll catch everybody else on the next episode of Adventures in .NET.