Real-World Testing: Insights from Rainforest QA Expert AJ Funk - DevOps 231
Welcome to another exciting episode of Top End Devs! In today's show, we dive into the intricate world of quality assurance and testing strategies with our special guest, AJ Funk. AJ, a seasoned software engineer at Rainforest QA, shares his unique journey from playing professional baseball to developing cutting-edge QA solutions. Joining us are co-hosts Will Button and Jillian, along with fellow guest Matteo Collina.
Special Guests:
AJ Funk
Show Notes
AJ walks us through the evolution of quality assurance at Rainforest QA, emphasizing the importance of balancing confidence and velocity in testing. He highlights innovative approaches like using visual layers for testing, eliminating the need for extensive code-based tests, and explains how their no-code solutions empower teams to maintain high-quality standards efficiently.
From discussing the myth of 100% test coverage to the role of AI in QA, AJ and our hosts explore practical strategies for developers. We also touch on the importance of real-world testing environments, handling microservices, and tips for leveraging Rainforest QA's robust tools effectively.
Join us for a thought-provoking conversation that covers everything from the basics of end-to-end testing to advanced QA practices, and even takes some entertaining detours into the personal lives of our speakers. Whether you're a seasoned developer or just starting out in the tech industry, this episode is packed with insights you won't want to miss. Tune in for some expert advice, a few laughs, and a whole lot of valuable information!
Transcript
Will Button [00:00:01]:
So welcome everyone to another episode of Adventures in DevOps. Happy New Year. Warren, happy New Year. Thanks for, joining me today.
Matteo Collina [00:00:10]:
Oh, thank you for having me back.
Will Button [00:00:12]:
Always a pleasure. Jillian, welcome back. How are you?
Jillian [00:00:16]:
Good. Good. How are you? And happy New Year.
Will Button [00:00:18]:
I'm pretty excited about today's episode because we have your background, AJ. So our guest today is AJ Funk, and his background is just so cool. So you're a software engineer for Rainforest. You live in California. You're a snowboarder, and you are a previous D1 and professional baseball player. Is that all correct?
AJ Funk [00:00:50]:
That is correct. I've taken an interesting career path. So, yeah, definitely, lots of stuff to do here in I'm in the Lake Tahoe area, in California, around the border of California and Nevada. So, yeah,
Matteo Collina [00:01:04]:
it's all true.
Will Button [00:01:06]:
What position did you play in baseball?
AJ Funk [00:01:08]:
I was a pitcher.
Will Button [00:01:09]:
A pitcher. Okay. Cool.
Jillian [00:01:11]:
Oh, that's one. I know what it is. That's great. I was about to be like, I have no idea what we're talking about here, but I know what the pitcher is.
Will Button [00:01:20]:
It it's all about the little connections, Jillian.
Jillian [00:01:24]:
It really is.
Will Button [00:01:26]:
So cool. AJ, tell us a little bit about your role at Rainforest and what Rainforest does. And I I think I'm interested to hear what led you to go from pitching Major League Baseball to pitching code.
AJ Funk [00:01:45]:
Yeah. For sure. Yeah. So I started writing code when I was a kid, really, always as a hobby. My plan in life was always to play baseball. You know, once you become an adult and you realize that it's not always such a viable career path, I started realizing that this thing I do as a hobby, I could do as a career. So I just started kinda dabbling and realizing that I actually really enjoyed doing this, and it was a big pivot from, you know, just athletics all the time. So, yeah, now I ended up at Rainforest QA.
AJ Funk [00:02:21]:
I've been here for, seven and a half years now. Oh, wow. Yeah. It's been a long time, and it's, it's been really fun. I work with really, really great people, really intelligent, experienced engineers, really great culture. We're distributed all over the world, so it's fun. We get to go, meet up with each other every once in a while in some random place in the world. And so it's it's a really enjoyable place to work.
AJ Funk [00:02:47]:
I specialize in front end development. So I spend a lot of time with the product and design team shaping how our product works. And, you know, as a quality assurance company, it's really important for us to have a really high bar of quality and reliability for our product. Obviously, if our app breaks, why are you gonna trust us to make sure that your app doesn't break? Seems reasonable. Yeah. Right? And, you know, to to meet that high bar, we use rainforest to test rainforest. So being able to eat your own dog food on a daily basis is a really, really good position to be in. It allows you to identify pain points in the user experience before your users do.
AJ Funk [00:03:30]:
Hopefully, that's not always true, but we do our best. And it also means we spend a lot of time thinking about the best way to do QA, both from a kind of philosophical standpoint and from a practical implementation standpoint, you know, reality, in other words. So this experience obviously helps guide our product road map, but it's also led me to develop a lot of strong opinions on how product and engineering teams should shape their testing strategies, what kind of tools they should be using, and how they can ship code quickly and continuously without having to sacrifice quality.
Will Button [00:04:08]:
For sure. For me, that seems like one of those rabbit holes where it's really hard to find the balance: what's the minimum level of testing that you should be doing, and then what's an adequate level to get to where you're actually still getting good use for your time? You know? Because, obviously, you can throw something at it that tests every possible combination or path through your application that could ever be taken, and you're gonna reach a point of diminishing returns there. So how do you figure out what that right spot is?
AJ Funk [00:04:45]:
Yeah. Absolutely. The myth of 100% test coverage. Right? Yeah. That we all strive for. I think they teach you very early on that 100% test coverage is what we should be doing: every time we push code, we should make sure everything is covered. The reality is that's just not possible. Right? What does that even mean? Does that mean every line of code is covered? Does that mean every possible edge case is covered? Like, you're never gonna get all of those things.
AJ Funk [00:05:13]:
So the trick is finding that balance. Right? We wanna make sure that we have confidence in the thing that we're shipping, that both the thing we're shipping works and that we're not breaking anything else, but we also wanna be able to move quickly. We don't want this to get in our way. And so the main way we go about that is to really think about what your testing strategy is. Right? What are you testing? What is your bar for quality? Because in reality, sometimes we're okay if some things break, especially when we're in the early stages of prototyping something. It's a beta feature, etcetera. And what are the layers of your testing? Right? So we can think about our testing strategy as layers, typically pictured not just as three layers but as a pyramid. Right? So the foundation of your testing strategy is your unit tests.
AJ Funk [00:06:17]:
This is something we're all familiar with. We're using code to test code in small chunks. Right? What a unit is is totally up to you. We can define it as a single function or some class or component, whatever. The top of our pyramid is our end to end or UI tests. That's testing our application in a kinda real world scenario. As we go up the pyramid, these tests are more comprehensive. We can rely on them more, but they're slower.
AJ Funk [00:06:49]:
They're more expensive. Right? So at the base of our pyramid, we have all of these unit tests that we could run constantly. So at Rainforest, we run these every single time you push code. It just runs them because it's cheap. We don't really care. If you're lazy like me, I like to just push my code up and, oh, something broke. I didn't notice, because I don't wanna run the whole test suite all the time on my local machine. But as you move up that pyramid from unit tests, the middle of that pyramid would be our integration tests.
AJ Funk [00:07:19]:
So, basically, testing multiple chunks of these units and how they interact with each other. That could be maybe one of your microservices talking to another microservice or something like that. As we get to the top of the pyramid and we start running these end to end tests, that's where I think strategy becomes much more important, because, like I mentioned, they're slower. So we actually care about when we run these. We need to be more strategic about how often we run them, and they are more expensive. So we don't wanna just blow a whole bunch of money on them. Right? Right. So finding the balance between those things is the real trick.
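The bottom two pyramid layers AJ describes can be sketched in a few lines. This is a minimal, hypothetical example: the names (`apply_discount`, `CartService`, `FakeInventory`) are invented for illustration and are not from Rainforest's codebase.

```python
# Sketch of the bottom two pyramid layers, using a hypothetical cart module.

def apply_discount(price: float, percent: float) -> float:
    """Pure function: an easy target for a fast, cheap unit test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class FakeInventory:
    """Stands in for a second service so the integration test stays in-process."""
    def __init__(self, stock):
        self.stock = stock
    def reserve(self, sku: str) -> bool:
        if self.stock.get(sku, 0) > 0:
            self.stock[sku] -= 1
            return True
        return False

class CartService:
    def __init__(self, inventory: FakeInventory):
        self.inventory = inventory
        self.items = []
    def add(self, sku: str, price: float, discount: float = 0.0) -> bool:
        # Integration point: pricing logic plus the inventory collaborator.
        if self.inventory.reserve(sku):
            self.items.append((sku, apply_discount(price, discount)))
            return True
        return False

# Unit layer: one function, one assertion, cheap enough to run on every push.
assert apply_discount(100.0, 20) == 80.0

# Integration layer: two units talking to each other.
cart = CartService(FakeInventory({"sku-1": 1}))
assert cart.add("sku-1", 100.0, discount=20) is True
assert cart.add("sku-1", 100.0) is False  # stock exhausted
assert cart.items == [("sku-1", 80.0)]
```

The end-to-end layer at the top of the pyramid would exercise the same flow through a real browser and UI, which is exactly the slower, more expensive territory the conversation turns to next.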
Matteo Collina [00:07:57]:
I like that you pulled out the test pyramid and not one of the other newer hipster trends, the test diamond or test Klein bottle, and gone with the tried and true test pyramid. I mean, at least that's how I've always seen it. And, you know, it's also really interesting that you bring up that 100% test coverage is not possible. At least from anecdotal experience for me, I found it's almost like a Pareto distribution and follows the 80/20 rule, where if you wanted to have 100% test coverage, it would actually require an infinite amount of time.
AJ Funk [00:08:34]:
Absolutely. Yeah. We certainly strive to have all that test coverage, but I think the reality of 100% test coverage is more along the lines of how you define your user workflows. Right? So the typical example is your login flow. What are the main outcomes of the login? It's successful login, failed login, maybe a forgot-your-password flow. And do you have test coverage, end to end test coverage, on that flow? If so, we could usually consider that covered. Do I have a unit test that covers every combination of things that I could type into that box? Absolutely not. But having some sort of test coverage on it to make sure that it actually loads in some kind of real world scenario gives me much more confidence.
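The outcome-per-flow idea can be sketched as one test case per user-visible outcome rather than per possible input. The `login` stub and its rules below are invented for illustration; a real flow would go through the UI.

```python
# Hedged sketch: cover the named outcomes of a login flow, not every input.

USERS = {"ada@example.com": "correct-horse"}
RESET_REQUESTED = set()

def login(email, password, forgot=False):
    """Toy login with the three outcomes discussed: ok, failure, reset."""
    if forgot:
        RESET_REQUESTED.add(email)
        return "reset-email-sent"
    if USERS.get(email) == password:
        return "ok"
    return "invalid-credentials"

# One case per user-visible outcome: the flow is "covered" in AJ's sense.
flow_cases = [
    (("ada@example.com", "correct-horse"), {}, "ok"),
    (("ada@example.com", "wrong"), {}, "invalid-credentials"),
    (("ada@example.com", None), {"forgot": True}, "reset-email-sent"),
]
for args, kwargs, expected in flow_cases:
    assert login(*args, **kwargs) == expected
assert "ada@example.com" in RESET_REQUESTED
```

Three cases instead of an unbounded input space: the point is that coverage is defined over outcomes of the workflow, not over everything a user could type into the box.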
Will Button [00:09:30]:
With a lot of things like logins, and there's so many areas these days in developing software, you're using SaaS products as the mechanism for implementing that. What's your approach for dealing with that external dependency? Because you can mock it, or you can try to simulate it, or you can actually call it. What do you think about those?
Matteo Collina [00:09:58]:
I feel like there's a job at MeWell. I see you, I see you. You're coming there.
Will Button [00:10:04]:
No. No. I'm trying to Yeah. I got nothing.
Jillian [00:10:07]:
I need
Will Button [00:10:08]:
Hey, Jay. Back to you.
AJ Funk [00:10:12]:
Yeah. So I think what you're asking is, when we run any kind of tests, we'll stick with unit tests for the time being, there's a context that they run inside of. Right? And if we wanted to test our app as close as possible to reality, we would test it in production, on an actual machine with an actual human, which some people do. Right? There's obviously downsides to that. Testing in production, we probably don't have to get into why that's not a great idea. And
Jillian [00:10:50]:
The entire video game industry is wrong, then? Like... oh, no. Some are interested in testing things.
AJ Funk [00:10:56]:
Touche. And, yes, they are wrong.
Matteo Collina [00:11:00]:
I mean, I'm not even sure that's true anymore. Like, they don't even release games. Right? It's just, like, content that you you click download on and you pay for, and then the game comes later or something. Like, I I think that's what the game industry has gone towards.
Jillian [00:11:11]:
That does tend to be how it goes, but I I still feel like they have the users doing an awful lot of acceptance testing in the video game industry. And I'm, like, too cheap for this nonsense. But, anyways, I'm trying not to derail the entire conversation today, so we can we can skip right on over that.
AJ Funk [00:11:28]:
No. I totally agree. And even when you do get a game that's incomplete and buggy, you play it for an hour and you go, I'm never playing this again. So I'm a late adopter when it comes to these things. I wait till the Internet stops screaming about it, and then I start downloading things. But, yeah, obviously, we don't wanna test our applications in production because we're smarter than that, and we have the ability to test these things in different environments. When we are running things like unit tests, we're kinda stuck inside of this artificial context. Right? If you're just running code to test code, it's inside of that specific code environment.
AJ Funk [00:12:15]:
It's giving us inputs and outputs. Even as we go up the chain to some things that call themselves end to end tests, which I kinda disagree with, which would be things like DOM based testing, you're still stuck inside some kind of context. Right? So the DOM, if you're not familiar, is the document object model, and it's essentially the interface that we have with the browser. So it's how our JavaScript code talks to the browser, how we manipulate things, how we read things from the browser. And so the important nuance here is that our code interacts with the DOM. A human being doesn't interact with the DOM. Right? When you go click a button, you don't go talk to the DOM.
AJ Funk [00:12:59]:
You interact with the user interface. So we want to get our tests as close to actual end to end tests as possible. Right? A human looking at the screen, a human interacting with the screen. And if we're not in that production environment, every step we take away from that gets us further from reality. Right? It gives us this false sense of security sometimes. There's a really common example with DOM based tools: it might be, click this button. Did it work? Right? Well, just because you can interact with that button through the DOM doesn't mean your user can actually interact with that button. Right? There might be, you know, some kind of overlay over my button.
AJ Funk [00:13:46]:
The button might be off of the screen. But when I ask the DOM, can I click the button? It says, yeah. We clicked it. It worked. Ship the code, and now no one can log in to your app because no one can click the button. Right? And so we do as much as we can to get to that real world scenario: creating testing and staging environments that mirror production as much as possible, and loading these things into virtual machines with operating systems instead of a headless browser, which is, you know, basically a browser with no UI. Interacting with it in a way that a human doesn't interact with it just gets us further away from reality.
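The overlay and off-screen pitfall AJ describes is purely geometric, so it can be sketched without a browser at all. The rectangles and the `human_clickable` helper below are invented for illustration; real tools perform checks like this before treating a click as valid.

```python
# Sketch of why "the DOM clicked it" can mislead: a geometric check of
# whether a button is actually visible and unobstructed.
# Rectangles are (x, y, width, height); all numbers are invented.

def inside(rect, container):
    x, y, w, h = rect
    cx, cy, cw, ch = container
    return x >= cx and y >= cy and x + w <= cx + cw and y + h <= cy + ch

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def human_clickable(button, viewport, overlays):
    """True only if the button is on screen and nothing covers it."""
    return inside(button, viewport) and not any(overlaps(button, o) for o in overlays)

viewport = (0, 0, 1280, 720)
login_btn = (600, 400, 120, 40)

# A DOM-level click would "succeed" in all three cases; only the first is real.
assert human_clickable(login_btn, viewport, overlays=[]) is True
assert human_clickable(login_btn, viewport, overlays=[(0, 0, 1280, 720)]) is False  # modal overlay
assert human_clickable((600, 900, 120, 40), viewport, overlays=[]) is False         # below the fold
```

The second and third cases are exactly the "shipped it, no one can log in" scenario: the element exists in the DOM, but no human can reach it.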
Matteo Collina [00:14:25]:
I mean, it's interesting you bring that up. And now I'm intrigued if you maybe wanna roast what we've been telling our customers. So, obviously, we provide a third party product for our customers for login and access control. So, you know, we're providing their auth needs there. And I think the biggest advice that we end up giving them is, like, we are already testing that thing. Like, you know, don't focus on this. You're wasting your time duplicating our testing. If you felt the need to do that, it's almost like you don't trust us with our product.
Matteo Collina [00:14:59]:
And then you probably should question why you're using that solution in the first place. If you get to that point, you know, that's actually a conversation more than it's a technical solution. Mhmm. However, we do find some customers still have a need to go a little bit further. And the thing that we've done, I don't know if this is the right answer, but given it is a SaaS product, we provide a clone of our service as a container that can run. It is trimmed down, only has minor features, but it makes the flows that you're going to test, or want to actually verify, available without having to go through all the complexity that the service actually provides.
Jillian [00:15:37]:
Mhmm.
AJ Funk [00:15:38]:
Yeah. I mean, I think that's a good compromise. Right? So in this whole strategy of finding balance between our confidence and our velocity in shipping, the reality is our testing environments are not gonna match our production environments all the time. Right? And a lot of times we're constrained by resources. So I think in a situation like that, that totally makes sense. You know, some kind of pared down version of your production application. I think the important thing is how you're testing it. Right? If we are able to just kinda strip down our product and test the bare bones version of it, as long as we are in an environment, right, where we're clicking on it through, I don't know, a web browser or whatever it might be, versus just, you know, running some script in the background, I think that's a really good balance between those two things.
AJ Funk [00:16:29]:
The key here is that we're still doing end to end testing. Right? I imagine, you know, someone's typing in the box, a button's being clicked, there's an HTTP request or whatever it might be to an API that reads from a database, and we're checking that all of these things actually work together. So, yeah, I think that's a good compromise.
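The compromise discussed here, a trimmed-down stand-in for the third-party dependency that still lets the whole flow run, can be sketched in-process. The `FakeAuthService` API below is invented for illustration and is not any vendor's real interface; a real setup would run the trimmed clone as a container, as Matteo describes.

```python
# Sketch: a trimmed-down, in-process stand-in for a third-party auth service,
# so the full flow (request -> auth -> response) still runs end to end.

class FakeAuthService:
    """Supports only the flows under test: issue a token, verify a token."""
    def __init__(self):
        self._tokens = {}

    def issue(self, user: str) -> str:
        token = f"tok-{len(self._tokens)}-{user}"
        self._tokens[token] = user
        return token

    def verify(self, token: str):
        return self._tokens.get(token)  # None if unknown

def checkout(auth, token, item):
    """App endpoint under test: depends on the auth collaborator."""
    user = auth.verify(token)
    if user is None:
        return "401 unauthorized"
    return f"200 order placed for {user}: {item}"

auth = FakeAuthService()
token = auth.issue("ada")
assert checkout(auth, token, "book") == "200 order placed for ada: book"
assert checkout(auth, "bogus", "book") == "401 unauthorized"
```

The flow still crosses the service boundary, so the test exercises the integration, without dragging the full production complexity of the real auth provider into the test environment.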
Matteo Collina [00:16:49]:
I think one of the things that actually comes up a lot, maybe just on a slight tangent, is people are so focused on end to end testing, they never stop to question, should we, for that particular flow? Is that where the value is for our company? Is it really where we should put a lot of resources in? Do you find that those that you're working with, or your customers, may or may not know where the highest value testing should be done, and then that's a conversation, or maybe it's something that your tool provides?
AJ Funk [00:17:21]:
Yeah. Absolutely. And I think the trick for that is doing it early. Right? So if you have a large application and code base and you have not written any end to end tests, it's hard to determine where to start. Right? Versus if you start early on, it kinda writes itself. Right? The first thing is your login. You have some login coverage. Determining where the highest value is is certainly up to each team, usually the product team.
AJ Funk [00:17:53]:
Right? What do we care most about not breaking? And can we create some kind of smoke test that spans all of these? Right? So each one of these tests has a certain level of granularity to it. A a good smoke test might be, can I log in to my application? Can I create a thing? Can I delete a thing? And things just, like, generally
Matteo Collina [00:18:15]:
work.
AJ Funk [00:18:15]:
Those initially are your highest value tests because I know that my app actually loads in reality, right, regardless of what my unit tests say. After that, it's usually defined by what those user flows are. Right? So as you're scoping something out with your product team, here's this new feature that we're building, it's really important to include that in your planning process. Right? Write tests for it. These are things that we usually, as developers, kinda bake into our estimates. Right? I have to write unit tests for this. At Rainforest, we've shifted more towards baking in Rainforest tests for these things.
AJ Funk [00:18:55]:
We obviously have unit tests, but getting the coverage at the time of implementation, or the time of release, or whatever that might be is usually your best bet. If I have a large application and I don't have that coverage yet, it is certainly a balancing act figuring out what should be tested first. Right? So I would certainly start with those kinds of smoke tests. And then your highest used features are usually a really good place to start. The pitfall that you run into is putting too much nuance in all of these tests. Right? What if they click into this and click out of that and then open this menu and whatever? Keeping them very coherent and legible and focused on the thing that they're testing is the important piece of having efficient tests that you can maintain over time.
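The smoke-test idea from a couple of turns back ("can I log in, create a thing, delete a thing?") can be sketched as a short ordered list of coarse checks. Everything here, the `smoke_suite` runner and the fake app dictionary, is an invented illustration of the shape, not Rainforest's implementation.

```python
# Sketch of a minimal smoke suite: a handful of coarse "does the app basically
# work?" checks, run in order, stopping at the first failure.

def smoke_suite(app, checks):
    """Run coarse checks in order; report the first failure, if any."""
    for name, check in checks:
        if not check(app):
            return f"FAIL: {name}"
    return "PASS"

fake_app = {"up": True, "things": []}

checks = [
    ("app loads", lambda a: a["up"]),
    ("can create a thing", lambda a: a["things"].append("t1") is None and "t1" in a["things"]),
    ("can delete a thing", lambda a: a["things"].remove("t1") is None and "t1" not in a["things"]),
]

assert smoke_suite(fake_app, checks) == "PASS"

# A broken deployment fails fast at the first, coarsest check.
broken_app = {"up": False, "things": []}
assert smoke_suite(broken_app, checks) == "FAIL: app loads"
```

Note the checks stay deliberately coarse: no nuance about which menus were opened in between, which keeps the suite legible and maintainable, the point AJ makes above.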
Matteo Collina [00:19:46]:
I I saw Will smirking there, and I I know he's just entered into a a new glorious position at his organization. So maybe he has some unique insight that he's, interested in, blessing us with.
Will Button [00:19:58]:
No. I was curious, because this is an opportunity for me to throw in a buzzword that's trending. And so once the episode is transcribed, we'll just go viral on that. So does AI play a role in helping figure out that type of user flow in the different, like, odd places you can end up?
AJ Funk [00:20:20]:
Good question. Not to my knowledge yet. You know, AI is really good at some things and really bad at some things, and we haven't quite figured out how to give it enough context to understand how it should go about testing your app. Right? We do have some really cool AI tools at Rainforest. They don't determine what your test coverage should be. Rather, that's kinda left up to you, and then it helps you write the test. So what we have is, you enter a prompt. You know, it could be something pretty generic.
AJ Funk [00:20:56]:
Log in and add an item to the cart and check out, something like that. And it will generate your Rainforest steps for you. So during execution, AI is left out of it. Right? It does the initial generation, and then we just execute things normally. And then we have some self healing functionality. So if it fails on something that we generated, we're gonna try and regenerate those steps. And what's really nice about that is, since Rainforest is a visual tool, we identify things on the screen based on screenshots.
AJ Funk [00:21:31]:
Right? It's possible for you to make slight visual changes, and now that image doesn't quite match up. Your test might fail. You don't wanna have to go back in and retake all of those screenshots. But since it's generated by AI, it could go back, follow the same steps, and realize, this is where the button is now. It would be really cool if it could kinda add that test coverage for you or, like, tell you what you should be testing. We've poked at that a few times, and it's honestly just really dumb in that aspect and doesn't really give you anything useful. It's like, yeah. Go test all the things and make sure things work.
AJ Funk [00:22:07]:
And it's like, cool. Yeah. I I knew that. Thank you.
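The screenshot-matching idea behind those self-healing steps can be sketched with a tolerance threshold: small visual drift still matches, a big change triggers regeneration. The toy grayscale grids and the 0.9 threshold below are invented illustrations, not Rainforest's actual matching algorithm.

```python
# Sketch of visual matching with a tolerance, the idea behind screenshot-based
# steps surviving slight UI changes. "Screenshots" are toy grayscale grids.

def similarity(a, b):
    """Fraction of pixels whose values are close (within 10 gray levels)."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    close = sum(abs(x - y) <= 10 for x, y in zip(flat_a, flat_b))
    return close / len(flat_a)

def match_step(target, screenshot, threshold=0.9):
    """Pass if the region still looks like the saved target; otherwise signal
    that the step needs regeneration (the 'self-healing' path)."""
    return "pass" if similarity(target, screenshot) >= threshold else "regenerate"

saved = [[200, 200], [50, 50]]
slightly_restyled = [[205, 198], [55, 48]]   # small visual drift: still matches
redesigned = [[10, 10], [240, 240]]          # big change: triggers regeneration

assert match_step(saved, slightly_restyled) == "pass"
assert match_step(saved, redesigned) == "regenerate"
```

On the "regenerate" branch, a generated test can replay its original prompt and re-anchor the step to the button's new appearance instead of failing permanently.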
Will Button [00:22:10]:
Yeah. So we
AJ Funk [00:22:12]:
maybe as they get started,
Matteo Collina [00:22:14]:
I feel like part of the answer is also the domain you're in. I know something we haven't talked about is, like, really at the top of the test pyramid is exploratory testing, where, like, you add your creative human instincts to where bugs could potentially pop up while you're looking at an interface or API. And I don't think... yeah. If we're doing anything wrong in the creation of AI or LLM models, it's removing the creativity from them, and I think that harms us here. But there has been one area, especially within things like protocol creation or SDKs, interfaces for the services, and I think the keyword is fuzzing. So trying an LLM, any sort of AI you can spam, with almost a more intelligent brute force strategy about what sorts of inputs tend to break your interface or your service or your product, and then use that as a potential test that you can commit longer term. And, again, it's not for everything. Like, I don't think it really works so much in a UI world, but definitely, depending on what your service or interface is doing, stuff in the crypto space, cryptography, not blockchain, just to be clear.
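The fuzzing idea mentioned here, semi-random inputs thrown at an interface, with crashers kept as long-term regression tests, can be sketched with plain random strings (no LLM needed for the shape of it). The deliberately fragile `parse_version` function is invented for illustration.

```python
# Sketch of fuzzing: spam an interface with semi-random inputs and collect
# any that crash it; each crasher becomes a committed regression case.

import random

def parse_version(s: str) -> tuple:
    # Deliberately fragile: blows up on empty strings and non-numeric parts.
    return tuple(int(part) for part in s.split("."))

def fuzz(fn, trials=500, seed=42):
    rng = random.Random(seed)           # seeded for reproducible runs
    alphabet = "0123456789.xX-"
    crashers = []
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 6)))
        try:
            fn(candidate)
        except Exception:
            crashers.append(candidate)
    return crashers

found = fuzz(parse_version)
assert found, "fuzzing should surface at least one crashing input"
assert all(isinstance(c, str) for c in found)
```

An LLM-driven fuzzer would replace the random `candidate` generator with something that proposes more targeted inputs (malformed JWTs, boundary-length keys, and so on), but the loop, catch, and commit-the-crasher structure stays the same.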
Will Button [00:23:30]:
I I feel like that was a dig there, Warren. What are you trying to say?
Matteo Collina [00:23:34]:
You know, I I it's not the sort of, thing I wanna bring up on an episode, Will. You don't
Jillian [00:23:41]:
want a record.
Matteo Collina [00:23:44]:
Yeah. Definitely not on record. We do cryptography because we're really into security, deep in there. And we're not building our own crypto, but we're very heavy users of it. Everything from JWT creation to every single different kind of algorithm strategy, we end up utilizing these. And so finding where we're not using libraries effectively is certainly an area that we've potentially looked into. Actually, according to our company bylaws, we're not allowed to do anything regarding cryptocurrency. Like, it's actually not allowed by the country of Switzerland for us to get involved in any way.
Matteo Collina [00:24:18]:
We can't accept payments. We can't pay people in crypto. We can't even think about, consulting for companies that wanna do something crypto related.
Will Button [00:24:29]:
That's discrimination.
Jillian [00:24:33]:
I work in HPC, and I'm pretty sure, like, some of the admins will just kind of use a little bit of the compute power from the different clusters that they have to be running different crypto schemes. But I haven't, like, 100% caught anybody, but I'm just... I'm waiting for the day. I'm waiting for it. Not to tell on them. I just wanna know, because I'm super nosy and, like, I just like knowing things like this.
Matteo Collina [00:24:53]:
Not gonna tell on them as long as they cut
Will Button [00:24:55]:
you in?
Jillian [00:24:56]:
Yeah. That's right. That's the scheme. It's like when my, you know, when my website got hacked by that Chinese jewelry store, and I was like, guys, like, if you would if you would just give me a cut, this would be fine. It was nice jewelry. I liked it.
Matteo Collina [00:25:08]:
I mean, I think that's... I think that really is expert advice from our resident ML expert here, because
Jillian [00:25:15]:
the on the team? Like, just
Matteo Collina [00:25:17]:
make sure that... Yeah. No. Because AWS just came out and said that the strategy of sharing reservations across customer AWS accounts, like, if you're a consultant that does bundling for instance reservations or compute reservations, you no longer can pass along that savings to the customer. I mean, what are they gonna do with all this excess capacity now other than some good old fashioned Bitcoin mining?
Jillian [00:25:43]:
I don't know. I don't know. I mean, we could be making drugs for autoimmune diseases and cancer, or you could be making some coal fired cash. I don't know. No, I mean, we could also do both. It's not an either or. There's plenty of compute power. These guys, you know, they're spinning up plenty of AWS.
Jillian [00:26:07]:
They're not gonna fault us if that last 10% is used for crypto.
Will Button [00:26:14]:
So when this episode launches and we all get blocked from our respective AWS accounts, we can just reflect on this moment fondly.
Matteo Collina [00:26:22]:
So, I mean So this
Jillian [00:26:24]:
is our moment in the sun.
Matteo Collina [00:26:25]:
For the record, AWS isn't gonna block you, because the ROI on utilizing cloud resources to mine crypto is so low that you're pretty much just paying AWS. But it is a good indication that there is malicious activity happening on your account, so it is something that they will for sure investigate. And that, I think, is as much of a tangent as I want to go down for today. Right?
Jillian [00:26:52]:
Yeah. I think we should talk about the low code with Rainforest. I love low code stuff. How did how did this come about, and, like, how does it work? I just I wanna know all about it.
AJ Funk [00:27:02]:
Yeah. For sure. So when I first joined Rainforest over seven years ago, our model was a bit different. We had a bunch of human testers. It was kind of the gig, Uber model: I have something I wanna test, here are my test cases, they're all written in plain English, and we'll provide a bunch of humans for you to go test your application. Right? Including some exploratory stuff like you mentioned.
AJ Funk [00:27:27]:
Go click all over this page and try and find problems with it. And that worked really well. It was true end to end testing. We load your app in a virtual machine inside of a web browser. They're actually clicking the buttons and confirming those things on the screen. But what we found is that humans are inefficient and expensive, as we all know. That's why we have automation. Right? And so we kinda shifted over to automation, but we wanted to do something a bit different from what everyone else was doing, which is these code based tools, DOM based interactions. Instead, we built it all on the visual layer.
AJ Funk [00:28:08]:
So the way it works is you go in, you load your app, and you essentially just, like, take screenshots of things. Right? Click on this, type into this field. I can give it an AI prompt and say, you know, log in and check out in the cart. And then when you execute things, it loads into the same environment. Right? You have your staging environment. Hopefully, you have some seed data with login information. You can load that all into Rainforest. It goes in and runs this whole workflow for you.
AJ Funk [00:28:45]:
The output of it is a video of the thing being tested, results on each step, things like HTTP logs, JavaScript console logs, all the information that you need to actually debug things when something breaks, instead of it just saying, you know, like a unit test when it's like, failure: one does not equal two. And so by doing things at that visual level, it offers a lot of flexibility. The first thing is that we're not stuck inside of the browser. Right? We do primarily focus on web based testing, but that does not mean you're stuck inside of the browser. It means you can do things like install a Chrome extension. Right? Open another tab in your browser, install a Chrome extension, interact with that extension, because while you're still inside the browser, you're outside of the scope of that web page where you usually are interacting just through the DOM. You can, you know, install some type of desktop application and test it through there. Because since we're working at the visual layer, it doesn't care what you're testing.
AJ Funk [00:29:52]:
It doesn't care what your tech stack is. It just cares that it loads in the machine. And it's also, like, more flexible and robust in avoiding flakiness and brittleness to small changes. We have fallback methods. As much as I've been kinda hammering that testing with the DOM is not a great idea, we do offer DOM fallbacks, because sometimes it makes sense. Sometimes I don't care about the visual appearance of the button, and all I care about is that there's a button there. Right? In reality, there are variables that we can't control. Right? A very common scenario is, my marketing team is running experiments.
AJ Funk [00:30:36]:
Every time I load this page, the button says something different. It looks different. And so we don't wanna tie the visual appearance to the pass fail result of this test, so I'll use something else. I'll use a DOM selector. We also have, like, AI search. You could say something like, the login button at the bottom of the page. And so the important point here is you don't write any code whatsoever. We have an intuitive UI that you do all of this through, which means you don't need skilled engineers to do it.
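The fallback chain AJ is describing (try the visual match first, then a DOM selector, then an AI style search) can be sketched as a simple loop over locator strategies. This is a hypothetical illustration of the idea, not Rainforest's actual implementation; the strategy names and shapes are invented:

```javascript
// Hypothetical sketch of a fallback locator chain (not Rainforest's real API).
// Each strategy tries to find a target and returns null if it can't.
function findTarget(strategies) {
  for (const strategy of strategies) {
    const match = strategy.locate();
    if (match !== null) {
      return { match, usedStrategy: strategy.name };
    }
  }
  return null; // every strategy failed, so the test step fails
}

// Example: the visual match fails (marketing changed the button's look),
// but the DOM fallback still finds it.
const result = findTarget([
  { name: 'visual',    locate: () => null },                   // pixel match missed
  { name: 'dom',       locate: () => '#login-button' },        // selector fallback hit
  { name: 'ai-search', locate: () => 'login button at bottom' },
]);
console.log(result); // { match: '#login-button', usedStrategy: 'dom' }
```

The point of ordering the chain this way is that the strictest check runs first, and looser checks only kick in when the visual layer is intentionally allowed to vary.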
AJ Funk [00:31:07]:
Right? A lot of teams have QA engineers whose job is to just write tests all the time. On other teams, the engineer is responsible for writing these tests, but they need very specific domain knowledge. Right? I need to know about the thing that I'm testing. Like, from a product standpoint, what does this thing do? I need to understand the code. I need to know how to write these tests. With a no code solution, anybody could do this. Right? It's up to your team who owns quality, who owns these tests. For us, it is usually the engineer that is shipping the code.
AJ Funk [00:31:49]:
We write the Rainforest tests with it, or our product and design team owns it because, like I was saying, they're very tied to our user workflows. Right? They're like, this is how we designed this thing. Engineers are gonna build it. And then any of us that have knowledge about how this user flow is supposed to work can own this test. So it makes it much easier to both write and maintain your tests over time. And then, you know, if that person that has all of that domain knowledge on how it works leaves your company, you just need someone who knows how the app works and they can update your test suite.
Matteo Collina [00:32:21]:
Now I feel like one of the biggest mistakes that I keep seeing over the course of my career is, as companies grow, they tend to have more, allegedly, more software, more code, which may or may not end up in a giant ball of mud, or an extensive number of quote, unquote, microservices that communicate and really depend on each other. And there was always this challenge by someone who wanted to have a test that required somehow interacting with all of these components. And they never really could understand that one of the whole points of microservices was to isolate testing. But I think we live in the reality, which is there are some companies that do have a giant ball of mud, that have thousands of binaries that have to be installed and running on servers. Is there a strategy? I don't mean to, you know, pick on your company. I don't think there is a strategy. I think the strategy is write microservices. But I can imagine that, you know, as a SaaS company, the last thing we wanna tell our customers is, yeah, have you tried not having that problem? Have you tried to do
Jillian [00:33:32]:
package manager? That tends to be the solution that I see. That one is everybody's favorite.
Matteo Collina [00:33:37]:
Yeah. Distributed monolith always works. Publish all your binaries that remotely depend on each other to a third party solution and then pull those out at runtime. Always works. Best solution ever. Maybe, AJ, you have some insight here on either something that works, or something that works with Rainforest QA, to deal with those situations. Or maybe you just, you know, it's not something that is handled today.
AJ Funk [00:33:59]:
Yeah. For sure. We do have a bunch of different microservices running. And I think I'm gonna refer back to the testing pyramid. Right? We test each one of those microservices in isolation. Absolutely. Then maybe, as we go up to the next layer of our integration tests, we test some interactions between them. Right? The kind of core, you know, handshake interactions, whatever is the main functionality of these two microservices talking to each other.
AJ Funk [00:34:28]:
Maybe we have some tests there. But at the end of the day, these end to end tests are comprehensive. Right? If anything in that microservice architecture is failing, presumably, my test is going to fail. And at the end of the day, all we really care about, in theory, is what the user gets when they're interacting with it. So if I'm just clicking a button, maybe there's a thousand microservices that are involved in this, and maybe I'm not directly testing each one of those. But by implementing it as an end to end test, I am very confident that they're all working because my test passed. And so being smart about how you implement each one of those layers, in an efficient way. Right? Lots of unit tests on each microservice and then this overarching test that just makes sure everything is working together is usually the way to go about this.
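The pyramid AJ describes, many isolated unit tests per microservice plus one comprehensive end to end check on top, can be sketched in miniature. The two "services" below are stand-in functions, purely for illustration:

```javascript
// Two toy "microservices" as plain functions (stand-ins for real services).
const authService = {
  login: (user, pass) => (user === 'aj' && pass === 'secret' ? { token: 'abc' } : null),
};
const cartService = {
  checkout: (token, items) =>
    token === 'abc' ? { ok: true, total: items.reduce((s, i) => s + i.price, 0) } : { ok: false },
};

// Bottom of the pyramid: unit tests exercise each service in isolation.
const unitResults = [
  authService.login('aj', 'secret') !== null,     // auth works on its own
  authService.login('aj', 'wrong') === null,      // auth rejects bad input
  cartService.checkout('abc', [{ price: 5 }]).ok, // cart works on its own
];

// Top of the pyramid: one end to end flow through every service at once.
// If anything in the chain breaks, this single check fails.
function endToEndCheckout() {
  const session = authService.login('aj', 'secret');
  if (!session) return false;
  return cartService.checkout(session.token, [{ price: 5 }, { price: 7 }]).ok;
}

console.log(unitResults.every(Boolean) && endToEndCheckout()); // true
```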
Matteo Collina [00:35:19]:
I mean, maybe it's a technical implementation question. Like, where is the environment that the Rainforest tests are actually executing in? Is this some sort of binary or CLI that's run on the client side, or are they sharing with you a set of microservices with deployment instructions so that you can run them within your own infrastructure?
AJ Funk [00:35:39]:
Sure. Usually, our requirement is that you need to be able to access it via a web URL. So the kind of standard way that a Rainforest test is run is we provide you a VM, and that VM has a browser on it, so I can specify Chrome on Windows 11. And the first step of your test is going to be a navigation. Navigate to this URL. This is where my web app lives. There are some different use cases where you can absolutely go download a binary and install it and do whatever you want with it. Our out of the box functionality is to primarily test those web apps.
AJ Funk [00:36:19]:
So that's where we focus, but you certainly have the flexibility to do whatever you want with those VMs.
Matteo Collina [00:36:25]:
No, I mean, I think that approach is genius. Basically, it's out of scope for setting up the environment unless you want it to be in scope. In which case, you know, then it's a virtual machine. Go to town on how you wanna deal with it there.
AJ Funk [00:36:37]:
Right. And by kind of forcing people to give it a public URL, we're nudging them towards good practices, right,
Matteo Collina [00:36:46]:
which
AJ Funk [00:36:46]:
is set up a staging environment, a QA environment, and make it mirror production as much as possible, which includes being able to navigate to it via a URL. And these are small things that we see with some of our new clients. Like, well, I don't have a staging environment. And, sure, I guess you could load your production environment in there, but let's show you how to test this properly and not shoot yourself in the foot.
Matteo Collina [00:37:11]:
Look. I love that you're saying that. The number one feedback I've always seen here is, we can't expose our nonproduction environment publicly. Like, people can't know what we're currently working on. They will use that information maliciously against our company in some way.
AJ Funk [00:37:26]:
Yeah. Like, what are they gonna do with it, though? You know? I have seen some interesting mistakes. Like, you know, maybe we're cloning our production database and not sanitizing sensitive information from it or something like that. Then, yes, absolutely, you're doing some bad things.
Matteo Collina [00:37:42]:
Yeah.
AJ Funk [00:37:43]:
But there are certainly ways to do this. And, I'm of the opinion, who cares if people are in your testing environment? Like, worst case, they blow up your testing environment and whatever.
Will Button [00:37:55]:
And in that case, you figured out how they could blow up your production environment without losing prod.
AJ Funk [00:38:01]:
Exactly.
Will Button [00:38:02]:
Yeah. I think that's, you know, like, part of the undocumented learning curve of working in this industry. Mhmm. You know, because people who are early in their careers think things like, oh, I shouldn't expose staging, until, you know, they learn that that's actually probably a good thing. But, like, nowhere in any computer science course or boot camp or anything do they cover these kinds of things. And so I think that's actually, like, a really valuable add on service that you get from Rainforest, or that you get from working with people who are more experienced: just learning that tribal knowledge that's gonna help you out later in your career so you don't have to reinvent the wheel and solve problems that we actually solved 30 years ago.
Matteo Collina [00:38:56]:
I mean, we've unfortunately had to append our documentation with, like, here explicitly are the sensitive pieces of data that are relevant to our third party application. This is sensitive. This is sensitive. This is not sensitive. Like, this is the application ID, not sensitive. Do not try to encrypt this. Do not try to secure it. Because people will try, like, how do I do this? I'm like, you can't. Like, stop it.
Matteo Collina [00:39:20]:
Like, this has to be public, on your website, in your application. People have to be able to see it. You're not gonna be able to get around that. And I feel like it's more than just experience. I feel like there's a whole level of pragmatism there, like weighing the cost versus the reward of actually trying to sanitize a piece of information. And having a third party testing service, as you mentioned, just reinforces that in a way. Like, you are going to have to expose that to be tested, which shows that it's not actually sensitive information.
AJ Funk [00:39:52]:
Yeah. Definitely. To me, it just reminds me of, like, the myth of 100% test coverage. Right? It's like, can we 100% encrypt everything? Absolutely not. Like, your end users need to see this information. I've seen some interesting attempts to kinda obfuscate those things. Like, I've seen some libraries that prevent you from opening the JavaScript console, for example. And it's like, what are you hiding in there? Maybe you should just not put sensitive things
Matteo Collina [00:40:17]:
in there.
Will Button [00:40:19]:
Here's a wild thought. How about
Matteo Collina [00:40:21]:
you just don't do that?
Jillian [00:40:22]:
Have your credentials, like, encoded in the HTML on your page. Maybe.
Matteo Collina [00:40:27]:
I mean, I can't believe you two are joking about this, honestly. Like, one of the most common attacks against
Jillian [00:40:34]:
I don't do UI, so I can joke about all this because none of this is
Matteo Collina [00:40:38]:
It's okay. It's Absolutely none of this. I'm just
Jillian [00:40:40]:
like, uh-uh.
Matteo Collina [00:40:42]:
Jillian, you'll have plenty of opportunity to get your models encoded with AWS access keys and secrets, and then you can just ask the model, hey, can I have an access key and secret that are valid, that work for any AWS account?
Jillian [00:40:55]:
I did actually accidentally push my AWS credentials to GitHub once, and, like, the amount of emails that I got from AWS was just, like, it was unreal. It was a very it was a very bad day for me. It was a very, very bad day. So I've done other stupid things, but I don't do the same stupid things. So I can sit here and be very smug about this. Like, this is.
Will Button [00:41:18]:
I've often wondered about that. Like, the speed at which AWS, and other malicious people, can identify that you committed an AWS access key to a GitHub repo.
Jillian [00:41:31]:
It was instant. It was, like, instant right then. Because as soon as I did it, I was like, oh, no, and tried to, like, make the GitHub repo private. And, nope, it was instant. They knew. They knew it was out there.
Matteo Collina [00:41:44]:
Yeah. I mean, it's bad. I think I saw a bunch of statistics on this: for AWS keys on GitHub, it's about 30 seconds to 2 minutes after being exposed in the repository, anywhere, in any format. So, like, a commit at the beginning of the repository where it was there but then got removed, so it's not in plain text anymore and you have to go back through the Git history, is still about 2 minutes. Then there's exposure on, like, Stack Overflow and places like, I don't know who uses Facebook in connection with their work, but that was another place, and then Instagram and Reddit, somewhere between 2 and 4 or 5 days, and then there's a couple other ones where it's 6 and more.
Matteo Collina [00:42:23]:
Some of those you have to thank, like, GitHub for. Like, they'll actually discover secrets there. So if you provide a third party application that has credentials, like at Authress, we have our secret keys registered there. So if one of our customers exposes keys for our service on GitHub, we'll get notified, automatically revoke those keys, and send them an email telling them that they did something that they probably did not wanna do. Multiple times, if necessary, because that's happened as well.
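The scanning described here boils down to pattern matching over pushed content. A minimal sketch, assuming only the well known `AKIA` prefix format of AWS access key IDs; real scanners such as GitHub secret scanning match many provider specific patterns, scan full git history, and often validate candidates with the provider:

```javascript
// Minimal sketch of secret scanning: match well-known key formats in text.
// AWS access key IDs have a fixed shape: "AKIA" followed by 16 chars [0-9A-Z].
const AWS_ACCESS_KEY_ID = /\bAKIA[0-9A-Z]{16}\b/g;

function scanForSecrets(text) {
  return text.match(AWS_ACCESS_KEY_ID) ?? [];
}

// AWS's own documented example key, as it might appear in a pushed diff.
const diff = `
+ const s3 = new S3Client({
+   accessKeyId: "AKIAIOSFODNN7EXAMPLE",  // oops: committed a credential
+ });
`;
console.log(scanForSecrets(diff)); // [ 'AKIAIOSFODNN7EXAMPLE' ]
```

Because the format is this rigid, a provider can scan every push nearly instantly, which is why the exposure-to-notification window Jillian experienced is measured in seconds.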
Will Button [00:42:54]:
I wanna switch topics here real quick, AJ, because you've been with Rainforest QA for over 7 years now.
AJ Funk [00:43:01]:
Wow.
Will Button [00:43:02]:
Yeah. Which is unusual in the tech industry. So I'm curious, what are the things that you look for in a job that have been fulfilled at Rainforest, that keep you there that long?
AJ Funk [00:43:20]:
Mhmm. Yeah. For sure. First and foremost is the people. Actually, when I was interviewing with Rainforest, the last person I talked to told me something like, I stay at Rainforest because of the people. And I was like, okay. That's a word.
AJ Funk [00:43:35]:
That's what everyone says.
Will Button [00:43:37]:
Oh, we're family. Right? Yeah.
AJ Funk [00:43:40]:
And then I quickly drank the Kool Aid, I think, and I found myself saying that in interviews. And I'm like, I know this sounds like a load of crap. And I think, you know, the hiring process is super, super important. Right? Both finding people that are qualified for the job, obviously, but are good culture fits. We have a pretty small team, so there's nowhere to hide. If you are not doing your job or you're not up to par, you're gonna be exposed pretty quickly, which leads us to have a very reliable team.
AJ Funk [00:44:12]:
You know? We are distributed globally, so there's a lot of hand off. You know? I'm going to sleep. You're waking up. Here's what I did. And I trust when I wake up that you're just gonna have this thing done. And if you're not one of those people, you're probably not gonna fit at Rainforest. So really, really qualified, experienced, smart, reliable people makes life so much easier. And then the other piece of it is, you know, the mission that we're on, the technology that we're building.
AJ Funk [00:44:42]:
I think, when I was first exposed to it, the first time I shipped code with Rainforest, it was kinda like, wow. How have I been shipping code before this? And the answer was, I was probably breaking things all of the time, and you don't notice until a user catches it in production 2 days later or whatever. And it's something that I'm really passionate about. I think as front end engineers, we get really caught up on the details. Right? There's all these visual layers. There's these very specific human interactions. I like building things that humans are actually interacting with. And that kind of naturally leads you to a quality assurance mindset. Right? I want everything to be perfect all of the time.
AJ Funk [00:45:26]:
How do I ensure this? And so the combination of really great people and working on something that I'm actually really passionate about, and wanting to see the rest of the world adopt these correct ways of testing things, in my opinion, of course, just makes it easy to work here. Yeah.
Will Button [00:45:43]:
Right on. That's cool. You guys obviously do a lot of front end type testing. Is there a particular industry or vertical that you have a lot of experience in, or something that has worked really well that makes a really cool story?
AJ Funk [00:46:07]:
Something I've been involved in that makes a cool story.
AJ Funk [00:46:10]:
Oh, I don't know if I have a good answer for you, honestly. Like I said, I've been at Rainforest for so long. That's all I can think about, I guess.
Will Button [00:46:17]:
Right. Do you attract, like, certain customers, with, like, financial apps or with, like, web based gaming apps? Or is there a particular vertical that tends to gravitate towards your service?
AJ Funk [00:46:31]:
I think not really. And I think that's one of the things that makes it cool: it's a very generic testing tool. Right? There are some limitations. But in general, if you could load your app on a machine, you could probably test it with Rainforest, not caring about what the tech stack is, those kinds of things. So there's a very wide range of users that we have. There's some financial companies doing, what I always find interesting, kind of, like, testing visually things like spreadsheet style apps, like their tables and things like that. And then we have some really cool, like, visual tools, like drag and drop interfaces where you're building things, like, you know, Lego style building, where there's probably, to my knowledge, not any other great way to test something like that. Like, what do you say? Are all my Legos on the page? Yeah.
AJ Funk [00:47:28]:
They are. Are they kinda oriented this way? Like, yeah, they are. But how does it look? Right? What does the user see? So the real sweet spot is really visual based applications, because I don't think there are other great solutions for them out there. But in general, being a kinda generic visual testing application, it really applies to anything.
Will Button [00:47:50]:
Right on. For a lot of web based front ends, it's all Node.js based. Do you have a favorite Node.js type tool? Are you, like, a React fan, or Next.js, or Vue? Got a personal preference?
AJ Funk [00:48:09]:
Yes. I am a React fanboy for sure. I started, you know, rewind all the way back to, like, the jQuery days and
Will Button [00:48:19]:
stuff. Right.
AJ Funk [00:48:19]:
I see that, and I have nightmares still. We actually have some of that floating around in our, like, admin applications and stuff, where it's like a Rails back end, and they're like, yeah, we got jQuery in there. And then my first thought is always, like, well, how do you test that jQuery? And the answer is, we don't. I'm going through a couple of pages, waiting for it to load, to test that and call it good. And I started with Angular back in the day.
Will Button [00:48:45]:
Oh, right on.
AJ Funk [00:48:47]:
Angular 1, anyways, was kind of the reverse of React, where, like, we're gonna put your JavaScript in your HTML. React took the approach of, we're gonna put your HTML in your JavaScript. You know, just smush it all together. And it's come a very, very long way, I must say. So, yeah, I find working with React very easy and intuitive, and it's very nice that the general JavaScript community has supported that and has pushed that forward. Because with all software and technology, but especially in front end development, it's really easy to pick the wrong tool long term. Right? I picked this thing. It's great. And then we find a better way to do it, and they just abandon the project.
AJ Funk [00:49:30]:
Right? This is true with, I mean, anything open source. And we've run into this a lot of times, right, even with open source testing tools. Actually, we had a very large Enzyme test suite on our React application, and we ran into something like this. There was a new way of testing React apps, which was React Testing Library. And Enzyme kinda said, yep, that's a better way to do it. We're gonna stop supporting this after a version like React 16 or React 17. And I'm like, well, we want to upgrade to React 17.
AJ Funk [00:50:06]:
It's like, well, none of your Enzyme tests work. Too bad for you. Yeah. Exactly. Exactly. Too bad for us. And so now you start weighing the options of, well, how do we upgrade? Right? Do we just say, let's not upgrade? Which is gonna bite you really quickly. Right? Especially at the pace all these JavaScript libraries are being updated.
AJ Funk [00:50:25]:
I want that new shiny thing. I want support for that thing, and I don't wanna be stuck in the past. The more you get stuck in the past, the harder it is to catch up with everything else. Right? And so our options were basically rewrite all these however many thousand Enzyme tests, or we could just nuke them all. Which reminds me of, like, these memes I see about, like, junior engineers and the intern, with their commit messages. I nuked all the tests because they were failing and I couldn't make them pass. Or, like, return true in all the tests because, yeah, then they all pass.
Jillian [00:51:03]:
The only reasonable way to do things.
AJ Funk [00:51:05]:
Yeah. And it sounds kinda like an overreaction, but as we started to kinda think about these testing philosophies, we're like, we have end to end test coverage on all of these things. Right? And a lot of the front end tests, even though they're unit tests, they load things in a headless browser and are kinda recreating what an end to end test does. So we chose to keep all of our actual unit tests, all the kind of business logic that didn't use Enzyme, nuked all the Enzyme tests, and just leaned into the Rainforest tests, because we know if the Rainforest tests are passing, we don't need all of these redundant tests anymore. And, instant productivity boost. Like, I don't have to maintain all of these things anymore. I don't have to upgrade them. I could just get them out of my way, and I can upgrade all my dependencies.
AJ Funk [00:51:57]:
And because we have really good end to end test coverage, we could do that confidently and know that we're not breaking things. So, yeah, choosing dependencies can be quite tricky sometimes, especially in the JavaScript world.
Matteo Collina [00:52:09]:
Did you find some places where you still wanted to reintroduce some of React Testing Library, for, I don't know, component level testing of the UI? Or have you kept with, like, 100% of the decision to not have that layer of testing anymore for the UI components, because you focus on the full picture end to end testing for the user flow, and also whatever interaction you have with the back end?
AJ Funk [00:52:37]:
Yeah. We still have some of it, and we drew the line at user interactions. Right? So React has this idea of hooks, which are basically just chunks of logic, just a function that I can use inside of a component. We stopped having any React Testing Library tests that were actual user interactions, no clicking on things, and instead, we used it to test the functionality, the logic of those hooks. So it's essentially a unit test, but it's testing a specific React thing, and it requires the testing library to do that. Everything else kinda gets hoisted up to the end to end testing level. And it's nice to just say, hey, designer. Hey, product manager.
AJ Funk [00:53:19]:
Like, you own this test coverage while I'm busy hacking on things, and I don't have to worry about this anymore.
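The line AJ draws, unit test the hook's logic and leave the clicking to the end to end layer, often amounts to extracting that logic into a pure function. A hypothetical sketch; the cart reducer is invented for illustration, and in a real app it would sit inside a custom hook via `useReducer`:

```javascript
// Hypothetical: the state logic that would live inside a custom React hook
// (say, a useCart hook), extracted as a pure reducer so it can be unit
// tested with no renderer, no DOM, and no simulated clicks.
function cartReducer(state, action) {
  switch (action.type) {
    case 'add':
      return { ...state, items: [...state.items, action.item] };
    case 'remove':
      return { ...state, items: state.items.filter((i) => i.id !== action.id) };
    case 'clear':
      return { ...state, items: [] };
    default:
      return state;
  }
}

// Unit tests cover the logic; clicking the actual "Add to cart" button
// is left to the end to end layer.
let state = { items: [] };
state = cartReducer(state, { type: 'add', item: { id: 1, name: 'widget' } });
state = cartReducer(state, { type: 'add', item: { id: 2, name: 'gadget' } });
state = cartReducer(state, { type: 'remove', id: 1 });
console.log(state.items); // [ { id: 2, name: 'gadget' } ]
```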
Will Button [00:53:24]:
So that's actually a go ahead, Jillian.
Jillian [00:53:27]:
Oh, I was just gonna say, I'm so impressed with people who can keep up with, like, the UI and JavaScript land. Because I've tried. I've tried, like, a few times, and then everything changed. I was like, alright, I'm not doing this anymore. I'm gonna go do high performance computing. That hasn't changed in, like, 30 years. It's gonna be hard.
AJ Funk [00:53:45]:
Yeah. You definitely start feeling like Sisyphus. You're just pushing that rock up the hill in this space. And every time you get to the top, someone tells you that you're actually on the wrong hill. So
Will Button [00:53:57]:
I was gonna say, that seems like a really interesting approach that I hadn't thought of when we initially started talking. But, like, you can replace having to write a lot of your tests in your React app by using Rainforest. Right? By just focusing on what the end user experience is and testing for that, you can save yourself from having to write a lot of React Testing Library tests.
Matteo Collina [00:54:23]:
So that's where the trade off is, though. Right? Because these tests then are testing more functionality at once. And so if there is a problem, you don't necessarily know, like, which line of code is causing the issue or what interaction there is. So, you know, there really is a question of, how valuable is that flow? And that's something that, as you pointed out, AJ, you sort of have to determine upfront: where is the value of your testing, and how do you get the most value out of which pieces you're adding and where you're validating, etcetera. And so, yeah, in your case, the Enzyme tests weren't actually providing the right value in the first place. So definitely switch them over.
AJ Funk [00:55:01]:
Yeah. Absolutely. And it is kind of a question of redundancy too. Right? Like, is redundancy good? Sometimes. Like, I can be really, really sure, and I can have some extra confidence that the thing isn't gonna break. But most of the time, it just slows us down. Right? I find that often the best time to add more unit test coverage is when something breaks. Right? Because if my end to end tests are all passing, but something's broken, very often, it's some kinda edge case.
AJ Funk [00:55:28]:
Right? It's either some weird user behavior, some weird input, some weird sequence of events. And those things are usually better captured in a unit test, because it's easier to kinda implement that specific scenario, that specific line of code that is the offender here, versus creating, you know, a whole new end to end test to just cover some edge case. Those tests are gonna just get longer and longer and just be kinda confusing, honestly. It's like, well, why am I just, like, clicking in all of these random spots doing these things trying to cover these edge cases? Like, just write a unit test for it and call it good.
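AJ's habit of adding a unit test when something breaks can be sketched like this: the happy path passed end to end, a weird input slipped through, and the fix ships with a regression test pinning that exact edge case. The quantity parser is invented for illustration:

```javascript
// Hypothetical: a quantity parser that passed every happy-path end to end
// test, until a user typed "  3 " with whitespace and broke checkout.
function parseQuantity(input) {
  const trimmed = String(input).trim();     // the fix: tolerate padding
  if (!/^\d+$/.test(trimmed)) return null;  // reject anything non-numeric
  const n = Number(trimmed);
  return n > 0 ? n : null;                  // zero items is not an order
}

// Regression unit tests pin the exact edge cases that broke, so no whole
// new end to end flow is needed just to cover weird inputs.
const cases = [
  [' 3 ', 3],     // the production bug: padded input
  ['3', 3],       // happy path still works
  ['0', null],    // zero rejected
  ['3.5', null],  // non-integer rejected
  ['abc', null],  // garbage rejected
];
console.log(cases.every(([input, want]) => parseQuantity(input) === want)); // true
```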
Jillian [00:56:06]:
I really like the emphasis on, you know, testing for business logic, and in general not having everything controlled by the engineers. Because I find for myself, you know, like, I'll write something and then I'll hand it off to a user. And then they immediately start using it in some way that I didn't even think of. And then, you know, we do, like, a couple rounds of this. So being able to cut back on that, the person who writes the thing but does not actually use the thing, and just immediately being able to push it off to an end user.
AJ Funk [00:56:34]:
Yeah. Absolutely. And things like testing and staging environments are great for this. We push code to those environments all the time, give them to a PM, and say, go run and try and break this thing. Right? You don't always wanna do that in production. Right? Like, if the thing's not fully baked, I don't wanna break something in the database or whatever. And so having places to push that, and having people early in the process iterating on this, finding the major bugs, the minor bugs, the stylistic bugs, is super, super valuable, versus having one of your users find it later.
AJ Funk [00:57:09]:
Right on.
Will Button [00:57:10]:
So you live in Tahoe. Do you get outdoors a lot?
AJ Funk [00:57:14]:
I do. I live here with my wife and my dog. He's a lab husky mix, so he kinda thrives in the summer, thrives in the winter. There's lots of snow out right now. And so we're outside pretty much every day, you know, snowboarding, hiking, kayaking, all that kind of stuff.
Will Button [00:57:31]:
Oh, right on. How long have you lived in the Tahoe area?
AJ Funk [00:57:35]:
I've been here for about 5 years now. I grew up in the San Francisco Bay Area, and I was part of the great COVID migration out here. We always wanted to get here eventually, and I was lucky. I was still working at Rainforest at the time and already remote. So the transfer up here, from remote near an office to remote where it actually doesn't matter how far you are from the office, was very easy. And we're also real fortunate that we were not the only ones doing this migration, so we've made lots of friends that were like, yeah, we lived down the street from you in the city, and we all live here now.
AJ Funk [00:58:11]:
So it's it's a very different life, but, we love it, and I don't think we're ever leaving.
Will Button [00:58:15]:
Ah, that's cool.
AJ Funk [00:58:16]:
Yeah.
Will Button [00:58:17]:
Right on. Tahoe's a beautiful area.
AJ Funk [00:58:19]:
Yeah. It really is.
Will Button [00:58:23]:
Cool. Alright. Should we move on to some picks? Before we do, any final thoughts on QA, Rainforest, tips, guidance that you wanna leave us with, AJ?
AJ Funk [00:58:37]:
I think just to recap: it's finding the balance between confidence and velocity. Right? Everybody needs to set their own bar for quality. Like, what is my ratio between confidence and velocity? Determining that for yourself is the most important thing here. And keeping in mind that it's not only the velocity, but a lot of times it's the sanity of your engineers. Right? Like, we don't wanna spend all of our time writing tests. So finding that balance and doing things in an efficient way is the key to success.
Will Button [00:59:14]:
Right on. And I think that's very use case specific too, you know, because the right answer for a financial app is gonna be very different than the right answer for, like, a social media app. Absolutely. Cool. Alright. Jillian, calling you out first. What'd you bring for a pick today?
Jillian [00:59:36]:
I am gonna pick Drive by Dave Kellett. It is a sci fi graphic novel, and I think it's, like, releasing the fourth one this summer. But it's so good, and it's so nice and wholesome, which is very nice because, like, I really like sci fi, but I don't really like violence or gore or, you know, icky fluids. I don't like any of that. Okay? I don't like any of it. And this is just so wholesome and adorable, and the main character is very cute. So that's it. That's the pick.
Jillian [01:00:07]:
I got a whole bunch of copies for Christmas, and I'm, like, making people read them. And I'm gonna have, like, a little indie graphic novel cult going on soon enough. It's gonna be great.

Will Button [01:00:20]:
Right on. Alright. Warren, what'd you bring?
Matteo Collina [01:00:23]:
Yeah. So I just got back from a long hiatus away from the show. I was on vacation, and so I think this pick is really accurate. Very short book, highly recommend: Tao Te Ching by Laozi, who is the founder of Taoism, spelled with a T-a-o in case you've seen it written but never pronounced before. And there's just so much good stuff in the book that can be applied to everyday life, working environment, etcetera. It's incredibly short. There's only, like, 108 principles or so. And it starts off great with, the Tao that can be told is not the eternal Tao.
Matteo Collina [01:01:05]:
Like, you can't write down the whole truth. There is something that's never said. It's impossible to convey everything. And I know it sounds so philosophical, you know, to go down this path, but I feel like going through these really helps to put into perspective thinking outside the box when solving certain problems or interactions or the communication we have every day. Highly recommend.
Will Button [01:01:27]:
Right on. Cool. AJ, what you got for us?
AJ Funk [01:01:32]:
Yeah. My reading and listening choices are kinda all over the map, but I did have an interesting one recently. It was called The Light Eaters. It's about plants and specifically this idea of plant intelligence. So, obviously, intelligence is a loaded word. They're not intelligent like you and I. They're not debating QA strategies and things like that, but they do have a lot of intelligent-like behavior. You know? They communicate.
AJ Funk [01:01:55]:
They recognize their kin. They hear sounds. They transform themselves based on the visual appearance of the environment around them. And so I found it really interesting, and it gave me a lot to think about, especially, you know, when I'm out in nature with the wife and dog just kinda staring at trees and stuff. So, yeah, check it out.
Matteo Collina [01:02:15]:
No. Oh, yeah. That's intelligent for sure. 100%. Totally with you. There's a good one. If you are out and there's plants or grass being cut and you notice the smell of, you know, freshly cut grass, what is that? It's essentially a fear pheromone that's been sent off to warn other grass that there is danger around. Like, that is the sign of intelligent life.
AJ Funk [01:02:38]:
Yeah. For sure. There's lots of super interesting examples in this book. It's just, like, plants acting like animals, essentially. And it's kinda a mind-blowing experience.
Will Button [01:02:51]:
I read a book recently. I can't remember which one it was, but I've been studying mushrooms a lot lately, and this book showed where mushrooms actually act as a communication agent for trees in the forest. And so, like, a specific, you know, set of insects can start attacking trees on one end of the forest, and then the mushroom, because it's the mycelium that grows underneath the entire forest floor, will relay that information to the other trees in the forest. And so by the time the insects work their way down to those trees, those trees are producing a scent or a pheromone that actually repels the insects by the time they get there. And I thought that was super cool.
Jillian [01:03:39]:
That is cool. I'm in, like, a mushroom and foraging Facebook group, and everybody just, like, takes pictures of fun mushrooms that they find when they're out and about. And it's just such a nice little group because it's so chill. That's it. There's, like, no drama. There's nothing. It's just: look at this mushroom I found.
Will Button [01:03:58]:
There's an app called iNaturalist that I use for that. You can take a picture of not just mushrooms, but anything you find that you can't identify and then upload it to iNaturalist, and it will try to auto detect what it is for you, but then other people will come in and confirm or tell you what that actually is. That was pretty cool.
Jillian [01:04:19]:
I used to do that a lot as a kid. I'd have, like, the field guides and go out with my field guide and try to, like, identify all the plants, but now we have an app for that.
Will Button [01:04:27]:
There's an app for that.
Jillian [01:04:28]:
Always.
Matteo Collina [01:04:29]:
Oh, Will, what's your what's your pick?
Will Button [01:04:32]:
My pick is a series on Netflix called Cunk on Earth, and I thought my sense of humor was, like, really, really dry. But this lady
Matteo Collina [01:04:45]:
She's a man.
Will Button [01:04:45]:
She takes it to a whole new level. This series is just hilarious. It's, you know, like a history of Earth, basically, but she'll sit down with legitimate, world renowned experts in their field and ask them the most off the wall questions. And that to me was the highlight of the series. It's just the looks on their faces when she would ask them these questions that had absolutely nothing to do with what they were an expert in. But super entertaining series, definitely 10 out of 10 stars, Cunk on Earth on Netflix.
Matteo Collina [01:05:21]:
100%. And, you know, there's actually 2 other things. There's Cunk on Britain, I think, and then there's, like, one on Christmas and Shakespeare. So you have some extra biscuits too.
Will Button [01:05:32]:
Oh, sweet. I will have to check those out because I love her sense of humor.
Jillian [01:05:37]:
A mafia mystery. I've never heard that term. That's fun.
Will Button [01:05:42]:
Yeah. It's very, very accurate. Alright. That brings us to the end of the episode. Thank you everyone for listening. Jillian, Warren, thank you for joining me in hosting the show. And, AJ, thanks for coming on the show, man. It's been a pleasure talking to you.
AJ Funk [01:05:59]:
Thanks so much for having me. It was a lot of fun.
Will Button [01:06:01]:
Right on. Glad to hear that, and I will see everyone next week.
Will Button [00:00:00]:
So welcome everyone to another episode of Adventures in DevOps. Happy New Year. Warren, happy New Year. Thanks for joining me today.
Matteo Collina [00:00:10]:
Oh, thank you for having me back.
Will Button [00:00:12]:
Always a pleasure. Jillian, welcome back. How are you?
Jillian [00:00:16]:
Good. Good. How are you? And happy New Year.
Will Button [00:00:18]:
I'm pretty excited about today's episode because we have your background, AJ. So our guest today is AJ Funk, and his background is just so cool. So you're a software engineer for Rainforest. You live in California. You're a snowboarder, and you are a former D1 and professional baseball player. Is that all correct?
AJ Funk [00:00:50]:
That is correct. I've taken an interesting career path. So, yeah, definitely lots of stuff to do here. I'm in the Lake Tahoe area, in California, around the border of California and Nevada. So, yeah,
Matteo Collina [00:01:04]:
it's all true.
Will Button [00:01:06]:
What position did you play in baseball?
AJ Funk [00:01:08]:
I was a pitcher.
Will Button [00:01:09]:
A pitcher. Okay. Cool.
Jillian [00:01:11]:
Oh, that's one I know what it is. That's great. I was about to be like, I have no idea what we're talking about here, but I know what the pitcher is.
Will Button [00:01:20]:
It's all about the little connections, Jillian.
Jillian [00:01:24]:
It really is.
Will Button [00:01:26]:
So cool. AJ, tell us a little bit about your role at Rainforest and what Rainforest does. And I think I'm interested to hear what led you to go from pitching Major League Baseball to pitching code.
AJ Funk [00:01:45]:
Yeah. For sure. Yeah. So I started writing code when I was a kid, really, always as a hobby. My plan in life was always to play baseball. You know, once you become an adult and you realize that it's not always such a viable career path, I started realizing that this thing I do as a hobby, I could do as a career. So I just started kinda dabbling and realizing that I actually really enjoyed doing this, and it was a big pivot from, you know, just athletics all the time. So, yeah, now I ended up at Rainforest QA.
AJ Funk [00:02:21]:
I've been here for seven and a half years now. Oh, wow. Yeah. It's been a long time, and it's been really fun. I work with really, really great people, really intelligent, experienced engineers, really great culture. We're distributed all over the world, so it's fun. We get to go meet up with each other every once in a while in some random place in the world. And so it's a really enjoyable place to work.
AJ Funk [00:02:47]:
I specialize in front end development. So I spend a lot of time with the product and design team shaping how our product works. And, you know, as a quality assurance company, it's really important for us to have a really high bar of quality and reliability for our product. Obviously, if our app breaks, why are you gonna trust us to make sure that your app doesn't break? Seems reasonable. Yeah. Right? And, you know, to meet that high bar, we use Rainforest to test Rainforest. So being able to eat your own dog food on a daily basis is a really, really good position to be in. It allows you to identify pain points in the user experience before your users do.
AJ Funk [00:03:30]:
Hopefully, that's not always true, but we do our best. And it also means we spend a lot of time thinking about the best way to do QA, both from a kind of philosophical standpoint and from a practical implementation standpoint, you know, reality, in other words. So this experience obviously helps guide our product road map, but it's also led me to develop a lot of strong opinions on how product and engineering teams should shape their testing strategies, what kind of tools they should be using, and how they can ship code quickly and continuously without having to sacrifice quality.
Will Button [00:04:08]:
For sure. For me, that seems like one of those rabbit holes where it's really hard to find the balance. Like, what's the minimum level of testing that you should be doing, and then what's an adequate level to get into where you're actually still getting good use for your time? You know? Because, obviously, you can throw something at it that tests every possible combination or path through your application that could ever be taken, and you're gonna reach a point of diminishing returns there. So how do you figure out what that right spot is?
AJ Funk [00:04:45]:
Yeah. Absolutely. The myth of 100% test coverage. Right? Yeah. That we all strive for. I think they teach you very early on that, like, 100% test coverage is what we should be doing: every time we push code, we should make sure everything is covered. The reality is that's just not possible. Right? What does that even mean? Does that mean every line of code is covered? Does that mean every possible edge case is covered? Like, you're never gonna get all of those things.
AJ Funk [00:05:13]:
So the trick is finding that balance. Right? We wanna make sure that we have confidence in the thing that we're shipping, that both the thing we're shipping works and that we're not breaking anything else, but we also wanna be able to move quickly. We don't want this to get in our way. And so the main way we go about that is to really think about what your testing strategy is. Right? What are you testing? What is your bar for quality? Because in reality, sometimes we're okay if some things break, especially when we're in the early stages of prototyping something. It's a beta feature, etcetera. And what are the layers of your testing? Right? So we can think about our testing strategy as layers, usually pictured not just as 3 layers but as a pyramid. Right? So the foundation of your testing strategy is your unit tests.
AJ Funk [00:06:17]:
This is something we're all familiar with. We're using code to test code in small chunks. Right? What a unit is is totally up to you. We can define it as a single function or some class component, whatever. The top of our pyramid is our end to end or our UI tests. That's testing our application in a kinda real world scenario. As we go up the pyramid, these tests are more comprehensive. We can rely on them more, but they're slower.
AJ Funk [00:06:49]:
They're more expensive. Right? So at the base of our pyramid, we have all of these unit tests that we could run constantly. So at Rainforest, we run these every single time you push code. It just runs them because it's cheap. We don't really care. If you're lazy like me, I like to just push my code up and, oh, something broke. I didn't notice, because I don't wanna run the whole test suite all the time on my local machine. But as you move up that pyramid from unit tests, the middle of that pyramid would be our integration tests.
AJ Funk [00:07:19]:
So, basically, testing multiple chunks of these units and how they interact with each other. That could be maybe one of your microservices talking to another microservice or something like that. As we get to the top of the pyramid and we start running these end to end tests, that's where I think strategy becomes much more important because, like I mentioned, they're slower. So we actually care about when we run these. We need to be more strategic about how often we run them, and they are more expensive. So we don't wanna just blow a whole bunch of money on them. Right? So finding the balance between those things is the real trick.
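Since no code appears in the episode itself, here's a minimal sketch of the pyramid's bottom two layers. Every name in it (Cart, PricingService, the test functions) is hypothetical, invented purely for illustration:

```python
# Sketch of the test pyramid's cheap layers for an invented shopping-cart app.

class Cart:
    """One 'unit': a cart that tracks items and a total."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class PricingService:
    """A second 'unit' the cart integrates with."""
    def discount(self, total):
        # 10% off orders of 100 or more.
        return total * 0.9 if total >= 100 else total

# Base of the pyramid: a unit test -- one unit in isolation, cheap enough
# to run on every single push.
def test_cart_total():
    cart = Cart()
    cart.add("book", 30)
    cart.add("pen", 5)
    assert cart.total() == 35

# Middle of the pyramid: an integration test -- two units talking to each other.
def test_cart_with_pricing():
    cart = Cart()
    cart.add("monitor", 150)
    assert PricingService().discount(cart.total()) == 135.0

# Top of the pyramid (not shown): an end-to-end test would drive the real UI
# in a browser -- far more confidence, far more time and money, run less often.

test_cart_total()
test_cart_with_pricing()
```

The point is the cost gradient AJ describes: the lower the layer, the cheaper each run, so the more often you can afford to run it.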
Matteo Collina [00:07:57]:
I like that you pulled out the test pyramid and not one of the newer hipster trends, the test diamond or the test Klein bottle, and gone with the tried and true test pyramid. I mean, at least that's how I've always seen it. And, you know, it's also really interesting that you bring up that 100% test coverage is not possible. At least from, like, anecdotal experience for me, I found it's almost like a Pareto distribution and follows the 80/20 rule, where if you wanted to have 100% test coverage, it would actually require an infinite amount of time.
AJ Funk [00:08:34]:
Absolutely. Yeah. We certainly strive to have all that test coverage, but I think the reality of 100% test coverage is more along the lines of how you define your user workflows. Right? So the typical example is your login flow. What are the main outcomes of the login? It's successful login, failed login, maybe it's forgot your password. And do you have test coverage, end to end test coverage, on that flow? If so, we could usually consider that covered. Do I have a unit test that covers every combination of things that I could type into that box? Absolutely not. But having some sort of test coverage on it to make sure that it actually loads in some kind of real world scenario gives me much more confidence.
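AJ's login example can be sketched as flow-level coverage: one check per meaningful outcome (success, failure, forgot password) rather than one per possible input. The `login` and `request_password_reset` functions below are hypothetical stand-ins, not anyone's real API:

```python
# Flow-level coverage sketch: three checks for the three main login outcomes.
# USERS, login, and request_password_reset are invented for illustration.

USERS = {"aj@example.com": "hunter2"}

def login(email, password):
    if USERS.get(email) == password:
        return "success"
    return "invalid_credentials"

def request_password_reset(email):
    # Pretend we queue a reset email for known accounts.
    return "reset_sent" if email in USERS else "unknown_account"

# One check per outcome that matters -- no attempt to enumerate every
# string a user could possibly type into the box.
assert login("aj@example.com", "hunter2") == "success"
assert login("aj@example.com", "wrong") == "invalid_credentials"
assert request_password_reset("aj@example.com") == "reset_sent"
```

Covering the flow's outcomes, rather than its input space, is what lets you call the flow "covered" without chasing the 100% myth.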
Will Button [00:09:30]:
With a lot of things like logins, and there's so many areas these days in developing software, you're using SaaS products as the mechanism for implementing that. What's your approach for dealing with that external dependency? Because you can mock it or you can try to simulate it or you can actually call it. What do you think about those?
Matteo Collina [00:09:58]:
I feel like there's a job at, at MeWell. I see you, you're coming there.
Will Button [00:10:04]:
No. No. I'm trying to Yeah. I got nothing.
Jillian [00:10:07]:
I need
Will Button [00:10:08]:
Hey, Jay. Back to you.
AJ Funk [00:10:12]:
Yeah. So I think what you're asking is, when we run any kind of tests, and we'll stick with unit tests for the time being, there's a context that they run inside of. Right? And if we wanted to test our app as close as possible to reality, we would test it in production, on an actual machine with an actual human, which some people do. Right? There's obviously downsides to that. Testing in production, I probably don't have to get into why that's not a great idea. And
Jillian [00:10:50]:
The entire video game industry is wrong, then? Like, I feel like... Oh, no. Some are interested in testing things.
AJ Funk [00:10:56]:
Touche. And, yes, they are wrong.
Matteo Collina [00:11:00]:
I mean, I'm not even sure that's true anymore. Like, they don't even release games. Right? It's just, like, content that you click download on and you pay for, and then the game comes later or something. Like, I think that's what the game industry has gone towards.
Jillian [00:11:11]:
That does tend to be how it goes, but I still feel like they have the users doing an awful lot of acceptance testing in the video game industry. And I'm, like, too cheap for this nonsense. But, anyways, I'm trying not to derail the entire conversation today, so we can skip right on over that.
AJ Funk [00:11:28]:
No. I totally agree. And it's like, even when you do get a game that's incomplete and buggy, you play it for an hour and you go, I'm never playing this again. So I'm a late adopter when it comes to these things. I wait till the Internet stops screaming about it and then I start downloading things. But, yeah, obviously, we don't wanna test our applications in production because we're smarter than that, and we have the ability to test these things in different environments. When we are running things like unit tests, we're kinda stuck inside of this artificial context. Right? If you're just running code to test code, it's inside of that specific code environment.
AJ Funk [00:12:15]:
It's giving us inputs and outputs. Even as we go up the chain to some things that call themselves end to end tests, which I kinda disagree with, which would be things like DOM based testing, you're still stuck inside some kind of context. Right? So the DOM, if you're not familiar, is the document object model, and it's essentially the interface that our application has with the browser. So it's how our JavaScript code talks to the browser, how we manipulate things, how we read things from the browser. And so the important nuance here is that our code interacts with the DOM. A human being doesn't interact with the DOM. Right? When you go click a button, you don't go talk to the DOM.
AJ Funk [00:12:59]:
You interact with the user interface. So we want to get our tests as close to actual end to end tests as possible. Right? A human looking at the screen, a human interacting with the screen. And if we're not in that production environment, every step we take away from that gets us further from reality. Right? It gives us this false sense of security sometimes. A really common example with DOM based tools might be: click this button. Did it work? Right? Well, just because you can interact with that button through the DOM doesn't mean your user can actually interact with that button. Right? There might be, you know, some kind of overlay over my button.
AJ Funk [00:13:46]:
The button might be off of the screen. But when I ask the DOM, can I click the button? It says, yeah. We clicked it. It worked. You ship the code, and now no one can log in to your app because no one can click the button. Right? And so we do as much as we can to get to that real world scenario: creating testing and staging environments that mirror production as much as possible, and loading these things into virtual machines with operating systems instead of a headless browser, which is, you know, basically a browser with no UI. Interacting with your app in a way that a human doesn't interact with it just gets us further away from reality.
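The false positive AJ describes can be illustrated with a toy model, no real browser or DOM involved. `Element`, `dom_click`, and `user_click` are all invented for this sketch:

```python
# Toy model of the "DOM says the click worked" false positive:
# a DOM-level click succeeds even when an overlay covers the button,
# while a check that mimics what a user can see catches the problem.

class Element:
    def __init__(self, name, covered_by=None):
        self.name = name
        self.covered_by = covered_by  # e.g. a modal or banner painted on top
        self.clicks = 0

def dom_click(element):
    # The DOM happily dispatches the event regardless of what's painted on top.
    element.clicks += 1
    return True

def user_click(element):
    # A user (or a visual-layer test) can only click what's actually visible.
    if element.covered_by is not None:
        raise RuntimeError(f"{element.name} is hidden behind {element.covered_by}")
    element.clicks += 1
    return True

login_button = Element("login-button", covered_by="cookie-banner")

assert dom_click(login_button)      # passes: the false sense of security
try:
    user_click(login_button)        # fails, like a real user would
except RuntimeError as e:
    print(e)  # login-button is hidden behind cookie-banner
```

Real tools differ in the details, of course; the sketch just shows why "the DOM accepted the click" and "a human could click it" are two different claims.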
Matteo Collina [00:14:25]:
I mean, it's interesting you bring that up. And now I'm sort of intrigued if you maybe wanna roast what we've been telling our customers. So, obviously, we provide a third party product for our customers for login and access control. So, you know, providing them the auth needs there. And I think the biggest advice that we end up giving them is, like, we are already testing that thing. Like, you know, don't focus on this. You're wasting your time duplicating our testing. If you felt the need to do that, it's almost like you don't trust us with our product.
Matteo Collina [00:14:59]:
And then you probably should question why you're using that solution in the first place. If you get to that point, you know, that's actually a conversation more than it's a technical solution. However, we do find some customers still have a need to go a little bit further. And the thing that we've done, I don't know if this is the right answer, but given it is a SaaS product, we provide a clone of our service as a container that they can run. It is trimmed down and only has minor features, but it makes the flows that you're going to test, or want to actually verify, available without having to go through all the complexity that the service actually provides.
Jillian [00:15:37]:
Mhmm.
AJ Funk [00:15:38]:
Yeah. I mean, I think that's a good compromise. Right? So in this whole strategy of finding balance between our confidence and our velocity in shipping, the reality is our testing environments are not gonna match our production environments all the time. Right? And a lot of times we're constrained by resources. So I think in a situation like that, that totally makes sense. You know, some kind of pared down version of your production application. I think the important thing is how you're testing it. Right? If we are able to just kinda, like, strip down our product and test the bare bones version of it, as long as we are in a real environment, right, where they're clicking on it through, I don't know, a web browser or whatever it might be, versus just, like, you know, running some script in the background, I think that's a really good balance between those two things.
AJ Funk [00:16:29]:
The key here is that we're still doing end to end testing. Right? I imagine, you know, someone's typing in the box, a button's being clicked, there's an HTTP request or whatever it might be to an API that reads from a database, and we're checking that all of these things actually work together. So, yeah, I think that's a good compromise.
Matteo Collina [00:16:49]:
I think one of the things that actually comes up a lot, maybe just on a slight tangent, is people are so focused on end to end testing, they never stop to question: should we, like, for that particular flow? Is that where the value is for our company? Is it really where we should put a lot of resources in? Do you find that those that you're working with or your customers may or may not know where the highest value testing should be done, and then that's a conversation, or maybe it's something that your tool provides?
AJ Funk [00:17:21]:
Yeah. Absolutely. And I think the trick for that is doing it early. Right? So if you have a large application and code base and you have not run any end to end tests, it's hard to determine where to start. Right? Versus if you start early on, it kinda writes itself. Right? The first thing is your login. You have some login coverage. Determining where the highest value is is usually up to the product team.
AJ Funk [00:17:53]:
Right? What do we care most about not breaking? And can we create some kind of smoke test that spans all of these? Right? So each one of these tests has a certain level of granularity to it. A good smoke test might be: can I log in to my application? Can I create a thing? Can I delete a thing? And things just, like, generally work.
AJ Funk [00:18:15]:
Those initially are your highest value tests because I know that my app actually loads in reality, right, regardless of what my unit tests say. After that, it's usually defined by what those user flows are. Right? So as you're scoping something out with your product team, here's this new feature that we're building, it's really important to include that in your planning process. Right? Write tests for it. These are things that we usually, as developers, kinda bake into our estimates. Right? I have to write unit tests for this. At Rainforest, we've shifted more towards baking in Rainforest tests for these things.
AJ Funk [00:18:55]:
We obviously have unit tests, but getting the coverage at the time of implementation or the time of release or whatever that might be is usually your best bet. If I have a large application and I don't have that coverage yet, it is certainly a balancing act figuring out what should be tested first. Right? So I would certainly start with those kinds of smoke tests. And then your highest used features are usually a really good place to start. The pitfall that you run into is putting too much nuance in all of these tests. Right? What if they click into this and click out of that and then open this menu and whatever? Keeping them very coherent and legible and kind of focused on the thing that they're testing is the important piece of having efficient tests that you can maintain over time.
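The coarse smoke test AJ suggests starting with (log in, create a thing, delete a thing) might look roughly like this. `FakeApp` is a hypothetical in-memory stand-in for whatever client a real end-to-end tool would drive:

```python
# Sketch of a deliberately shallow smoke test: does the app basically work,
# end to end? FakeApp and its methods are invented for illustration.

class FakeApp:
    def __init__(self):
        self.logged_in = False
        self.things = {}
        self.next_id = 1

    def login(self, user, password):
        self.logged_in = (user, password) == ("demo", "demo")
        return self.logged_in

    def create_thing(self, name):
        assert self.logged_in, "must log in first"
        thing_id = self.next_id
        self.things[thing_id] = name
        self.next_id += 1
        return thing_id

    def delete_thing(self, thing_id):
        del self.things[thing_id]

def smoke_test(app):
    # No nuance, no menu-clicking detours -- just the spine of the product.
    assert app.login("demo", "demo")
    thing_id = app.create_thing("first thing")
    assert thing_id in app.things
    app.delete_thing(thing_id)
    assert thing_id not in app.things

smoke_test(FakeApp())
```

Notice how little the test asserts: that shallowness is the feature, since it keeps the test legible and cheap to maintain while still proving the app loads and its core flows work.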
Matteo Collina [00:19:46]:
I saw Will smirking there, and I know he's just entered into a new glorious position at his organization. So maybe he has some unique insight that he's interested in blessing us with.
Will Button [00:19:58]:
No. I was curious, because this is an opportunity for me to throw in a buzzword that's trending. And so once the episode is transcribed, we'll just go viral on that. So does AI play a role in helping figure out that type of user flow in the different, like, odd places you can end up?
AJ Funk [00:20:20]:
Good question. Not to my knowledge yet. You know, AI is really good at some things and really bad at some things, and we haven't quite figured out how to give it enough context to understand how it should go about testing your app. Right? We do have some really cool AI tools at Rainforest. They don't determine what your test coverage should be. Rather, that's kinda left up to you, and then it helps you write the test. So what we have is entering a prompt. You know, it could be something pretty generic.
AJ Funk [00:20:56]:
Log in and add an item to the cart and check out, something like that. And it will generate your Rainforest steps for you. So during execution, AI is left out of it. Right? It does the initial generation, and then we just execute things normally. And then we have some self healing functionality. So if it fails on something that we generated, we're gonna try and regenerate those steps. And what's really nice about that is, since Rainforest is a visual tool, we identify things on the screen based on screenshots.
AJ Funk [00:21:31]:
Right? It's possible for you to make slight visual changes, and now that image doesn't quite match up. Your test might fail. You don't wanna have to go back in and retake all of those screenshots. But since it's generated by AI, it could go back, follow the same steps, and realize this is where the button is now. It would be really cool if it could kinda add that test coverage for you or, like, tell you what you should be testing. We've poked at that a few times, and it's honestly just really dumb in that aspect and doesn't really give you anything useful. It's like, yeah. Go test all the things and make sure things work.
AJ Funk [00:22:07]:
And it's like, cool. Yeah. I I knew that. Thank you.
Will Button [00:22:10]:
Yeah. So we
AJ Funk [00:22:12]:
maybe as they get started,
Matteo Collina [00:22:14]:
I feel like part of the answer is also the domain you're in. I know something we haven't talked about is that really at the top of the test pyramid is exploratory testing, where, like, you add your creative human instincts to where bugs could potentially pop up while you're looking at an interface or API. And I don't think... yeah. If we're doing anything wrong in the creation of AI or LLM models, it's removing the creativity from them, and I think that harms us here. But there has been one area, especially within things like protocol creation or SDKs, interfaces for services, and I think the keyword is fuzzing. So an LLM, any sort of AI, can spam with almost a more intelligent brute force strategy about what sorts of inputs tend to break your interface or your service or your product, and then use that as a potential test that you can commit longer term. And, again, it's not for everything. Like, I don't think it really works so much in a UI world, but definitely depending on what your service or interface is doing, stuff in the crypto space, cryptography, not blockchain, just to be clear.
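The fuzzing idea raised here can be sketched without any AI at all: generate lots of inputs, let clean rejections pass, and keep whichever inputs crash the interface as candidates for permanent regression tests. `parse_quantity` is an invented function with a deliberate bug:

```python
# Minimal fuzzing sketch. An LLM would generate smarter candidates; here we
# just use random strings of digits and dashes against an invented parser.
import random
import string

def parse_quantity(text):
    # Buggy on purpose: crashes on empty input instead of rejecting it cleanly.
    if text[0] == "-":
        raise ValueError("negative quantities not allowed")
    return int(text)

def fuzz(fn, trials=500, seed=42):
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randint(0, 4)
        candidate = "".join(rng.choice(string.digits + "-") for _ in range(length))
        try:
            fn(candidate)
        except ValueError:
            pass  # a clean, expected rejection
        except Exception:
            crashes.append(candidate)  # unexpected failure worth keeping
    return crashes

failing_inputs = fuzz(parse_quantity)
assert "" in failing_inputs  # the empty string triggers an IndexError
```

Each surviving entry in `crashes` is exactly the kind of "commit it longer term" test case described above: a concrete input the brute force found that the interface should have handled gracefully.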
Will Button [00:23:30]:
I I feel like that was a dig there, Warren. What are you trying to say?
Matteo Collina [00:23:34]:
You know, I I it's not the sort of, thing I wanna bring up on an episode, Will. You don't
Jillian [00:23:41]:
want a record.
Matteo Collina [00:23:44]:
Yeah. Definitely not on record. We do cryptography because we're really into security and deep there. And we're not building our own crypto, but we're very heavy users of it. Everything with JWT creation, every single different kind of algorithm strategy, we end up utilizing these. And so finding where we're not using libraries effectively is certainly an area that we've looked into. Actually, according to our company bylaws, we're not allowed to do anything regarding cryptocurrency. Like, it's actually not allowed by the country of Switzerland for us to get involved in any way.
Matteo Collina [00:24:18]:
We can't accept payments. We can't pay people in crypto. We can't even think about, consulting for companies that wanna do something crypto related.
Will Button [00:24:29]:
That's discrimination.
Jillian [00:24:33]:
I work in HPC, and I'm pretty sure, like, some of the admins will just kind of use a little bit of the compute power from different clusters that they have to be running different crypto schemes. But I haven't, like, 100% caught anybody. I'm just waiting for the day. I'm waiting for it. Not to tell on them. I just wanna know, because I'm super nosy and, like, I just like knowing things like this.
Matteo Collina [00:24:53]:
No. You gotta tell them.
Will Button [00:24:55]:
As long as they cut you in?
Jillian [00:24:56]:
Yeah. That's right. That's the scheme. It's like when my website got hacked by that Chinese jewelry store, and I was like, guys, if you would just give me a cut, this would be fine. It was nice jewelry. I liked it.
Matteo Collina [00:25:08]:
I mean, I think that really is expert advice from our resident ML expert here because
Jillian [00:25:15]:
the on the team? Like, just
Matteo Collina [00:25:17]:
make sure that... Yeah. No. Because AWS just came out and said that the strategy of sharing reservations across customer AWS accounts, like, if you're a consultant that does bundling for instance reservations or compute reservations, you no longer can pass along that savings to the customer. I mean, what are they gonna do with all this excess capacity now other than some good old fashioned Bitcoin mining?
Jillian [00:25:43]:
I don't know. I don't know. I mean, we could be making drugs for autoimmune diseases and cancer, or you could be making some coal fired cash. I don't know. No, I mean, we could also do both. It's not an either or. There's plenty of compute power. These guys, you know, they're spinning up plenty of AWS.
Jillian [00:26:07]:
They're not gonna notice if that last 10% is used for crypto.
Will Button [00:26:14]:
So when this episode launches and we all get blocked from our respective AWS accounts, we can just reflect on this moment fondly.
Matteo Collina [00:26:22]:
So, I mean
Jillian [00:26:24]:
So this is our moment in the sun.
Matteo Collina [00:26:25]:
For the record, AWS isn't gonna block you, because the ROI on utilizing cloud resources to mine crypto is so low that you're pretty much just paying AWS. But it is a good indication that there is malicious activity happening on your account, so it is something that they will for sure investigate. And that, I think, is as much of a tangent on this as I want to go down for today. Right?
Jillian [00:26:52]:
Yeah. I think we should talk about the low code with Rainforest. I love low code stuff. How did this come about, and, like, how does it work? I just wanna know all about it.
AJ Funk [00:27:02]:
Yeah. For sure. So when I first joined Rainforest over 7 years ago, our model was a bit different. We had a bunch of human testers. It was kind of the gig Uber model of: I have something I wanna test. Here's my test cases. They're all written in plain English, and we'll provide a bunch of humans for you to go test your application. Right? Including some exploratory stuff like you mentioned.
AJ Funk [00:27:27]:
Go click all over this page and and try and find problems with it. And that worked really well. It it was true end to end testing. We load your app in a, in a virtual machine inside of a web browser. They're actually clicking the buttons and confirming those things on the screen. But what we found is that humans are inefficient and expensive as we all know. That's why we have automation. Right? And so we kinda shifted over to automation, but we wanna do something a bit different from what everyone else was doing, which is these, code based tools, DOM based interactions, and instead, we built it all on the visual layer.
AJ Funk [00:28:08]:
So the way it works, is you go in, you load your app, and, you essentially just, like, take screenshots of things. Right? Click on this, type into this field. I can give it an AI, an AI prompt and say, you know, log in and and check check out in the cart. And then when you execute things, it loads into the same environment. Right? You have your your staging environment. Hopefully, you have some some seed data, with login information. You can load that all in the rainforest. It goes in and runs this whole workflow for you.
AJ Funk [00:28:45]:
The output of it is a video of the thing being tested, results on each step, things like HTTP logs, JavaScript console logs, all the information that you need to actually debug things when something breaks instead of it just saying, you know, like, in a unit test when it's like failure, like, one does not equal 2. And so by doing things at that visual level, it offers a lot of flexibility. The first thing is that we're not stuck inside of the browser. Right? We do primarily focus on web based testing, but that does not mean you're stuck inside of the browser. It means you can do things like install a Chrome extension. Right? Open another tab in your browser, install a Chrome extension, interact with that extension because while you're still inside the browser, you're outside of the scope of that web page where you usually are interacting just through the DOM. You can, you know, install some type of desktop application and test it through there. Because since we're working at the visual layer, it doesn't care what you're testing.
AJ Funk [00:29:52]:
It doesn't care what your tech stack is. It just cares that it loads in the machine. And it also offers a lot more flexibility and robustness in avoiding flakiness and brittleness to small changes. We have fallback methods. As much as I've been kinda hammering that testing with the DOM is not a great idea, we do offer DOM fallbacks, because sometimes it makes sense. Sometimes I don't care about the visual appearance of the button, and all I care about is that there's a button there. Right? In reality, there are variables that we can't control. Right? A very common scenario is my marketing team is running experiments.
AJ Funk [00:30:36]:
Every time I load this page, the button says something different. It looks different. And so we don't wanna tie the visual appearance to the pass fail result of this test, so I'll use something else. I'll use a DOM selector. We also have, like, AI search. You could say something like, the login button at the bottom of the page. And so the important point here is you don't write any code whatsoever. We have an intuitive UI that you do all of this through, which means you don't need skilled engineers to do it.
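The fallback idea AJ describes (try a visual match first, then fall back to a DOM selector when the visual layer can't be trusted) can be sketched as a simple priority chain. This is a hypothetical illustration of the concept only, not Rainforest's actual API; the names `locate`, `visualMatch`, and `domSelector` are all invented for the example.

```javascript
// Sketch: try locator strategies in priority order, falling back when one
// fails. The strategy names mirror the ideas in the conversation; all of
// the resolvers here are stand-ins, not a real testing tool's API.
function locate(target, strategies) {
  for (const strategy of strategies) {
    const match = strategy.resolve(target);
    if (match !== null) {
      return { match, usedStrategy: strategy.name };
    }
  }
  throw new Error(`No strategy could locate "${target.description}"`);
}

const visualMatch = {
  name: "visual",
  // Pretend the stored screenshot no longer matches (say, an A/B test
  // changed the button's appearance), so this strategy fails.
  resolve: () => null,
};

const domSelector = {
  name: "dom",
  // Pretend the CSS selector still resolves to an element.
  resolve: (target) => ({ selector: target.selector }),
};

const result = locate(
  { description: "the login button", selector: "#login" },
  [visualMatch, domSelector],
);
```

Here the visual strategy fails, so the chain falls through to the DOM selector, which is exactly the marketing experiment scenario AJ mentions.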
AJ Funk [00:31:07]:
Right? A lot of teams have QA engineers whose job is to just write tests all the time. On other teams, the engineer is responsible for writing these tests, but they need very specific domain knowledge. Right? I need to know about the thing that I'm testing. Like, from a product standpoint, what does this thing do? I need to understand the code. I need to know how to write these tests. With a no code solution, anybody could do this. Right? It's it's up to your team who owns quality, who owns these tests. For us, it is usually the engineer that is shipping the code.
AJ Funk [00:31:49]:
We write the rainforest tests with it, or our product and design team owns it because, like I was saying, they're very tied to our user workflows. Right? They're like, this is how we designed this thing. Engineers are gonna build it. And then any of us that have knowledge about how this user flow is supposed to work can own this test. So it makes it much easier to both write and maintain your tests over time. And then, you know, if that person leaves your company and has all of that domain knowledge on how it works, you just need someone who knows how the app works and they can update your test suite.
Matteo Collina [00:32:21]:
Now I feel like one of the biggest mistakes that I keep seeing over the course of my career is as companies grow, they tend to have more, allegedly, more software, more code, which may or may not end up in a giant ball of mud or even an extensive number of quote, unquote, microservices that communicate and really depend on each other. And there was always this challenge by someone who wanted to have a test that required somehow interacting with all of these components. And they never really could understand that one of the whole points of microservices was to isolate testing. But I think we live in the reality, which is there are some companies that do have a giant ball of mud that have thousands of binaries that have to be installed and running servers. Is there a strategy that I don't mean to, you know, pick on your company. I don't know if there I don't think there is a strategy. I think the strategy is write microservices. But I can imagine that, you know, as a SaaS company, the last thing we wanna tell our customers is, yeah, have you tried not having that problem? Have you tried to do
Jillian [00:33:32]:
package manager? That tends to be the solution that I see. That one is everybody's favorite.
Matteo Collina [00:33:37]:
Yeah. Distributed monolith always works. Publish all your binaries that remotely depend on each other to a third party solution and then pull those out at runtime. Always works. Best solution ever. Maybe, AJ, you have some insight here on either something that works or something that works with Rainforest, too, to deal with those situations. Or maybe you just, you know, it's not something that is handled today.
AJ Funk [00:33:59]:
Yeah. For sure. We we do have a bunch of different microservices running. And I think, I'm gonna refer back to the the the testing pyramid. Right? Is we test each one of those microservices in isolation. Absolutely. Maybe we test, as we go up to the next layer of our integration test, we test some interactions between them. Right? The kind of core, you know, handshake interactions, what whatever is the the main functionality of these 2 microservices talking to each other.
AJ Funk [00:34:28]:
Maybe we have some tests there. But at the end of the day, these end to end tests are comprehensive. Right? If anything in that microservice architecture is failing, presumably, my test is going to fail. And at the end of the day, all we really care about in theory is what the user gets when they're interacting with it. So if I'm just clicking a button, maybe there's a thousand microservices that are involved in this, and maybe I'm not directly testing each one of those. But by implementing it as an end to end test, I am very confident that they're all working because my test passed. And so being smart about how you implement each one of those layers in an efficient way. Right? Lots of unit tests on each microservice and then this overarching test that just makes sure everything is working together is usually the way to go about this.
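The layering AJ outlines (many isolated unit tests at the base, one broad test across the whole flow at the top) might look like this in miniature, with two toy "services" as plain functions. All the names here are invented for illustration; they are not from any real codebase.

```javascript
// Hypothetical "pricing service": priced in isolation.
function priceCart(items) {
  return items.reduce((total, item) => total + item.price * item.qty, 0);
}

// Hypothetical "checkout service", which depends on pricing.
function checkout(items, paymentFn) {
  const total = priceCart(items);
  return paymentFn(total) ? { ok: true, total } : { ok: false, total };
}

// Base of the pyramid: a fast unit test against one service in isolation.
console.assert(priceCart([{ price: 5, qty: 2 }]) === 10);

// Top of the pyramid: one broad test through the whole flow. If anything
// underneath breaks, this fails, without a test per internal interaction.
const receipt = checkout([{ price: 5, qty: 2 }], (total) => total > 0);
console.assert(receipt.ok && receipt.total === 10);
```

The point is the top level check: if `priceCart` breaks, the checkout assertion fails too, which is the "my end to end test passed, so the services underneath are working" confidence AJ describes.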
Matteo Collina [00:35:19]:
I mean, maybe it's a technical implementation question. Like, where is the environment running that the rainforest tests are actually executing is? Is this some sort of binary or CLI that's run on the client side, or are they sharing with you a set of microservices with deployment instructions so that you can run them within your own infrastructure?
AJ Funk [00:35:39]:
Sure. Usually, our requirement is that you need to be able to access it via a web URL. So the kind of standard way that a Rainforest test is run is we provide you a VM, and that VM has a browser on it, so I can specify Chrome on Windows 11. And the first step of your test is going to be a navigation. Navigate to this URL. This is where my web app lives. There are some different use cases where you can absolutely go download a binary and install it and do whatever you want with it. Our out of the box functionality is to primarily test those web apps.
AJ Funk [00:36:19]:
So that's where we focus, but you certainly have the flexibility to do whatever you want with those VMs.
Matteo Collina [00:36:25]:
No. I I mean, I think I think that approach is genius. Basically, it's out of scope for setting up the environment unless you want it to be in scope of which, you know, then it's a it's a virtual machine. Go to town on how you wanna deal with it there.
AJ Funk [00:36:37]:
Right. And and by, kind of forcing people to give it a public URL, we're nudging them towards good practices, right,
Matteo Collina [00:36:46]:
which
AJ Funk [00:36:46]:
is set up a staging environment, a QA environment, and make it mirror production as much as possible, which includes being able to navigate to it in a URL. And these are small things that we see with some of our new clients. Like, well, I don't have a staging environment. And, sure, I guess you could load your production environment in there, but let's show you how to test this properly and not shoot yourself in the foot.
Matteo Collina [00:37:11]:
I'm I look. I love that you're saying that. The number one feedback I've always seen here is we can't expose our nonproduction environment publicly. Like, people can't know what we're currently working on. They will use that information maliciously against our company in some way.
AJ Funk [00:37:26]:
Yeah. Like, what what are they gonna do with it, though? You know? Like, if I I have seen some interesting mistakes. Like, you know, maybe we're cloning our production database and not sanitizing sensitive information from it or something like that. Then, yes, absolutely, you're doing some bad things.
Matteo Collina [00:37:42]:
Yeah.
AJ Funk [00:37:43]:
But there are certainly ways to do this. And, I'm of the opinion, who cares if people are in your testing environment? Like, worst case, they blow up your testing environment and whatever.
Will Button [00:37:55]:
And in that case, you figured out how they could blow up your production environment without losing prod.
AJ Funk [00:38:01]:
Exactly.
Will Button [00:38:02]:
Yeah. I think there's, that's, you know, like, part of the undocumented learning curve of working in this industry. Mhmm. You know, because people who are early in their careers think things like, oh, I shouldn't expose staging until, you know, they learn that that's actually probably a good thing. But, like, nowhere in any computer science course or boot camp or anything do they cover these kinds of things. And so I think that's actually, like, a really valuable add on service that, you know, you get from Rainforest or that you get from working with people who are more experienced is just, like, learning that tribal knowledge that's gonna help you out later in your career so you don't have to reinvent the wheel and solve problems that we actually solved 30 years ago.
Matteo Collina [00:38:56]:
I mean, that's well, I mean, we've unfortunately had to append our documentation with, like, here explicitly are the sensitive pieces of data that are relevant to our 3rd party application. This is sensitive. This is sensitive. Like, this is not sensitive. This is, like, the application ID, not sensitive. Like, do not try to encrypt this. Do not try to secure it. Because people will try. Like, how do I do this? I'm like, you can't. Like, stop it.
Matteo Collina [00:39:20]:
Like, this has to be public, on your website, in your application. People have to be able to see it. You're not gonna be able to get around that, and I feel like it's more than just experience. I feel like there's a whole level of pragmatism there, like weighing the cost versus the reward of actually trying to sanitize a piece of information. And having a third party testing service, as you mentioned, just reinforces that in a way. Like, you are going to have to expose that to be tested, which shows that it's not actually sensitive information.
AJ Funk [00:39:52]:
Yeah. Definitely. To me, it just reminds me of, like, the myth of 100% test coverage. Right? It's like, can we 100% encrypt everything? Absolutely not. Like, people your end users need to see this information. I've seen some interesting attempts to kinda obfuscate those things. Like, I've seen some libraries that prevent you from opening the JavaScript console, for example. And it's like, what are you hiding in there? Maybe maybe you should just not put sensitive things
Matteo Collina [00:40:17]:
in there.
Will Button [00:40:19]:
Here's a here's a wild thought. How about
Matteo Collina [00:40:21]:
you just don't do that?
Jillian [00:40:22]:
Have your credentials, like, encoded in the HTML on your page. Maybe.
Matteo Collina [00:40:27]:
I I mean, I can't believe you 2 are joking about this, honestly. Like, one of the most common attacks against So
Jillian [00:40:34]:
I don't do UI, so I can joke about all this because none of this is
Matteo Collina [00:40:38]:
It's okay. It's Absolutely none of this. I'm just
Jillian [00:40:40]:
like, uh-uh.
Matteo Collina [00:40:42]:
Jillian, you'll have plenty of opportunity to get your models encoded with, AWS access keys and secrets, and then you just ask the model, hey. Can I have an access key and secret that are valid that work for any AWS account?
Jillian [00:40:55]:
I did actually accidentally push my AWS credentials to GitHub once, and, like, the amount of emails that I got from AWS was just, like, it was unreal. It was a very it was a very bad day for me. It was a very, very bad day. So I've done other stupid things, but I don't do the same stupid things. So I can sit here and be very smug about this. Like, this is.
Will Button [00:41:18]:
That is like that is I've always I've often wondered about that. Like, the speed that AWS and other malicious people can identify that you committed an AWS access key to a GitHub repo
Jillian [00:41:31]:
It was instant. It was, like, instant right then. Because as soon as I did it, I was like, oh, no, and tried to, you know, and, like, try to, like, make the GitHub repo private. And, nope, it was instant. They knew. They knew it was out there.
Matteo Collina [00:41:44]:
Yeah. I mean, it's bad. I mean, I think I saw a bunch of statistics on this that for AWS keys on GitHub, it's about 30 seconds to 2 minutes after having been exposed in the repository, anywhere, in any format. So, like, a commit at the beginning of the repository where it was there but then got removed, so it's not in plain text anymore, you have to go back through the Git history. It's still about 2 minutes. Then there's exposure on, like, Stack Overflow and places like I don't know who uses Facebook in connection with their work, but that was another place and then Instagram and Reddit, somewhere between 2 and 4 or 5 days, and then there's a couple other ones where it's 6 and more.
Matteo Collina [00:42:23]:
Some of those you have to thank, like, GitHub for. Like, they'll actually discover secrets there. So if you provide a third party application that has credentials, like, at auth risk, we have our secret keys registered there. So if one of our customers exposes keys for our service on GitHub, we'll get notified, automatically revoke those keys, and send them an email telling them that they did something that they probably did not wanna do, multiple times if that's if necessary because that's happened as well.
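Given how fast leaked AWS keys get picked up, a cheap local safeguard is to scan outgoing changes for the documented access key ID shape before committing. A minimal sketch follows; the regex covers the common `AKIA`/`ASIA` prefixes only (not every key format), and the matched key below is AWS's own published example credential, not a real secret.

```javascript
// AWS access key IDs have a documented shape: a four-letter prefix such
// as "AKIA" (long-term) or "ASIA" (temporary) followed by 16 uppercase
// letters or digits. Scan a diff or file for that pattern.
const AWS_KEY_PATTERN = /\b(?:AKIA|ASIA)[0-9A-Z]{16}\b/g;

function findAwsKeys(text) {
  return text.match(AWS_KEY_PATTERN) ?? [];
}

// A pretend staged diff containing AWS's documented example key.
const diff = `
+ const config = {
+   accessKeyId: "AKIAIOSFODNN7EXAMPLE",
+ };
`;

const leaks = findAwsKeys(diff); // one finding: AKIAIOSFODNN7EXAMPLE
```

Wiring something like this into a pre-commit hook catches the mistake before the push; real tools such as git-secrets or GitHub's own secret scanning do the same job far more thoroughly.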
Will Button [00:42:54]:
I wanna switch topics here real quick, AJ, because you've been with Rainforest QA for over 7 years now.
AJ Funk [00:43:01]:
Wow.
Will Button [00:43:02]:
Yeah. Which is unusual in the tech industry. So I'm curious about what are the, what are the things that you look for in a job that have been fulfilled at Rainforest that keep you there that long?
AJ Funk [00:43:20]:
Mhmm. Yeah. For sure. 1st and foremost is the people. Actually, when I was interviewing with Rainforest, the last person I talked to told me something. Like, I stay at Rainforest because of the people. And I was like, okay. That's that's a word.
AJ Funk [00:43:35]:
That's what everyone says.
Will Button [00:43:37]:
Oh, we're family. Right? Yeah.
AJ Funk [00:43:40]:
And then I quickly drank the Kool Aid, I think, and I found myself saying that in interviews. And I'm like, I know this sounds like a load of crap. And I think, you know, the hiring process is super, super important. Right? Yeah. Both finding people that are qualified for the job, obviously, but also good culture fits. We have a pretty small team, so there's nowhere to hide. If you are not doing your job or you're not up to par, you're gonna be exposed pretty quickly, which leads us to have a very reliable team.
Matteo Collina [00:44:12]:
You know? We we are distributed
AJ Funk [00:44:12]:
globally, so there's a lot of hand off. You know? I'm going to sleep. You're waking up. Here's what I did. And I trust when I wake up that you're just gonna have this thing done. And if you're not one of those people, you're probably not gonna fit at Rainforest. So really, really qualified, experienced, smart, reliable people make life so much easier. And then the other piece of it is, you know, the mission that we're on, the technology that we're building.
AJ Funk [00:44:42]:
I think, when I was first exposed to it, the first time I shipped code with Rainforest, it was kinda like, wow. How have I been shipping code before this? And the answer was I was probably breaking things all of the time, and you don't notice until a user catches it in production 2 days later or whatever. And it's something that I'm really passionate about. I think as a front end engineer, we get really caught up on the details. Right? There's always visual layers. There's these very specific human interactions. I like building things that humans are actually interacting with. And that kind of naturally leads you to a quality assurance mindset. Right? I want everything to be perfect all of the time.
AJ Funk [00:45:26]:
How do I ensure this? And so the combination of really great people and working on something that I'm actually really passionate about, and I want to see the rest of the world adopt these correct ways of testing things, in my opinion, of course, just makes it makes it easy to work here. Yeah.
Will Button [00:45:43]:
Right on. That's cool. That's cool. Is there a you guys obviously do a lot of, front end type testing. Is there a particular industry or vertical that you have got a lot of experience in or something that has worked really well that makes a really cool story?
AJ Funk [00:46:07]:
Something I've been involved in that makes a cool story.
Matteo Collina [00:46:10]:
Oh, I
AJ Funk [00:46:10]:
don't know if I have a good answer for you, honestly. I've been I like I said, I've been at Rainforest for so long. That's all I can think about, I guess.
Will Button [00:46:17]:
Right. Do you do you attract, like, a certain, like, customers with, like, financial apps or with, like, web based gaming apps, or is there a particular vertical that tends to gravitate towards your sir your service?
AJ Funk [00:46:31]:
I think not really. And I think that's one of the things that makes it cool is it's a very generic testing tool. Right? There are some limitations. But in general, if you could load your app on a machine, you could probably test it with Rainforest, not caring about what the tech stack is, those kinds of things. So there's a very wide range of users that we have. Yeah, there's some financial companies doing some, what I always find interesting, kind of, like, testing visually things like spreadsheet style apps, like their tables and things like that. And then we have some really cool, like, visual tools, like drag and drop interfaces where you're building things, like, you know, Lego style building, where there's probably, to my knowledge, not any other great way to test something like that. Like, what do you say? Are all my Legos on the page? Yeah.
AJ Funk [00:47:28]:
They are. Are they kinda oriented this way? Like, yeah. They are. But how does it look? Right? How what does the user see? So the the real sweet spot is really visual based applications because I don't think there's other great solutions for them out there. But in general, being a kinda generic visual testing application, it really applies to anything.
Will Button [00:47:50]:
Right on. For a lot of web based front ends, it's all Node.js based. Do you have a favorite Node.js type tool? Are you, like, a React fan or Next.js or Vue? Got a personal preference?
AJ Funk [00:48:09]:
Yes. I am a React fanboy for sure. I started, you know, rewind all the way back to, like, the jQuery days and
Will Button [00:48:19]:
stuff. Right.
AJ Funk [00:48:19]:
I see that, and I have nightmares still. We actually have some of that floating around in our, like, our admin applications and stuff where it's like a Rails back end, and they're like, yeah, we got jQuery in there. And then my first thought is always, like, well, how do you test that jQuery? And the answer is we don't. We just click through a couple of pages and call it good. And I started, I started with Angular back in the day.
Will Button [00:48:45]:
Oh, right on.
AJ Funk [00:48:47]:
But Angular 1, anyways, was kind of the reverse of React where, like, we're gonna put your JavaScript in your HTML. React took the approach of we're gonna put your HTML in your JavaScript. You know, just smush it all together. And it's come a very, very long way, I must say. So, yeah, I find working with React very easy and intuitive, and it's very nice that the general JavaScript community has supported that and has pushed that forward. Because especially, you know, with all software and technology, but especially in front end development, it's really easy to pick the wrong tool long term. Right? I picked this thing. It's great. And then we find a better way to do it, and they just abandon the project.
AJ Funk [00:49:30]:
Right? This is true with, I mean, anything open source. And we've run into this a lot of times, right, even with open source testing tools. And, actually, we had a very large Enzyme test suite on our React application, and we ran into something like this. There was a new way of testing React apps, which was the React Testing Library. And Enzyme kinda said, yep, that's a better way to do it. We're gonna stop supporting after a version like React 16 or React 17. I'm like, well, we want to upgrade to React 17.
AJ Funk [00:50:06]:
It's like, well, none of your Enzyme tests work. Sucks for you. Yeah. Exactly. Exactly. Too bad for us. And so now you start weighing the options of, well, how do we upgrade? Right? Do we just say, let's not upgrade, which is gonna bite you really quickly. Right? Especially at the pace all these JavaScript libraries are being updated.
AJ Funk [00:50:25]:
I want that new shiny thing. I want support for that thing, and I don't wanna be stuck in the past. The more you get stuck in the past, the harder it is to catch up with everything else. Right? And so our options were basically rewrite all these however many thousand Enzyme tests, or we could just nuke them all, which reminds me of, like, these memes I see about, like, junior engineers and the intern, with their commit messages: I nuked all the tests because they were failing, and I kinda made them pass. Or, like, return true in all the tests because, yeah, then they all pass.
Jillian [00:51:03]:
The only reasonable way to do things.
AJ Funk [00:51:05]:
Yeah. And it sounds kinda like an overreaction, but as we started to kinda think about these testing philosophies, we're like, we have end to end test coverage on all of these things. Right? And a lot of the front end tests, even though they're unit tests, they load things in a headless browser and are kinda recreating what an end to end test does. So we chose to keep all of our actual unit tests, all the kind of business logic that didn't use Enzyme, nuke all the Enzyme tests, and just lean into our Rainforest tests, because we know if the Rainforest tests are passing, we don't need all of these redundant tests anymore. And instant productivity boost. Like, I don't have to maintain all of these things anymore. I don't have to upgrade them. I could just get them out of my way, and I can upgrade all my dependencies.
AJ Funk [00:51:57]:
And because we have really good end to end test coverage, we could do that confidently and know that we're not breaking things. So, yeah, choosing dependencies can be quite tricky sometimes, especially in the JavaScript world.
Matteo Collina [00:52:09]:
Did you find some places where you still wanted to reintroduce some of the React Testing Library for, I don't know, component level testing of the UI? Or have you kept, like, 100% of the decision to not have that layer of testing anymore regarding the UI components, because you focus on the full picture end to end testing for the user flow and also whatever interaction you have with the back end?
AJ Funk [00:52:37]:
Yeah. We still have some of it, and we drew the line at user interactions. Right? So React has this idea of hooks, which are basically just chunks of logic, just a function that I can use inside of a component. We stopped having any React Testing Library tests that were actual user interactions, no clicking on things, and instead we used it to test the functionality, the logic of those hooks. So it's essentially a unit test, but it's testing a specific React thing, and it requires the testing library to do that. Everything else kinda gets hoisted up to the end to end testing level. And it's nice to just say, hey, designer. Hey, product manager.
AJ Funk [00:53:19]:
Like, go at this test coverage while I'm busy hacking on things, and I don't have to worry about this anymore.
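The line AJ draws can be approximated by keeping a hook's state logic as a pure reducer that plain unit tests can exercise with no rendering, no click simulation, and no testing library at all; the actual hook would just wrap it with `useReducer`. The reducer and its actions here are hypothetical examples, not Rainforest's code.

```javascript
// Pure state logic that a hypothetical useCart hook would wrap with
// useReducer. Because it is a pure function, it needs no React at all
// to be tested.
function cartReducer(state, action) {
  switch (action.type) {
    case "add":
      return { ...state, items: [...state.items, action.item] };
    case "clear":
      return { ...state, items: [] };
    default:
      return state;
  }
}

// Plain unit tests on the logic, independent of any rendering.
const empty = { items: [] };
const one = cartReducer(empty, { type: "add", item: "book" });
console.assert(one.items.length === 1);
console.assert(cartReducer(one, { type: "clear" }).items.length === 0);
console.assert(empty.items.length === 0); // reducer never mutates input
```

The user-facing clicks that would dispatch these actions are then left to the end to end layer, which is the split described above.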
Will Button [00:53:24]:
So that's actually a go ahead, Jillian.
Jillian [00:53:27]:
Oh, I was just gonna say I'm so impressed with people who can keep up with, like, the UI and JavaScript land because I've tried. I've tried, like, a few times, and then everything changed. I was like, alright. I'm not doing this anymore. I'm gonna go do high performance computing. That hasn't changed in, like, 30 years. It's gonna be hard.
AJ Funk [00:53:45]:
Yeah. You definitely start feeling like Sisyphus. You're just pushing that rock up the hill in this state. And every time you get to the top, someone tells you that you're actually on the wrong hill. So
Will Button [00:53:57]:
I was gonna say that seems like a really interesting approach that I hadn't thought of when we initially started talking. But, like, you can replace having to write a lot of your tests in your React app by using Rainforest. Right? By just focusing on what the end user experience is and testing for that, you can save yourself from having to write a lot of tests in the React Testing Library.
Matteo Collina [00:54:23]:
So that's where the trade off is, though. Right? Because these tests then are testing more functionality at once. And so if there is a problem, you don't necessarily know, like, which line of code is causing the issue or what interaction there is. So, you know, there really is, like, how valuable is that flow? I think that's something that, as you pointed out, AJ, you sort of have to determine upfront, like, where is the value of your testing and how do you get the most value out of which pieces you're adding and where you're validating, etcetera. And so, yeah, I mean, in your case, the Enzyme tests weren't actually providing the right value in the first place. Yeah. So definitely switch them over.
AJ Funk [00:55:01]:
Yeah. Absolutely. And it is kind of a question of redundancy too. Right? Like, is redundancy good? Sometimes. Like, I can be really, really sure, and I can have some extra confidence that the thing isn't gonna break. But most of the time, it just slows us down. Right? I find that often the best time to add more unit test coverage is when something breaks. Right? Because if my end to end tests are all passing, but something's broken, very often, it's some kinda edge case.
AJ Funk [00:55:28]:
Right? It's either some weird user behavior, some weird input, some weird sequence of events. And those things are usually better captured in a unit test, because it's easier to kinda implement that specific scenario, that specific line of code that is the offender here, versus creating, you know, a whole new end to end test to just cover some edge case. Those tests are gonna just get longer and longer and just be kinda confusing, honestly. It's like, well, why am I just, like, clicking in all of these random spots doing these things trying to cover these edge cases? Like, just write a unit test for it and call it good.
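As a sketch of that edge case pattern: when a bug slips past the end to end suite because of a weird input, the fix plus a tiny targeted unit test is usually cheaper than a new browser flow. `parseQuantity` is a hypothetical helper invented for this example.

```javascript
// Hypothetical helper that once broke on odd form input. The edge cases
// below are pinned by unit tests rather than by new end-to-end flows.
function parseQuantity(raw) {
  const n = Number.parseInt(raw, 10);
  if (Number.isNaN(n) || n < 1) return 1; // fall back to a sane default
  return n;
}

// Regression tests for the exact inputs that slipped through.
console.assert(parseQuantity("3") === 3);
console.assert(parseQuantity("") === 1);     // empty field
console.assert(parseQuantity("-2") === 1);   // negative input
console.assert(parseQuantity("2abc") === 2); // trailing junk parseInt accepts
```

Each assertion targets one specific offending input, which is much clearer than a long end to end test that clicks around trying to reproduce the same scenarios.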
Jillian [00:56:06]:
I really like the emphasis on, you know, testing for business logic and just in general having, not everything controlled by the engineers. Because I find for myself, you know, like, I'll I'll write something and then I'll hand it off to a user. And then they immediately start using it in some way that I didn't even think of. And then, you know, and then we do, like, a couple rounds of this. So being able to cut back on that person who writes thing, who does not actually use the thing, and then just immediately being able to push it off to an end user.
AJ Funk [00:56:34]:
Yeah. Absolutely. And things like testing and staging environments are great for this. We push code to those environments all the time, give them to a PM, and say, go run and try and break this thing. Right? You don't always wanna do that in production. Right? Like, if the thing's not fully baked, I don't wanna break something in the database or whatever. And so having places to push that and have people early in the process iterating on this, finding the major bugs, the minor bugs, the stylistic bugs, is super, super valuable, rather than having one of your users find it later.
AJ Funk [00:57:09]:
Right on.
Will Button [00:57:10]:
So you live in Tahoe. Do you get outdoors a lot?
AJ Funk [00:57:14]:
I do. I, I live here with my wife and my dog. He's a, lab husky mix. So he kinda thrives in the summer, thrives in the winter. There's lots of snow out right now. And so we're out we're outside pretty much every day, you know, snowboarding, hiking, kayaking, you know, all that kind of stuff.
Will Button [00:57:31]:
Oh, right on. How long have you lived in the Tahoe area?
AJ Funk [00:57:35]:
I've been here for about 5 years now. Right. I grew up in the San Francisco Bay Area, and I was part of the great COVID migration out here. We always wanted to get here eventually, and I was lucky. I was still working at Rainforest at the time and already remote. So the transfer up here, from remote near an office to remote where it actually doesn't matter how far you are from the office, was very easy. And we're also real fortunate that we were not the only ones doing this migration, so we've made lots of friends that were like, yeah, we lived down the street from you in the city, and we all live here now.
AJ Funk [00:58:11]:
So it's it's a very different life, but, we love it, and I don't think we're ever leaving.
Will Button [00:58:15]:
Ah, that's cool.
AJ Funk [00:58:16]:
Yeah.
Will Button [00:58:17]:
Right on. Tahoe's a beautiful area.
AJ Funk [00:58:19]:
Yeah. It really is.
Will Button [00:58:23]:
Cool. Alright. Should we move on to some picks? Before we do, any final thoughts on QA, rainforest, tips, guidance that you wanna leave us with, AJ?
AJ Funk [00:58:37]:
I think, just to recap, it's finding the balance between confidence and velocity. Right? Everybody needs to set their own bar for quality. Like, what is my ratio between confidence and velocity? Determining that for yourself is the most important thing here. And keep in mind that it's not only velocity; a lot of times it's the sanity of your engineers. Right? Like, we don't wanna spend all of our time writing tests. So finding that balance and doing things in an efficient way is the key to success.
Will Button [00:59:14]:
Right on. I and I think that's very use case specific too, you know, because the right answer for a financial app is gonna be very different than the right answer for, like, a social media app. Absolutely. Cool. Alright. Jillian, calling you out first. What'd you bring for a pick today?
Jillian [00:59:36]:
I am gonna pick Drive by Dave Kellett. It is a sci-fi graphic novel, and I think it's, like, releasing the 4th one this summer. But it's so good and it's so nice and wholesome, which is very nice because, like, I really like sci-fi, but I don't really like violence or gore or, you know, icky fluids. I don't like any of that. Okay? I don't like any of it. And this is just so wholesome and adorable, and the main character is very cute. So that's it. That's the pick.
Jillian [01:00:07]:
Right. I got a whole bunch of copies for Christmas, and I'm, like, making people read them. And I'm gonna have, like, a little indie graphic novel cult going on soon enough. It's gonna be great.

Matteo Collina [01:00:15]:
Right on.
Will Button [01:00:20]:
Alright. Warren, what'd you bring?
Matteo Collina [01:00:23]:
Yeah. So I just got back from a long hiatus away from the show. I was on vacation, and so I think this pick is really accurate. Very short book, highly recommend: the Tao Te Ching by Laozi, who is the founder of Taoism, spelled Taoism, in case you've seen it written but never pronounced before. And there's just so much good stuff in the book that can be applied to everyday life, working environments, etcetera. It's incredibly short. There's only, like, 108 principles or so. And it starts off great with: the Tao that can be told is not the eternal Tao.
Matteo Collina [01:01:05]:
Like, you can't write down the whole truth. There is something that's never said. It's impossible to convey everything. And I know it sounds so philosophical, you know, to go down this path, but I feel like going through these really helps to put into perspective thinking outside the box when solving certain problems or interactions or the communication we have every day. Highly recommend.
Will Button [01:01:27]:
Right on. Cool. AJ, what you got for us?
AJ Funk [01:01:32]:
Yeah. My reading and listening choices are kinda all over the map, but I did have an interesting one recently. It was called The Light Eaters. It's about plants, and specifically this idea of plant intelligence. So, obviously, intelligence is a loaded word. They're not intelligent like you and I. They're not debating QA strategies and things like that, but they do have a lot of intelligent-like behavior. You know? They communicate.
AJ Funk [01:01:55]:
They recognize their kin. They hear sounds. They transform themselves based on the visual appearance of the environment around them. And so I found it really interesting, and it gave me a lot to think about, especially, you know, when I'm out in nature with the wife and dog, just kinda staring at trees and stuff. So, yeah, check it out.
Matteo Collina [01:02:15]:
No. Oh, yeah. That's intelligent for sure. A 100%. Totally with you. There's a good one. If you are out and there's plants or grass being cut and you notice the smell of, you know, freshly cut grass, what is that? It's essentially a fear pheromone that's been sent off to warn other grass that there is danger around. Like, that is the sign of intelligent life.
AJ Funk [01:02:38]:
Yeah. For sure. There's lots of super interesting examples in this book. It's just, like, plants acting like animals, essentially. And it's kind of a mind-blowing experience.
Will Button [01:02:51]:
I read a book recently. I can't remember which one it was, but I've been studying mushrooms a lot lately, and this book showed where mushrooms actually act as a communication agent for trees in the forest. And so, like, a specific, you know, set of insects can start attacking trees on one end of the forest, and then the mushroom, because it's the mycelium that grows underneath the entire forest floor, will relay that information to the other trees in the forest. And so by the time the insects work their way down to those trees, those trees are producing a scent or a pheromone that actually repels the insects by the time they get there. And I thought that was super cool.
Jillian [01:03:39]:
That is cool. I'm in a, like, mushroom and foraging Facebook group, and everybody just, like, takes pictures of fun mushrooms that they find when they're out and about. And it's just such a nice little group because it's so chill. That's it. There's, like, no drama. There's no nothing. It's just, look at this mushroom I found.
Will Button [01:03:58]:
There's an app called iNaturalist that I use for that. You can take a picture of not just mushrooms, but anything you find that you can't identify, and then upload it to iNaturalist, and it will try to auto-detect what it is for you, but then other people will come in and confirm or tell you what that actually is. That was pretty cool.
Jillian [01:04:19]:
I used to do that a lot as a kid. I'd have, like, the field guides and go out with my field guide and try to, like, identify all the plants, but now we have an app for that.
Will Button [01:04:27]:
There's an app for that.
Jillian [01:04:28]:
Always.
Matteo Collina [01:04:29]:
Oh, Will, what's your what's your pick?
Will Button [01:04:32]:
My pick is a series on Netflix called Cunk on Earth, and I thought my sense of humor was, like, really, really dry. But this lady
Matteo Collina [01:04:45]:
She's a man.
Will Button [01:04:45]:
She takes it to a whole new level. This series is just hilarious. It's, you know, like a history of Earth, basically, but she'll sit down with legitimate, world-renowned experts in their field and ask them the most off-the-wall questions. And to me, that was the highlight of the series: just the looks on their faces when she would ask them these questions that had absolutely nothing to do with what they were an expert in. But super entertaining series, definitely 10 out of 10 stars. Cunk on Earth on Netflix.
Matteo Collina [01:05:21]:
100%. And, you know, there's actually 2 other things. There's Cunk on Britain, I think, and then there's, like, one on Christmas and Shakespeare. So you have some extra biscuits too.
Will Button [01:05:32]:
Oh, sweet. I will have to check those out because I love her sense of humor.
Jillian [01:05:37]:
A mafia mystery. I've never heard that term. That's fun.
Will Button [01:05:42]:
Yeah. It's very, very accurate. Alright. That brings us to the end of the episode. Thank you, everyone, for listening. Jillian, Warren, thank you for joining me in hosting the show. And, AJ, thanks for coming on the show, man. It's been a pleasure talking to you.
AJ Funk [01:05:59]:
Thanks so much for having me. It was a lot of fun.
Will Button [01:06:01]:
Right on. Glad to hear that, and I will see everyone next week.