CHARLES MAX_WOOD: Hey everybody and welcome to another episode of JavaScript Jabber. This week on our panel we have AJ O'Neal.
AJ_O’NEAL: Yo, yo, yo, coming at you live from a dark room in Provo.
CHARLES MAX_WOOD: Dan Shappir.
DAN_SHAPPIR: Hi, coming all the way from Tel Aviv.
CHARLES MAX_WOOD: Aimee Knight.
AIMEE_KNIGHT: Hey, hey, from Nashville.
CHARLES MAX_WOOD: I'm Charles Max Wood from DevChat.TV. And this week we have a special guest and that's Dave Karow. Dave, do you want to introduce yourself?
DAVE_KAROW: Yeah, hey, hey, Dave Karow. I'm coming to you from Redwood City, California, kind of the tip of Silicon Valley. Looking forward to today's conversation.
CHARLES MAX_WOOD: I was gonna say, you're kinda there between San Jose and San Francisco, right?
DAVE_KAROW: We are. I guess, yeah, Silicon Valley's kind of morphed. It's the whole peninsula, maybe, from the bottom to the top now, but, yep, we're right in the middle.
CHARLES MAX_WOOD: Very cool.
When I'm building a new product, G2i is the company that I call to help me find a developer who can build it. G2i is a hiring platform run by engineers that matches you with React, React Native, GraphQL, and mobile developers that you can trust. Whether you are a new company building your first product or an established company that wants additional engineering help, G2i has the talent you need to accomplish your goals. Go to g2i.co to learn more about what G2i has to offer. In my experience, G2i has linked me up with experienced engineers that can fit my budget, and the G2i staff are friendly and easy to work with. They know how product development works and can help you find the perfect engineer for your stack. Go to g2i.co to learn more about G2i.
CHARLES MAX_WOOD: Now you work for Split, do you want to just kind of give us a rundown as far as like what you do there? And then we can dive into the ideas behind Killing Release Night, which is always a great opportunity to get some pizza.
DAVE_KAROW: Yeah.
CHARLES MAX_WOOD: Like forever clock.
DAVE_KAROW: So I'm a continuous delivery evangelist. You might also use the title dev advocate. I focus on being a continuous delivery evangelist because my career has mostly been about sustainable software delivery. How do we make this something that we can do over and over again without burning out? And so I get to travel around the world and talk about new patterns where people are rolling more stuff out faster, but more safely, and learning. It's a pretty good gig, and I hope I can share a few things today about that. My background really comes from web performance testing, load testing, IT project management, kind of a bunch of different things. And the last couple of gigs have really been about sort of shifting left and doing more sustainable software delivery practices, so we don't work on a death march to a big explosion and then have to recover. How do we make this something we can do over and over again and have it be mostly smooth? Nothing's perfect, but how do we work as professionals and actually spend more time high-fiving and less time calling home saying we're not going to come home for dinner?
CHARLES MAX_WOOD: I was gonna try and sarcastically say I've never actually done the death march to the explosion. But I've kind of suppressed those memories. Let's just put it that way. Some of those are awful.
DAVE_KAROW: Yeah, I sometimes when I give a talk, I'll ask how many people have worked on a project for six months that ended up on the shelf. And I usually raise two hands.
AJ_O’NEAL: Those are the best. You know?
DAVE_KAROW: Yeah. And you know, modern software delivery is really about not having so far between the iterations, right? I mean, geez, what if we worked 18 months on something that doesn't actually hit the mark? That's even worse, right? So it's good to have perspective on how we can, you know, you're never going to be perfect, but how can we actually learn way, way faster and have the explosions be smaller and hurt fewer people?
CHARLES MAX_WOOD: Yeah, absolutely. Yeah. I mean, I think the one that kind of comes to mind the most was my last full-time job. Though I did have some freelance contracts that never saw the light of day, but that was mostly because my clients weren't ready to launch them more than anything else. So it was market management stuff, not, you know, that I hadn't deployed it every week. But yeah, we worked and worked and worked on these features. Turned out they were features that our customers didn't want. Yeah, then we launched it, and then we were there till like 4 am trying to make everything fit together the way that it fit together on our development machines.
DAVE_KAROW: Yeah. I think one of the things we wanted to get into today was really, what is this idea of killing the release night, and how does that work, and why do it? It's funny, if you go back, you know, I think the book Continuous Delivery hit its 10-year birthday recently, and there's a great quote in there that Jez Humble wrote: you know, the reason Dave and I wrote the book in 2010 was we just didn't want to spend our weekends in data centers doing releases anymore. We thought it was a terrible way to spend our time. It was miserable for everyone. We actually wanted to enjoy our weekends. It was about making releases reliable and boring. And the thing is that we all got so used to releases being a huge fire drill. But that's not the way people deploy software who are deploying many times per day. They don't have hundreds of fire drills per day. They actually figure out how to make it way less likely to have a fire drill. And yeah, the free pizza is nice.
CHARLES MAX_WOOD: But it's not free. If you're up that late, it is not free.
AIMEE_KNIGHT: Sorry. They just bring you dinner and do your laundry.
DAVE_KAROW: You know, I worked at a place where they had this standard dinner that would come at seven o'clock if you were working late. And, you know, that was just a different mindset, which was, they were kind of hoping that you'd never leave. But when you're designing for it being broken all the time, that's no way to run a railroad, right? So going back to Continuous Delivery, the idea was, how do we make it boring to do a release and a deployment, right? But now we're in a world where we want to be able to flow software all the way to the user really, really often. And how do we make that rarely a big deal? That's really what we're going to get into today.
DAN_SHAPPIR: I would like to mention that I would rather not be paid in pizzas.
AIMEE_KNIGHT: Same, same. I have a precursor question, which I'm sure you'll probably get into, but I think it's going to kind of paint the picture here. So literally, like five minutes ago, before we jumped on our call, this is where I'm at currently: we have very few unit tests and a lot of end-to-end tests, and so we're kind of trying to flip that pyramid. We use XITE, so the front-enders are able to immediately deploy to production right now, but it's not as hardened as we would like. A lot of things get out that shouldn't, because we don't have a lot of unit test coverage. So I guess my question is, in your opinion, do you think that kind of flipping that testing pyramid is typically a precursor to this?
DAVE_KAROW: Yes, that's a great question. Because one of the things we'll get into is testing in production, right? And sometimes people say, oh, so you're not testing at all until you're in production? And absolutely not. It's like the joke: ah, I'm really good at testing in production, I'm just really bad at fixing the issues. We all end up testing in production no matter what, no matter who you are and what you're doing ahead of time. But the teams that are flowing a lot of software out often have really good unit tests and sort of smoke-test-level coverage that's all automated. In the extreme case there's continuous deployment, right? Where a person commits code, and if everything passes the tests, it flows all the way to production. But let's just talk about continuous delivery, which is, I'm trying to push small changes out really often, all the time. The only way these pipelines work is if there's automated testing that handles the fundamental components. I worked at SOASTA a while back, and they had a really good practice on the front end, which was, if you wanted to build a new front-end feature, in order to get your manager to agree you'd finished the work, you had to deliver two things. One was the feature; the other was a test to prove that it worked. And what was cool about this was those tests ran at least three times a day. So if you committed some code and it broke somebody else's code, or someone broke yours, you wouldn't go past your morning coffee break or your lunch or your afternoon coffee break before you knew that, right? So you knew within minutes or hours, not days and weeks. Finding out six weeks later that you've got a bug sucks, right? So they basically committed that anytime they built a new piece of the front end or changed one, they had to have a test script that ran through it. Now, you were talking about end-to-end: the more you get to end-to-end, the more you get into brittleness and interdependencies. But these were tests about how this new widget they added is supposed to do what it does. It wasn't an end-to-end user scenario. It was really, I've just changed this part of the page that does this thing; here's my unit test that proves it works, right? And so they had this library of those that were always kept valid, and they would catch things quickly. So I think absolutely, flip that pyramid around. You probably know that end-to-end tests are way more brittle, and things break because you've improved something, not because something's broken; your tests just don't work because you moved the cheese. And waiting until you've got all the pieces of a new solution done before you can write an end-to-end test means that you go a long time before you ever test it, which is crazy. So yeah, you want to have tests for the small pieces: tests for the API, tests for the piece of a page that's doing something, tests for whatever it is you're iterating on.
DAN_SHAPPIR: I have to ask something. It seems to me that at least partially this is all made possible thanks to the fact that a lot of us are much more often developing services rather than deliverable products. My own experience, prior to working where I work now, which is at Wix, was at an enterprise software company that wasn't even necessarily developing for the web. And a lot of the time we were literally delivering, you might say, a CD to a customer. And so, every couple of months, we would ship out a new version of the software. This really changes how you work. When I moved to Wix and it's all cloud-based services that you can deploy whenever you want, that kind of changes the dynamics and makes all of these things like continuous delivery and continuous deployment possible. So is this methodology also relevant when you're talking about delivering, like, old-school software, or is it only relevant when you're doing online services?
DAVE_KAROW: Would you consider laser printer firmware to be old school?
DAN_SHAPPIR: I think I would.
DAVE_KAROW: Yeah. So there's a great story at HP where they actually moved laser printer firmware from the sort of slow, broken way of building stuff to using continuous delivery. And I was actually at DeliveryConf in Seattle a few weeks back and had a fantastic hallway conversation with Dr. Nicole Forsgren and with David Farley, and one of the attendees was saying, well, I don't know if this will really work in my shop. And it was great, because Dr. Forsgren said, look, the reason I got into research was I was tired of hearing my boss say, those are interesting ideas, but they won't work here. And so she was going to prove with data that it would work in mainframes, and it would work in firmware, and it would work in SaaS, and so on. And Dave Farley has done medical devices, and, you know, obviously a trading exchange kind of sounds more like SaaS than an installed piece of software, because it's this living, breathing beast. But the reality is that these practices apply. With SaaS, you need this, and one of the cool things about SaaS is you don't have to get your customer to install it, right? Like you did with a CD. But these practices, if you look at the book Accelerate, an awesome book if you haven't read it yet, Accelerate really documents how this applies to a wide variety of environments. And there's sort of a couple of handfuls, well, okay, fingers and toes, of concepts that, if you follow them, let you more reliably and faster build better software with fewer incidents. So it actually does apply. I mean, Dan, I don't know if you remember back in the old days when LoadRunner would do a release every two years, and people would say, wow, two years, that's kind of a long way apart, to take that long to build something. And it wasn't that it took a long time to build something. It's that every time they did a release, they broke so much stuff that it would cost the customers hundreds of thousands of dollars to fix their existing test infrastructure when they installed the new release. And so the customers wouldn't take it more often than every two years.
DAN_SHAPPIR: Yeah, it's kind of amusing that you bring up LoadRunner, because at that previous employer, I was actually involved in a project where we were working in conjunction with the LoadRunner people. So we were supposed to deliver something, and they were supposed to deliver a LoadRunner version that would provide certain functionality. And we were the startup with the startup mentality. So when the customer said, hey, can you deliver that? We said, sure, we can do it in a couple of weeks. And the LoadRunner people said, yeah, for sure, we'll put it into our backlog. It will be ready in two years. Yeah.
DAVE_KAROW: Right.
DAN_SHAPPIR: So. Yeah.
DAN_SHAPPIR: I could definitely identify with what you're saying.
DAVE_KAROW: But it gets back to what Aimee was saying earlier about unit tests versus end-to-end tests. The more you can have smaller pieces that are independently testable, everyone can kind of mind their own business and cause fewer end-to-end problems. People talk about contract testing or whatever; with an API it's kind of a really clean concept, or a little microservice, but really anything you're building: if you know what it's supposed to do, and you have the ability to prove that it does what it's supposed to do, and whenever you change what it's doing, you change a test. You know, it's funny, theoretically we're not really talking about testing in today's call.
AIMEE_KNIGHT: It's so much a part of it, though.
DAVE_KAROW: Yeah, I know. But it was a great question, because it sets up these concepts: how do I get to the point where value flows out to the customer often and with greater ease? How do I flow more stuff so it's not a two-year backlog, right? And the way we do that is we actually decouple. So another great book: The Unicorn Project. Behind me on the wall is a picture of a unicorn. The Unicorn Project really gets into just a handful of core values that you follow, and one of them is locality, which is, how do I make it so I can work, or my group can work, kind of alone and not have to worry about too many interdependencies, so that I can actually make a difference?
AJ_O’NEAL: Is that the sequel to The Phoenix Project?
DAVE_KAROW: It is.
AJ_O’NEAL: Okay.
DAVE_KAROW: And it actually takes place in parallel. Gene Kim wrote that one too; it happens at the same time as The Phoenix Project, though he wrote it after. But it's actually a different perspective on the same things happening.
AJ_O’NEAL: Like Ender's Game versus Ender's Shadow.
DAVE_KAROW: Right. That one's another really good, easy read, kind of a page-turner. I mean, if you've been in the space long enough to have some scars, both those books are pretty easy to read pretty fast, because they tie together a lot of interesting concepts. And testing in production is one of those, which is, I need the ability, when I do start rolling stuff out, before I affect everybody, to test it without affecting my customers, on the actual infrastructure I'm going to go live on, to make sure it's really behaving like I thought. And then, how do I get feedback quickly before I roll something out to everybody and blow it up?
AIMEE_KNIGHT: Yeah, I really like that idea, because, I don't know, I feel like, and this is just my personal opinion, but I do kind of feel strongly about it, there are so many organizations that don't utilize QA or QE in the best way possible. I really don't like working in organizations where I as a developer just toss stuff over the wall to them. I feel like that's bad from a mentoring standpoint. If you break something, you should feel that pain and you should fix it, because there's just something about holding yourself accountable to following something all the way through to production. I feel like it drives the way that you write the code, and your communication, and so many other things, whereas if we just toss it over the wall to QA, yeah, you lose that personal responsibility.
DAVE_KAROW: So, Aimee, there's a great video that the Jira team at Atlassian did on their journey to quality. Maybe I can get it into the show notes after the episode, because I don't have it handy here. But basically, the idea was that they couldn't, I'm using air quotes, afford to have one QA person for every developer.
AIMEE_KNIGHT: Yeah, there's no way it doesn't scale.
DAVE_KAROW: They had like five for 70, right? And instead of having QA be a gatekeeper, QA was a coach to help the developers write more testable code and write their own tests. And it took them five years to go from lots of defects to almost a zero leak rate, where stuff that was broken almost never made it out to production. A really interesting journey; that's a pretty cool video. And another name I'll drop: Talia Nassi. She's worked for WeWork. Talia has several videos on the web. If you look for Talia Nassi, and it's N-A-S-S-I, testing in production, she gives a pretty rigorous talk about how to use these concepts. You know, staging is never reality, and none of your environments other than production are just like production. And so if you don't have the ability to test in production, you're missing out on a lot. So she gives a really good talk. She's usually at testing conferences, but that's also a really good one to look into. They're pretty quick, like 20-, 25-minute talks. Pretty good.
AIMEE_KNIGHT: Yeah. If we just want to be smart with the salaries that we pay people, it seems like, you know, focus QA on things like that rather than stuff that the developers should just be doing, in my opinion, as part of the process of their feature development.
CHARLES MAX_WOOD: Right. Yeah, I want to kind of steer us a little bit here, just because, I mean, this is all important, but this is kind of the night after the release night, right? So you've had a day for people to get on and figure out that you broke all the things, right? But the release night itself often goes poorly. So how do you start making your way toward continuous delivery so that we can actually, yeah, not have release nights?
DAVE_KAROW: Yes. So it's funny, there are a couple of core concepts that build on each other, right? Continuous integration is definitely one of the first things. And for that, you've got to have things like having the source all centralized. It doesn't mean you have a centralized source control server, but you have a way of having one version of truth for your source. And you need to have automated testing that runs when things are checked in. And it's funny, at DeliveryConf, Jez Humble was pointing out, look, the idea behind continuous integration was you needed to commit your code at least once a day. So no long-running branches. This is what people mean by trunk-based development, right? You don't peel off a branch of code and work on it for three, four, five weeks, being completely out of sync with the rest of your team, and then try to merge it later. I know I just described what not to do, but it's true for a lot of people that are listening: you don't want to be doing that nightmare, right? If you're checking that code in on an at-least-once-a-day basis, you're going to find problems faster. And that's one of those quotes I'll often bring up, from Martin Fowler: continuous integration doesn't get rid of bugs, but it makes them dramatically easier to find and remove. So the first thing, Chuck, I would say, is you've got to have continuous integration happening. You've got to have the ability to check your code in and have tests run, so that you can find the, air quotes again, obvious bugs quickly, right? And then on top of continuous integration, you need to work on continuous delivery. And that's really, how do I build releases in a way that's not dramatic: a full-on release, everything that's needed to actually have a deployable product, right? And there's this theory of constraints, which is, figure out the thing that's the hardest, the scariest thing that causes the most trouble, and deal with it first. That makes your life easier the next time you go through the process, right? And so you work on this hygiene: how do I get to the point where turning the crank is not so much drama? And once you've got continuous integration and continuous delivery happening, and by the way, this can be for one developer. People say, oh, this is how you deal with giant organizations. But Gene Kim, when he was on our webinar, was talking about how he's doing all these different projects, and if he didn't have this kind of discipline and integration and automation, it'd be really hard for him to keep up with what he's doing. It's super important when you've got lots of people working on one thing, but it even matters if you're just one person. You shouldn't have a checklist of stuff you've got to do. You should automate as much of that as possible, so there's no psychic overhead to iterate, iterate, iterate.
And then that feeds into where we're going today, which is, if you're going to be automating this stuff and trying to get to the point where you're shipping more often, instead of making shipping a thing you rarely dare to do, how do you take care of what I've identified as a handful of really key goals for progressive delivery? If I want to gradually move my code out to production in a way that doesn't blow everything up all at once, I have some goals, right? I want to figure out, how can I reduce or eliminate downtime, so that I don't have to take stuff down? How can I limit the blast radius when things go wrong? And that's both in time and scope: how many people are impacted, and how long is it broken? Then I want to be able to facilitate flow: how do I make it easy to move lots of separate pieces through the pipe without lots of dependencies that slow people down? And lastly, how do I learn? How do I make sure that I've got feedback that kicks in both on a system level and a user level? And when you stack these practices on top of each other, this is how you get to the point where, you know, maybe you're not going to be shipping every eight seconds, and you don't need to, but you're moving value out to the customer faster and you're actually having fewer incidents. That's one of those crazy, kind of counterintuitive things, but the book Accelerate and the DevOps Research and Assessment folks, in the State of DevOps Report, it's all kind of the same body of knowledge, document that the people who are moving faster actually have fewer incidents, of less severity. They actually are safer. Because if you can ship in 15 minutes or five minutes instead of three days, then if something goes wrong, you can probably fix it pretty quick. And so these things all kind of stack on top of each other. If you don't have discipline around managing source, you've got to start there. If you don't have the ability to automate your building and your smoke testing, you've got to get that. Right? You just kind of work your way up. Does that make sense?
DAN_SHAPPIR: It's interesting. We recently had a podcast episode about security; I forget the number, we'll need to check it out. But our guest brought up exactly the same point, which is that one of the keys to enabling secure web services and web products is being able to ship rapidly, so that if a security problem is found in your product, you're able to rapidly push out the fix for it. So I guess it's essentially the same thing.
DAVE_KAROW: Absolutely. And when you get to the more advanced progressive delivery techniques, you also want to be at the point where you can actually shut something off without even having to do a hotfix. Because with hotfixes, generally you squash one bug and you create two more, right? How do I put myself in a situation where, if something bad does leak out, I can turn it off in seconds and triage it away from the customer, instead of furiously having a war room to figure out, A, what's broken, and B, how are we going to fix it, and then we're going to build the fix and do some minimal testing on it, and we're going to patch, and, you know, right? It's funny, because with technical people I bring up these sort of soft things: it's about less drama. How do we do our job with fewer heroics and less drama? I mean, it's exciting to be a hero.
CHARLES MAX_WOOD: It's stressful too, though, right? Like, yeah, I've got to go slay the dragon.
DAVE_KAROW: Yeah, well, in The Phoenix Project, there's the one guy, and I should have his name at the top of my head. Is it Brent? I think it's Brent, right? Yeah, he's nodding. So Brent knows everything, and whenever something's broken, you can go to Brent. But that means that Brent is completely crushed all the time, because he's in the middle of everything.
AJ_O’NEAL: And he's like the highest-value person getting the least amount of work done.
DAVE_KAROW: Yeah. And he's, like, seriously crispy, burning out.
CHARLES MAX_WOOD: But then he's on a cruise, and he's not there; he doesn't exist anymore.
DAVE_KAROW: Yeah. He's a bottleneck, right? And to the extent that we can actually democratize the ability to get stuff done, and not have it be through a handful of wizards, that's another one of those patterns, Chuck: one of the anti-patterns is having a choke point of wizardry. You know, I was talking to a customer once who was almost up for buying a sort of democratized load testing platform, and he said, yeah, it's great what you've done, but we've kind of built our own thing. And I said, well, how do you do the reporting? Oh, well, we've got this one guy in San Francisco who can do the reports for anybody. And I'm like, yeah, you realize he's going to be a little bit of a bottleneck, right? And they're like, oh yeah, you know, we'll get to that later. So it's tempting to always throw a really smart person at something. But to the extent that you can actually democratize, so everybody on the team has access to understand what's happening and to be able to make stuff happen, that's one of those pattern/anti-pattern pairs: you need to democratize the ability to get stuff done, and you need to avoid choke points, whether they're technical or human bottlenecks, right?
When it comes to test maintenance, the biggest complaint that teams have is flaky tests. Taiko is a Node.js library built to test modern web applications. It creates highly readable and maintainable JavaScript tests. Its simple API, smart selectors, and implicit waits all work together toward a single goal: fixing the underlying problem behind flaky tests to make browser automation reliable. Taiko is open source and free to use. Head to taiko.dev and get started. That's T-A-I-K-O dot dev.
DAVE_KAROW: So, you know, killing release night was one of the things we wanted to talk about, right? We did a blog post recently where one of our co-founders, Pato, who likes to do what he calls data journalism, or maybe that's data archaeology or whatever, wanted to see: people who are using feature flags and a feature delivery platform approach to how they get stuff done, are they indeed getting away from late-night releases? Are they getting away from not coming home for dinner? And there's this really beautiful bell curve in what we documented: when are people creating feature flags, when are people changing the state of feature flags, turning them up, turning them down, and when are people hitting the kill switch, the sort of oh-my-god, turn it off, turn it off, turn it off button, right? And the vast, vast majority of that was all happening during working hours. It even took a break for lunch. You sort of see this peak go up, then it takes a dip for lunch, and then it rolls back down. And there's no 2 am Sunday night spike, right? When I was in load testing, we were doing that stuff Sunday night at 2 am for commercial sites, right? The people who use these practices, of wrapping new code with feature flags and rolling it out to dev and test users only first, and then dogfooding it internally, and then rolling it out to free users or friendly users or whatever, sort of gradually rolling it out, these people are able to roll stuff out when everybody's there, instead of doing it late at night when people either aren't there or are very fried. That's again one of those things where, when you start making that turn, if you can figure out how to ship in the light of day, even on a service that's up for everybody, it's kind of transformational, right?
DAN_SHAPPIR: So, I totally agree with everything that you said. We actually have similar policies over at Wix about when we flip the switch. You know, we generally like to avoid enabling things before the weekend and stuff like that, unless we intentionally want to run an experiment over the weekend, but then we prepare stuff in advance for that. But maybe you should elaborate a little bit more about what exactly you mean when you talk about feature toggles in this context.
DAVE_KAROW: Sure, absolutely. So a feature flag or a feature toggle, and there's a great blog post on that on Martin Fowler's site, the idea is just, I'm going to wrap some portion of my code with a conditional statement, the simple case being, you know, an if statement, right? And that statement is going to make a function call to something that knows whether that block of code should currently be on for the current user. So instead of thinking about a config flag on a server or a command-line flag when launching a service or something, literally, this is a function call that will be evaluated every time a user passes through this code path. And I evaluate it for the user in their current context, every time they pass through. And what that gives me is the ability to have one version of code out there and to control who actually executes that code and who doesn't. The simple example here is just status quo versus something new, right? We'll talk about something visual first. Let's say you've got a feature that shows recommendations, and normally it shows three recommendations, and you want to test out, well, what if I show five recommendations, and I weight them based on something about the user, right? And you don't know whether that's going to affect performance, whether that's going to affect whether people click the choices, or whatever. So when you go to roll out this new feature, you wrap it with a feature flag, and it could be called, you know, show-longer-list or whatever. Naming kind of matters; you'd probably want to give it an intelligent name. And when the user passes through, you hand that function call the current user context. It could be just the user ID, but it could be more: what kind of user are they, how long have they been on the system, anything you want to use for the decision. And then the function call is calling essentially a subsystem. It might be a homegrown system, it might be a service. In Split's implementation, it's an SDK that's in your code, and there's a rule set in the SDK that can be kept current all the time. And so along comes Dave, and you want to decide whether Dave should run this new code or not. In this case, you may decide, hey, I want the devs and the testers to have this, but nobody else. And so if Dave's a dev or a tester, he's going to see this feature, and if he's not, it's a no-op. And what makes them kind of special is that they're controlled outside your code base. So the answer to the question, should Dave get this feature or not, isn't something you have to do a new deployment for, or drop a config file in place for, or restart a server for, or anything. You externalize the rule set, so you can change it with a dial anytime you want, right? And this is kind of pure feature flags, feature toggling: I want to take that stuff out of the code, so I have one version of my code, but I can actually change who gets a particular code path or not. And then, depending on your sophistication, you want to track who went through that path, which way did you send them, and then what happened, right? And this whole feature delivery platform model that Split makes into a service is one that's existed at, you know, Facebook and at LinkedIn and Booking.com.
They all have the ability to selectively expose new features and then watch how it goes for the people they selectively exposed them to, and react based on data. Like, are they getting more errors? Are they buying more widgets? What's happening? And when you build a full loop on that, you have the ability to, A, release something on a very limited basis, even though it's in your whole release, so it's only exposed to some people, and then, B, dial it up and down and see how it's going. And people are kind of moving along a journey here. Some people figure out feature flagging, either homegrown or they'll buy a feature-flagging product, and the reason they're doing that is to see how things are going to work before they ramp them up. But how they figure out whether things are going well or not is usually ad hoc. It's usually, well, we'll look here, we'll look there, we'll check the logs. The teams that have been doing this the longest, who have the ability to crank out tons of software, they all have that automated. They all have the ability to figure out how it's going, I call it the sense-making, without heroics and without tailing the logs or digging through their analytics or any of that stuff. They've just automated that part, right? I know that's a long answer, Dan, but is that a good setup for what we're talking about, in terms of what feature toggles are and why we're using them?
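To make that concrete, here's a minimal sketch of the kind of flag check Dave is describing, using Split's JavaScript SDK. The flag name, user object, and rendering function are invented for illustration, not taken from the episode:

```js
// Hypothetical sketch of wrapping a feature with a flag via Split's JS SDK.
const { SplitFactory } = require('@splitsoftware/splitio');

const currentUser = { id: 'dave', role: 'dev', tenureDays: 42 }; // illustrative

const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_SDK_KEY', // placeholder, not a real key
    key: currentUser.id,              // the user this evaluation is for
  },
});
const client = factory.client();

// Stand-in for real UI code.
function renderRecommendations(count) {
  console.log(`rendering ${count} recommendations`);
}

client.on(client.Event.SDK_READY, () => {
  // Evaluated per user, per pass through this code path. Extra attributes let
  // the externally managed rule set target devs, testers, cohorts, and so on.
  const treatment = client.getTreatment('show-longer-list', {
    role: currentUser.role,
    tenureDays: currentUser.tenureDays,
  });

  if (treatment === 'on') {
    renderRecommendations(5); // the new behavior behind the flag
  } else {
    renderRecommendations(3); // status quo, and the safe fallback ('control')
  }
});
```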
DAN_SHAPPIR: Yeah, it's really good. And I'm really geeking out over here, because it's actually exactly the way that we work at Wix, so I'm getting validation for a lot of the stuff that we're doing. But it brings me to another question that I have, or actually two questions, but I'll space them out. So the first question is, how would you differentiate, or would you even differentiate, between a feature toggle and an A/B test?
DAVE_KAROW: Okay, great question. So the one is dependent on the other. You pretty much have to have a feature toggle, or something very much like it, to be able to do an A/B test, because you need to be able to control who gets what. And if you're going to do an A/B test, you probably want to be a little more sophisticated about how that feature toggle works. So there are really two aspects: there's control, and there's observation, right? Control is dictating who gets it and why, and observation is figuring out the answer to the question, and then what happened? In order to do A/B testing or experimentation, it's more viable to the extent that you're disciplined about how you assign who gets access to the feature. You want to avoid human bias, right? So the rules for the feature toggle need to be smart. If you're deciding, hey, I'm going to use a certain population, you want to be sure that you truly split them in a way where you're not capable of introducing bias in terms of the outcome, because you're trying to answer a question, which is better? You don't want to bias it by controlling who sees which version, right? So you need the ability to control, and in Split's world we call this manage, monitor, and experiment, and manage is managing the exposure. And so we do things like deterministic bucketing, which is, if you want to use randomization based on percentages, you put people in buckets. We have a unique seed for each separate feature flag, which would be each separate A/B test, too, and we calculate a hash based on the user ID and put them in a bucket from zero to 99. And no matter which part of our system they come through, whether it's a JavaScript SDK or Java or PHP, all those people will be put in the right bucket, no matter how they come in, because it's not just a random choice. It's literally a seed applied to a hash that puts them in a bucket from 0 to 99. And then, on the observation side, you need the ability to calculate whether you have enough data to trust the data. In stats, that's called power, and power is based on how many data points are coming through and how consistent or inconsistent they are. The noisier the data, the more data you need to be able to prove you really learned something. So you have to be disciplined. You can't just say, wow, I've put a thousand people through here, and they do better than the people I put through the other direction, so we're heroes. It could be that you're still within random chance. So feature flags are really a control thing, and A/B testing, if you really want to be able to trust the results, requires rigorously controlling who got in there and then being rigorous about crunching the data. One example, and I'll use Dan as the example, is a team that's trying to get users to create more tasks in the app, and the test group is creating 11% more tasks. Now, if you figured that out just by doing database queries, well, we queried this group and we queried that group, and there's 11% more tasks in one group, but you weren't paying attention to other things that were going on, you might think you're a hero when you've actually done something bad. The example we give is:
What if latency was going up for the people that were creating more tasks? What if the new code was really slow, right? And this is the notion of something called a guardrail metric, which is things I want to pay attention to, to make sure I don't go off the side of the road. And if you don't automate that, you're not going to know it, right? If you're just querying a database to see, well, how did our experiment go, you'll miss the fact that you think you're accomplishing your goal, but you're also totally sacrificing something else you care about. We call them organizational metrics at Split. So it's kind of a continuum of maturity, right? If you're just trying to get some directional information, you can run some queries. But if you actually want to really be able to use this to figure out what's working and what's not, then more rigor helps.
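To make the deterministic-bucketing idea concrete, here's a small sketch. The hashing scheme is an illustrative stand-in, not Split's actual seeded hash, but it shows the shape of the technique:

```js
// Illustrative deterministic bucketing: a per-flag seed plus a hash of the
// user key maps every user to a stable bucket from 0 to 99.
const crypto = require('crypto');

function bucketFor(userId, flagSeed) {
  const digest = crypto.createHash('sha256')
    .update(`${flagSeed}:${userId}`)
    .digest();
  // First 4 bytes as an unsigned int, mapped into 0..99.
  return digest.readUInt32BE(0) % 100;
}

// Roll a flag out to a percentage of users: same user, same flag, same answer,
// no matter which SDK or entry point the user arrives through.
function isInRollout(userId, flagSeed, percentage) {
  return bucketFor(userId, flagSeed) < percentage;
}

console.log(bucketFor('dave', 'show-longer-list'));      // stable bucket for 'dave'
console.log(isInRollout('dave', 'show-longer-list', 10)); // in the 10% ramp?
```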
DAN_SHAPPIR: Cool. Okay, so now that brings me to my other question, which is also kind of hinted at by what you talked about. So you were saying that in order to do effective A/B testing, and even to evaluate the quality of your feature toggles, you need to incorporate them into your measurements. But before you do that, I would guess that you would also need to incorporate them into your testing system. And what I mean by that is, let's say I have some feature toggle which wraps code. If I'm going to do a unit test on the unit that contains that conditional code, I would probably need to test the unit both with the feature toggle enabled and with the feature toggle disabled. In fact, if I have multiple feature toggles, I probably want to test all possible permutations of the feature toggles. So that's in the context of the testing. And then I guess it's kind of a similar thing when I start looking at the measurements. But let's start with the testing first.
DAVE_KAROW: To the point you just made, I would say yes to testing both sides of the flag, and I would say no to all possible combinations. And here's why. First of all, I don't want your head to explode. People who are disciplined with testing are used to having a state diagram, knowing all the possible states the system could be in, and making sure that you've got coverage, right? Which makes sense. But it's more effective to take more of a contract-testing approach to this, which is, if you've got something you're working on, and you're going to give it two or three or four different states, these flags that could point in different directions, you want to unit test that thing in the different states, right? Now, if there's something that has a contract with that thing, you want to make sure that you don't violate that contract in any one of those three or four states, or whatever they are. But you don't want to give yourself a matrix which says, well, I've got 40 flags currently, and I want to test all of them in all the states they could be in. If you have the time and energy to do that, great, but to prioritize, you want to focus on the states that you know you're going to be wanting. And the more you can localize your testing, the less you're going to get into this combinatorial, giant number of scenarios that you need to worry about. But yeah, all these systems basically have the ability for you to force the state of the flag to be one way or another, so you can automate your testing. But I would stay away from the giant matrix of all possible combinations, to the extent you can.
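As a concrete illustration of the "test both states, skip the full matrix" advice, here's a hypothetical Jest-style sketch where the flag decision is injected, so each test can pin the state it needs. All the names here are invented:

```js
// Inject the flag decision so tests can force each state independently.
function recommendations(items, flags) {
  const limit = flags.isOn('show-longer-list') ? 5 : 3;
  return items.slice(0, limit);
}

// A stub flag client forces whatever state a test wants.
const stubFlags = (states) => ({ isOn: (name) => Boolean(states[name]) });

const items = ['a', 'b', 'c', 'd', 'e', 'f'];

test('shows 3 recommendations with the flag off', () => {
  const result = recommendations(items, stubFlags({ 'show-longer-list': false }));
  expect(result).toHaveLength(3);
});

test('shows 5 recommendations with the flag on', () => {
  const result = recommendations(items, stubFlags({ 'show-longer-list': true }));
  expect(result).toHaveLength(5);
});
```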
DAN_SHAPPIR: So I totally agree that if you're doing proper unit testing, and your definition of unit is not "all my code," then you should probably not have too many feature toggles within a single unit. I mean, otherwise your unit is probably too big. You know, if you're unit testing, you're testing a particular function, and that function probably should not have more than one or two feature toggles inside of it. Otherwise it's kind of getting out of hand, and maybe it's time to split that function. But it does still mean that you need to have some integration between your experiment or feature toggle system and whatever testing framework you're using for your unit tests, correct?
DAVE_KAROW: Yeah. And actually, I would say this gets back to naming and kind of human practices and hygiene, which is that you want to standardize the naming of your flags so that you can find them programmatically, which helps you confirm that you've got your coverage where you need it to be. So a lot of people have a naming convention which has, like, the team or the module plus the thing that's being affected in the name, and the very nature of how the flag is structured lets you find it programmatically anyway. But something you brought up raises another good point, which is, some people will say, well, if I've got this flag, do I put it in lots of places in my code, everywhere that it matters? And that's an anti-pattern. The best practice is to have flags at entry points. You want as few locations as possible, and you want them as close as possible to where the business logic related to the decision is. We're used to going through our code and, like, if you have gold, platinum, and silver customers, there might be a million places in your code where it asks which kind of customer they are and decides what to do next, right? When you're dealing with feature flags, you want to have fewer locations where they live. With something like an API, it's really easy, because it's the entry point. If you've got kind of a big blob of code, you want to figure out what can call it, and you don't want the same flag to show up in 20 places in your code, in general. There are some best practices in some e-books we have, but it kind of gets back to simplicity: if the thing shows up dozens of times in your code, it's going to be really hard to debug, and it's going to be really hard to understand. The other thing about feature flags in general: they should be short-lived. You don't really want an ever-growing number of flags in your code. You really want to use them for transitions, to go from not having this feature to having this feature. Once a feature flag has been rolled out to 100% of the users, it should come out, right? Or if it goes to 0% of the users, it should come out, and so should the code it's wrapped around. So there's kind of a hygiene factor here. The most extreme case I've ever heard was somebody who said their team was only allowed to have two active flags in the code at any one time, which is pretty frugal; that particular team could only have two extra states that their code could be in, right? I think that's probably a little extreme, but whatever.
DAN_SHAPPIR: How long do feature toggles live in your experience in most cases?
DAVE_KAROW: So you've seen it done right. Yeah. So I'm going to tie it back to the cadence of the team that's putting them in, right? Let's say a team is trying to ship their particular bit every two weeks. It's likely that you're going to leave a flag in at least one cycle beyond when you think you're done, but not much more than that. Now, by the way, there may be a situation where you're building components of a transition, and you may need to build 15 things and test them independently until you've got everything right, and then you're going to make the big move. In that case, those flags could be around for months as you're dealing with all the dependencies, right? Like, one example that Atlassian gives is that they wanted to do a complete rebranding of everything, and they wanted to announce it at their user conference. And so they went through all of their code and changed the colors and the logos and the fonts, all this stuff, and they put all of that behind flags, and they were able to test every one of those things as they went. So those flags were in the code, right? And then, at the user conference, live, they literally flipped the state of all the flags at the same time, and suddenly you refresh the page, and all over the world, people have got the new branding. Which is very different from somebody doing a commit or a push or a deploy in the middle of the user conference and praying that everything comes up, right? But in that case, those flags were around for many, many weeks, months probably, as they chipped through their list of dependencies. But for a particular team, it's more about, you're putting something in, you're testing it, you're rolling it out. Once you get to a hundred percent, you maybe leave it around for one more iteration, just in case something that takes a long time to notice pops up, but then you pull it out. So I can't give you days or weeks or whatever. It's more about the cadence of when you're touching that code. And some people will literally set kind of a time bomb: when they put in a flag, they'll create a Jira ticket for taking it out. People will trigger based on percentage usage; people will trigger on time. Yes, it adds a little overhead to have to manage the removal of these, but the speed and safety you get out of having them is worth the cost of having to go back and pull them out when you don't really need them anymore. Leaving them in forever is clearly not what you want. The only thing you want to leave in long-term is what we would call an ops toggle, which is something that's designed to shed load under an extreme situation, something that you leave there because you want the ability to instantly change its state because of heavy load or something else going on.
AIMEE_KNIGHT: I really like that you're making that distinction and making that call-out, because I've definitely worked at places where they're like, no, we don't want to add feature flags, because it's just going to clutter the code base. And yeah, the call-out between an ops toggle and a feature flag: very different things. The ops toggle, I've used that very successfully, like when you want to run slightly different code in different environments. But yeah, with the feature flags, when they said that, I was kind of like, well, how else are we supposed to do this? You know? So.
DAVE_KAROW: Yeah. And the thing is, I think it also helps to think about feature flags as being per user, per session. So if you have some very huge, affects-everybody thing you want to be able to change, like the on-prem version of your software versus the cloud-based version of your software, that's not really a good candidate for a feature flag. That's a candidate for a compile-time thing, or a launch-time thing, or a config-file thing, right? It's really about, where do I want to be able to change this a user and a session at a time, wherever that helps me. That's another way to look at it. And yes, you could argue that an ops toggle designed to shed load in a heavy situation could be a signal you send to the server that affects every user, but that's kind of a gray-area thing, I think. And then the last thing I'll say is that in our industry, there are some people who kind of want to say that feature flags are a great way to handle entitlements, and I'm not really in that camp. You know, gold user versus platinum user versus whatever user: that's a very persistent thing that should be kind of core to how your software works, in my mind, not something that you have flags all over the place for. But that's kind of a philosophical battle we can perhaps get into another day.
DAN_SHAPPIR: I did want to make one more small comment. Where I've found feature toggles to be really useful is that sometimes you have changes that are cross-cutting between systems, and sometimes deploying all the systems at the same time can be very difficult, or maybe even not possible. So making the changes behind a feature toggle in the various places that are deployed independently, and then being able to turn all of them on or off at the same time, is something that can be extremely useful. In fact, sometimes there's just no other way to accomplish such a thing.
DAVE_KAROW: Yeah, and that's really that example we were talking about with Atlassian, where they changed the branding, right? They had many different systems, built in different languages by different teams in different parts of the world, and they wanted them all to, A, be independently tested, and B, be able to go live at the same time. Feature flags were kind of the only way to do that. So yeah, totally agree. And the other thing people say is, well, but wait a minute, how do I deal with databases when I'm doing this? Like, if I've got data structure changes, how do we do that in the world of feature flags? And there are some fairly straightforward patterns for that. Generally, the idea is that you're being additive: you add columns, you don't change what's in a column. And, you know, these concepts are probably not alien to most of you out there who are doing this kind of stuff anyway, but there are various strategies for how you get from a prior state to a future state. And again, if anybody wants, you can look at the e-books we do. It's probably in the best-practices e-book where we talk about how you deal with data.
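Here's a rough sketch of that additive, expand/contract pattern, with invented table, column, and flag names; the dual-write-then-switch sequence is the general strategy, not a Split-specific API:

```js
// Rough expand/contract sketch for schema changes behind a flag. `db.query`
// assumes a node-postgres-style interface; all names are illustrative.

// Expand phase: the new column is purely additive, and every write keeps the
// old column populated, so code on either side of the flag stays correct.
async function saveDisplayName(db, userId, name) {
  await db.query(
    'UPDATE users SET display_name = $1, display_name_v2 = $2 WHERE id = $3',
    [name, name.trim(), userId]
  );
}

// Reads switch per user as the flag ramps. Once the flag has sat at 100% for
// a cycle, remove the flag and drop the old column (the contract step).
function readDisplayName(flags, userId, userRow) {
  return flags.isOn('use-display-name-v2', userId)
    ? userRow.display_name_v2
    : userRow.display_name;
}
```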
AJ_O’NEAL: So I want to ask a question in a different direction. One thing I've seen is where people really buy in to the continuous testing, delivery, deployment, et cetera. And maybe this is a little bit out of scope, so let me know if it is, but I think you might have a good answer from your experience. So they get all on board with it, but they kind of go too far, and then they complain about how they can't get work done, because they no longer know how to run a local system. They've gotten so far into the cloud that they don't know how to just make a quick test locally, and they're waiting 20 minutes for their deploy pipeline to clear up to find out if something's going to work in the dev environment.
DAVE_KAROW: Yeah, that's interesting. That kind of gets back to why a lot of people... you'll hear this conversation where people have moved to containers, and it's like, yeah, but my whole solution is just too big to run on a laptop, right? But the idea there was, how can I make it so that everybody can run it? That's again one of those patterns: you want to give people the ability to have instant feedback. Developers should be able to try something within a handful of minutes or less, and they shouldn't have to wait 20 minutes or half an hour.
AJ_O’NEAL: I mean, I'm used to waiting about 10 seconds.
DAVE_KAROW: I hear you. Certainly you don't want to tweak something and have to go get coffee before you can see what it did. I'm with you.
CHARLES MAX_WOOD: Yeah, that's a losing proposition. You check what's on Twitter, and half an hour later: oh, it finished. What was I doing? Oh yeah. Oh, it's lunch. Oh, I have a meeting.
AJ_O’NEAL: Do you have any principles or guidelines for people in that situation? Because I would imagine a lot of people listening are in that situation right now. I know Salt Lake is kind of a tech hub, so we've got like a dozen billion-dollar startups, but that's a common thing people are talking about around here, and I imagine in other places too.
DAVE_KAROW: Well, in my world, I've spent most of my time talking about progressive delivery and feature flags and that sort of thing. One of the points here is that if you're a developer who's iterating on code, tinkering, you need to be able to do that build-and-test loop. To the extent that the things you're trying can already be in your code and you're playing around with them, feature flags let you do that with zero latency: you can change them on the fly anytime you want. But you're probably talking about "I'm writing some code and I want to quickly see if it did what I thought." Those are really two different things, right? If you want to be nimble and see the outcomes of things without having to do an entire deploy and wait for containers to come up or whatever, feature flags help you there, because they're instantaneous going from one state to another. But if you're talking about "I'm writing these three lines of code and I want to see if they behave the way I thought," you do need that local feedback, and I'm not an expert on that.
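For context, the "zero latency" Dave mentions typically comes from the flag SDK pushing rule changes to running processes; here is a hypothetical sketch of what that looks like from application code (the client API is illustrative and assumes a streaming update mechanism, not any specific vendor's implementation):

```typescript
// Hypothetical flag client that receives rule updates over a stream,
// so a toggle flip in the dashboard takes effect without a redeploy.
import { EventEmitter } from "node:events";

class StreamingFlagClient extends EventEmitter {
  private rules = new Map<string, boolean>();

  // Called by the SDK's streaming connection when rules change upstream.
  applyUpdate(flagName: string, enabled: boolean): void {
    this.rules.set(flagName, enabled);
    this.emit("update", flagName);
  }

  isEnabled(flagName: string): boolean {
    return this.rules.get(flagName) ?? false; // default off is the safe choice
  }
}

const flags = new StreamingFlagClient();
flags.on("update", (name) => console.log(`flag ${name} changed, no deploy needed`));

// Every request sees the latest rules: flipping a flag is instantaneous.
function handleRequest(): string {
  return flags.isEnabled("new-search") ? "new search" : "old search";
}
```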
AJ_O’NEAL: Okay. Thanks.
A couple of years ago, I put out a survey asking people what topics they wanted us to cover on devchat.tv, and I got two overwhelming responses. One was from the JavaScript community: they wanted a React show. The other was from the Ruby community: they wanted an Elixir show. So we started both. The React show, though, is React Roundup, and every week we bring in people from the React community and have conversations with them about React, about the community, about open source, about what goes into React, how to build React apps, and what's going on and changing in the React community. So if you're looking to keep current on the React ecosystem and what's going on in React, you definitely need to be checking out React Roundup. You can find it at reactroundup.com.
CHARLES MAX_WOOD: Yep. One last thing I wanted to ask: you talked about how feature flags should be short-lived and encapsulate small chunks of functionality. But I've worked on teams where you get halfway in, and then something comes up that makes it so you have to stop and move on to something else. And so now you've got a feature flag in there around code that you may or may not pull out.
DAVE_KAROW: Right.
CHARLES MAX_WOOD: But you still may want to keep track of it until you get back around to it. So, I mean, do you recommend that you just revert that part of the code until you're actually going to need the toggle, or...?
DAVE_KAROW: Oh, so do you mean that you put the toggle in, but you didn't finish the code behind the toggle?
CHARLES MAX_WOOD: Right.
DAVE_KAROW: Got it. So it's not a matter of: you put it in, you moved on to something else, and you're going back.
CHARLES MAX_WOOD: You've got it.
DAVE_KAROW: I think it gets back to testability, and whether you're causing a lot of grief for the people who have to work with that code.
CHARLES MAX_WOOD: Right.
DAVE_KAROW: And this gets more into how you structure things, how you write your code. These are human things as much as they're technical, right?
CHARLES MAX_WOOD: Yep, absolutely.
DAVE_KAROW: Yeah, the one thing I would point out here, and it's a little self-serving, I'll admit: when every team has its own way of handling toggles, it's really hard to keep track of which toggles are in what state and what they're for. When you centralize on a single toggling infrastructure across the org, it's much easier to know what something is for, who owns it, and what state it's in. So instead of it being some random thing that an SRE, or somebody dealing with an incident, stumbles onto without knowing whose it is or what it's for, everything is in one place. Really, what you're getting at is two issues. One is: what do I do about this not-quite-finished code that's lying dormant in there? So, A, it should be in some kind of ticketing system for starters,
CHARLES MAX_WOOD: Right. Yep.
DAVE_KAROW: Right. And B, the state should be easily determined by anybody, not just the person who put it there.
CHARLES MAX_WOOD: Gotcha. Right. I like it.
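As a sketch of what that centralized visibility can look like, here is a hypothetical flag-registry record; the field names are illustrative, not any particular product's schema:

```typescript
// The kind of metadata a centralized flag registry can attach to every toggle,
// so dormant flags stay discoverable by anyone, not just their author.
interface FlagRecord {
  name: string;      // e.g. "new-checkout-flow"
  owner: string;     // team or person accountable for cleanup
  ticket: string;    // tracking issue, so unfinished work doesn't get lost
  createdAt: Date;
  expiresAt: Date;   // past this date, CI or a bot nags for review
  state: "off" | "ramping" | "on" | "retired";
}

// Anyone responding to an incident can answer "whose flag is this, and why?"
function describeFlag(flag: FlagRecord): string {
  return `${flag.name}: owned by ${flag.owner}, state ${flag.state}, see ${flag.ticket}`;
}
```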
DAN_SHAPPIR: I wanted to add to that: from my perspective, a feature toggle is not supposed to be a way to disable code that's not ready yet. I don't consider that to be the purpose of feature toggles. Obviously, there might be a situation where I enable one, find out that there's a bug, and then disable it until I fix it. But I don't intentionally wrap something in a feature toggle and then just not turn it on because I know for a fact that it's broken and not ready yet. I don't consider that to be a purpose of a feature toggle.
DAVE_KAROW: I'll stir things up a little bit and point out that one of the reasons feature toggles came into existence was trunk-based development. People practicing trunk-based development, who are committing at least daily to master and deploying from master, needed the ability for something that wasn't quite finished yet to be out, but off, you know, deployed, but off. So your mileage may vary, right? But that is actually one of the reasons people came up with feature flags. When I give a talk, I'll ask: why do I have my flag set at 0%? And there are two answers. One is, it's not done yet. The other is, I want to test it in production before our users hit it. And Dan, you're basically saying you don't really like that first use case. But then I don't know where that code's going to live; you're probably going to have a long-lived branch if that code's being worked on and isn't done yet, right?
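A minimal sketch of the "0% but testable in production" pattern Dave describes; the rollout logic and names are hypothetical, not a specific vendor's implementation:

```typescript
// A flag at 0% with an allow list: deployed but off for everyone
// except named internal testers.
interface Rollout {
  percentage: number;     // 0 means no general traffic sees it...
  allowList: Set<string>; // ...but listed testers can still opt in
}

function isEnabled(rollout: Rollout, userId: string): boolean {
  if (rollout.allowList.has(userId)) return true; // internal testers
  const bucket = hash(userId) % 100;              // stable per-user bucket
  return bucket < rollout.percentage;
}

// Deterministic bucketing so a user gets the same treatment on every request.
function hash(s: string): number {
  let h = 0;
  for (const ch of s) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// Live in production, invisible to users, testable by the team:
const newSearch: Rollout = { percentage: 0, allowList: new Set(["qa-alice"]) };
```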
DAN_SHAPPIR: Yeah, you have a point. I guess what I'm saying is that there are certain stages within the development process where you know that, by definition, the code's not ready yet. Obviously, you want to keep these durations as short as possible; that's the whole point of continuous integration and so forth, but they still exist. So I'm not going to push something and wrap it in a toggle if it's not going to pass my linting stage, for example, or if all my tests fail, because we were talking before about the fact that I want to test both with the feature toggle on and off. So if I know that the code is just not completely ready, then I'll probably not want it to be behind the feature toggle. I might still want it to be on a branch for a little bit, at least. But you're correct: once I merge that branch into master, from that point on it will be behind the feature toggle. And there's a good chance that during the various integration tests, I might run into problems that I still need to fix before this code can be enabled in production. And certainly at those points in time, it will not be open to anybody. In fact, right now I have something like two tests that are currently closed to everybody but are in production, part of the Wix deployment.
DAVE_KAROW: Right. Don't tell us the string to search for. All right.
DAN_SHAPPIR: Turn them on if you feel lucky.
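Dan's point about testing with the toggle both on and off can be made concrete; here is a minimal sketch using Node's built-in test runner (Node 18+), where withFlag is a hypothetical helper standing in for however your flag client supports overrides:

```typescript
// Run the same test body in both flag states so neither path rots.
import assert from "node:assert";
import { test } from "node:test";

function renderSearch(flagOn: boolean): string {
  return flagOn ? "new search" : "old search";
}

// Hypothetical override helper; a real suite would stub the flag client.
function withFlag<T>(state: boolean, fn: (flagOn: boolean) => T): T {
  return fn(state);
}

for (const state of [true, false]) {
  test(`search renders with flag ${state ? "on" : "off"}`, () => {
    withFlag(state, (flagOn) => {
      const html = renderSearch(flagOn);
      assert.ok(html.length > 0); // both code paths must stay green
    });
  });
}
```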
CHARLES MAX_WOOD: All right. I'm going to push us to picks, because I don't know if we'll have time if we don't. This has been really fun, Dave. If people want to dive in and learn more about you, about Split, or about what you're working on, what are the best places to do that?
DAVE_KAROW: Okay. So they can definitely follow me on Twitter at @davekarow, D-A-V-E-K-A-R-O-W. I try to put anything I do up there, so at least catch it there. If you go to split.io and hit the blog, you can find my tag in there and see the pieces I write. I'm on Speaker Deck, so if you go to speakerdeck.com and search for Dave Karow, you can find most of the slides from the talks I've given in public. You'll find me on YouTube as well; we have a Split channel, and I also have a video up on GOTO's channel that's been pretty popular. And people are welcome to check me out on LinkedIn; I'm pretty open to accepting connections there if you're actually in a related business. That's probably it.
CHARLES MAX_WOOD: All right, cool. Well, let's go ahead and do our picks. Amy, do you want to start us with picks?
AIMEE_KNIGHT: I've been focused on a lot of perf stuff lately, and although this is probably pretty dated and I don't know if it's been updated, because it's from back in 2014, which in dev years is like, God, we barely even had Angular back then. Anyways, it's Designing for Performance. While I haven't read the entire thing, just glanced at it a little, I feel like literally just the title is a good indicator that there's still value in the book and some of its concepts. Working in a large app, especially when we're getting files from design that we need to implement, I feel like it would be helpful for design and dev to work together on these types of things, and sometimes educate designers on the trade-offs, like the performance implications of things they pass off. So that'll be one of my picks. And then another one: in the morning I have this routine before I start work where I sit down with some tea and look over Hacker News. Something interesting I saw on there this morning was some of the biological reasons why people may be early risers versus night owls. So that'll be my other pick, and that's probably it for me today.
AJ_O’NEAL: Wait, what? I want to learn about this.
AIMEE_KNIGHT: Yeah, it just looked really interesting. I only glanced at it, but the TL;DR is that the way your brain is made up is an indicator. There are different proteins and stuff in the brain depending on whether you're a morning person or a night owl. So there are literally biological differences in the brain between different people.
AJ_O’NEAL: So we'll definitely put a link to that if you can find it again.
AIMEE_KNIGHT: Yeah, I guess to find out, the team performed a series of protein-structure and biochemical analyses of the CK1 mutation originally found in hamsters. I don't know, but yeah, they're looking at people's brains and why that makes them want to go to bed late or wake up early.
CHARLES MAX_WOOD: I'd be curious to know if the one causes the other, or the other causes the one.
AIMEE_KNIGHT: I probably did a poor job of explaining the article because I kind of glanced at it.
DAVE_KAROW: I was a night owl that married a teacher. And I literally went from a guy who seemed to get a lot of work done at one or two in the morning to a guy who needed to go to bed by nine or 10 because I was getting up at six.
AIMEE_KNIGHT: I've always kind of been a morning person. So yeah.
CHARLES MAX_WOOD: Interesting.
DAN_SHAPPIR: Your morning is the middle of my night, Amy.
AIMEE_KNIGHT: This is true.
CHARLES MAX_WOOD: Amy gets up in the middle of the night. That's what I heard Dan say. All right, Dan, what are your picks?
DAN_SHAPPIR: Well, okay, so I started with one, but I actually added another one as Amy was talking. Because she was speaking about web performance and a relatively old resource, I thought it might be worthwhile to mention a really useful, modern, up-to-date resource about web performance, and that's a Google resource called web.dev. Within that, they have a section called web.dev/fast; I'll post the link to that as well. It's just a collection of items about various performance best practices, performance measurements, what they mean, lots of useful information. So if you're into web performance, that's a really useful resource. They also have an online performance measurement tool, kind of similar to Google PageSpeed Insights; it's essentially the same thing, they're both based on Lighthouse. So that's one link I would like to mention. And the other one is actually something we did at Wix about a week ago, which I really enjoyed a lot. We called it the blogging storm. It turns out that quite a number of people in Wix R&D had ideas about things they wanted to blog about. So we brought all the relevant people together in one room for one day, and we basically all blogged together. We gave suggestions to each other and reviewed what other people wrote, and it was really, really useful. Quite a number of blog posts have come out of it, mine included; I hope to publish mine within the next couple of days. Wix even gave us additional resources, like a technical editor who would go over a blog post and make it better. So it was a really enjoyable experience. I liked it a lot, and I would recommend it to other companies, at least those who can afford it.
CHARLES MAX_WOOD: Nice. AJ.
AJ_O’NEAL: All right, so I found out, kind of not the hard way, thank goodness, that if you want to get above eight terabytes of storage, you kind of have to have a RAID system. And the reason for this is that drives are really cheaply made, and eight terabytes is kind of the limit where, unless you want to jump up to paying, say, $400 to $600 per drive, you're going to end up with really crappy drives that have really low reliability, really high failure rates, et cetera, which, oddly enough, some of them are sold as backup drives. So I invested in a RAID system. I can't speak to whether or not it's going to work as it's supposed to, because you only know that when a drive fails, but I got this thing called CineRaid, because it was a little bit cheaper than other options and had good reviews on B&H. And then I got some data-center drives that are renewed. We all know you shouldn't buy used hard drives, but when considering the option of either paying $130 for a consumer-grade drive or $150 for a drive that otherwise would have been $400, I actually feel more confident taking the renewed drive that originally cost $400 or $500 and putting that in my RAID system, where one of them is, not a spare per se, but the parity drive. And the reason I'm doing all this, the reason I needed to upgrade drives, is I started doing video work, because I'm working on a course for people who want to level up their dev skills, which I'm calling Beyond Code Bootcamp. So I'm really interested, if any of our listeners have questions, whether you're looking at getting into development or feel like you're kind of stuck on the front end and want to know how to get from A to B in your career, I'd love you to hit me up with those questions. I'm @solderjs on Twitter, or coolaj86 at gmail.com. And that's that.
DAN_SHAPPIR: I have to mention that I've been watching your videos and enjoying them a whole lot. So way to go.
AIMEE_KNIGHT: Same.
CHARLES MAX_WOOD: Nice. I'll go ahead and jump in with some pics of my own. So I don't know if I picked this last week. I won't pick it next week, but I've been watching the Expanse TV show and I'm really enjoying it. They have-
DAN_SHAPPIR: Must be, it's like the third time you've picked it, I think.
CHARLES MAX_WOOD: Oh, could be. Anyway, good show. All right, then I won't say anything else about it. I'm also gonna do a pick I think AJ picked a week or two ago. It's Course Creator Pro, and it's a course on creating courses. I'm really digging it, been working through it. I'm working on putting together a course on how to get ROI from podcasts. So I think the obvious one is sponsorship, right? So how to sponsor podcasts, but I'm also going to be talking a bit about how to be a guest on podcasts and make that work, and how to run your own podcast and make that work. And then eventually I think I'm going to do kind of a master class on how to create and run podcasts. So anyway, the information there has been terrific, and it's really well put together. So yeah, AJ got me into it, so kudos to him. But yeah, that's gonna be my other pick. Dave, do you have some picks for us?
DAVE_KAROW: Yeah, I definitely wanna put out there the book Accelerate, and I'll put a link out there on Amazon for that. The book is kind of in several pieces; there's a chunk in the middle about how they did their research and why you should trust them, and you can optionally skip that part. But it covers the practices that are working in all sorts of different orgs to make building software more sustainable and more fun and more effective. We all kind of want to be effective. At the end of the day, we all want to have an impact; it's not much fun working hard to get a little out of it. So Accelerate, for sure. And I mentioned The Phoenix Project and The Unicorn Project; those are sort of dramatic stories wrapped around the same ideas. So if you'd rather read a story to absorb these concepts, go with The Phoenix Project and The Unicorn Project. If you want to be a little more academic about it, then Accelerate. Those are all really good resources. We don't have to be in IT management to benefit from understanding how these new ways of working can make life better.
CHARLES MAX_WOOD: Awesome. Well, thank you for coming, Dave. This has been a lot of fun, and just a fascinating topic to dig into.
DAVE_KAROW: There's more out there. Just Google progressive delivery and maybe throw my name in there. Hopefully you'll find something interesting.
CHARLES MAX_WOOD: Nice. All right, well, we're gonna wrap this up. Thanks! Max out, everybody.
AJ_O’NEAL: Adios.
DAVE_KAROW: See you later.
AIMEE_KNIGHT: Bye.
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit c-a-c-h-e-f-l-y.com to learn more.