AIMEE_KNIGHT: Hey, hey from Nashville. Very excited about what we're talking about today.
STEVE_EDWARDS: Woohoo! and AJ O'Neil. How you doing AJ?
AJ_O’NEAL: Coming at you live from a nice pizza pie.
STEVE_EDWARDS: You're sitting on a pizza pie?
AJ_O’NEAL: No, I'm eating one.
STEVE_EDWARDS: You're eating a pizza pie. You said you're from the pizza pie.
AJ_O’NEAL: Well, I'm coming at you from the pizza pie where I am.
STEVE_EDWARDS: When the moon hits your eye like a big pizza pie, that's amore.
AJ_O’NEAL: Anyway, coming at you live from amore.
STEVE_EDWARDS: There you go. Amore. That's love in Italian, by the way. Our victim, a guest, excuse me, guest today is another podcast host from another podcast, Will Button. I think I said that right, Will, right? Pardon me.
WILL_BUTTON: Yep, that's exactly right. How y'all doing?
This episode is sponsored by Sentry. Sentry is the thing that I put into all of my apps. First I figure out how to deploy them, I get them up on the web, then I run Sentry on them. And the reason why is because I need to know what's going on in my app all the time. Yeah, I'm kind of a control freak, what can I say? The other reason is that sometimes I miss stuff, or I run things in development, you know, works on my machine, I've been there, right? And then it gets up in the cloud or up on a server and stuff happens and stuff breaks, right? I didn't configure it right, I'm an idiot, I didn't put the AWS credential in. I didn't do that last week, right? That wasn't me. Anyway, I need that error reported back: hey Chuck, I can't connect to AWS. The other thing is that this is something that my users often will give me information on, and that's, hey, it's too slow, it's not performing right. And I need to know it's slowing down, because I don't want them going off to Twitter when they're supposed to be using my app. And so they need to tell me it's not fast enough, and Sentry does that, right? I put Sentry in, it gives me all the performance data, and I can look at the load time and go, that's way too long. And I can go in and fix those issues, and then I'm not losing users to Twitter. So if you have an app that's running slow, if you have an app that's having errors, or if you just have an app that you're getting started with and you wanna make sure that it's running properly all the time, then go check it out. They support all major languages and frameworks. They recently added support for Next.js, which is cool. You can go sign up at sentry.io slash sign up. That's easy to remember, right? If you use the promo code JSJABBER, you can get three free months on their base team plan.
STEVE_EDWARDS: So Will, before we get started into that topic, why don't you give us a little background on yourself: who you are, why you're famous, how you got into programming, that kind of stuff.
STEVE_EDWARDS: All righty then, good stuff. So before we get started into our topics, I'll give a little background of where I come from with DevOps. I can recall back when I was living in the Drupal world, around 2008 or 2009, one of the developers at the very small shop I was working with, his name is Steven Merrill, started getting into Hudson, the build tool. We actually talked about that in the very first podcast episode I ever recorded for Acquia. And then, if I remember right, something happened with Hudson and it was forked and became Jenkins. And Jenkins, I know, is still out there going strong. At my previous employer we were still using it. We had a guy that was a full-time DevOps engineer on Jenkins, and I got to live in that world. So that's my limited experience with Jenkins. I'm sure there's much more out there in the world. So let's talk first about DevOps skills that developers need to know. What kind of things do people like me, who live on the front end, and maybe someone like AJ, who lives on the backend, need to know about DevOps?
WILL_BUTTON: You know, I think really you can summarize all of this just by walking through the CI/CD system, because that's kind of where the two worlds of DevOps and software engineering overlap: through that CI/CD process. And if you just take it through each step, there are different DevOps skills, I think, that you can pick out and explore a little bit. And the goal here is really not to become an expert in DevOps, but just to learn enough about it so that once the code leaves your control, you have an understanding of how it lives, how it operates, and what it does. Because having that understanding may influence some of the design and coding decisions you make while writing that code. So one of the things I like to do whenever I start working with someone early on is just address: how do you get the local development environment up and running on your system so you can write code? And I try to measure that in minutes or hours, whereas at a lot of places that's measured in days. Whenever you bring on a new developer, get them configured with the repo, get their development environment set up, and then run whatever local dependencies they have, it can be a pretty arduous process to get them up and running. But I think if you take the time to compose that, you can shorten it down to where it happens relatively quickly, and at the same time build out a system that looks very similar to how the application runs in production. And I'll use Docker a lot for that, Docker and Docker Compose, so that whenever someone new comes on, they'll check out the repo, and there's a Makefile in there where they can just type make up, or something similar to that, that launches the Docker Compose environment and runs the Docker containers with the dependencies for databases or caching or Elasticsearch or whatever dependencies you have locally.
And then one of the things I really like to do with Node.js is run Node.js in the Docker container itself with hot reloading, so that as they make changes and save the files, it reloads the code in the Docker container. There's a couple of benefits to that. One is it runs in the exact same Docker container that it's going to out in production, so you can identify anything that has to happen to that Docker container, where you need to make changes to it or install additional dependencies into it. But it also allows the person writing the code to see that Docker container and interact with it, and learn a little bit about the way that it's constructed and built. And you can introduce things like mounted volumes, if you've got shared data that you're exposing to that container, or the ports that you expose. One of the things you're hearing a lot of talk about currently is running rootless Docker containers, so that the Docker container is not running as root but actually running as a user that has no permissions, which makes it more difficult to do privilege escalation on an application.
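A minimal sketch of the kind of Compose file Will is describing, with hot reloading via a mounted volume, a non-root user, and a database dependency. All the names, versions, and ports here are illustrative assumptions, not something from the episode:

```yaml
# docker-compose.yml (hypothetical local dev environment)
services:
  api:
    build: .                          # same Dockerfile used for production
    command: npx nodemon server.js    # hot reload on file changes
    user: node                        # rootless: run as the unprivileged node user
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src                # mount source so saves reload in-container
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```

With a Makefile target along the lines of `up: ; docker compose up --build`, a new developer can check out the repo and type `make up` to bring the whole stack up.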
WILL_BUTTON: Actually, quite a bit. It's funny, because I just released a video on this on my YouTube channel yesterday about which provider you should choose. The top three are AWS, Microsoft Azure, and then GCP, or Google Cloud Platform. And I think if you're going down the Kubernetes route, all three have equal offerings. But I'm actually really hesitant to point anyone towards Kubernetes unless they have a team that has strong sysadmin skills and either existing Kubernetes skills or the in-house resources that can pick up Kubernetes skills. I think if you don't have those resources at your disposal, there are a lot better ways to run at scale and still get some of those features without the overhead and complexity required to run Kubernetes. I'm a big fan of Fargate, for example, which is like Kubernetes, but Kubernetes abstracted away so that you don't have to worry about it being Kubernetes. It's very scalable, very performant, and it gives you the same features and flexibility that make Kubernetes popular.
AIMEE_KNIGHT: Okay, that kind of answers my question. I was just curious from somebody who does this a lot what your thoughts are.
WILL_BUTTON: Yeah, I try to avoid Kubernetes unless they have...
AIMEE_KNIGHT: That definitely makes sense. Yeah, I was more thinking like, if you ever came across a customer who, you just knew that they were going to scale pretty large, or they were already larger. Anyways, we can circle back to Docker though. Sorry, I was just dying to ask somebody that.
WILL_BUTTON: Right on. And it actually ties into the whole CI/CD process, you know, because that's exactly where you end up. Whenever I talk about deploying and how you architect that, I kind of break it down into those two categories: what scale are you expecting, and what resources do you have available? And the pool of resources really comes down to either skills or money. If you have the in-house technical expertise and skills, you have the option to do it at a lower cost. But if you don't have those skills, you either have to spend the money to hire those skills or spend the money using a platform that doesn't require those skills. Just by nature of it, if you run on something like Fargate, or even someone like Heroku, you get to leverage their skills, but you do so at a higher cost than if you were managing Kubernetes yourself.
STEVE_EDWARDS: So sticking with some of the basics, I mentioned Hudson and Jenkins, which are well-known tools. I've got a couple of questions here. First, what are some of the other more well-known platforms or tools that you use with your clients for DevOps, such as maybe Travis or CircleCI, or maybe self-hosted stuff? What are the more prevalent ones that you see being used?
WILL_BUTTON: Yeah, with larger clients you'll almost always find Jenkins. It's like the thing that's never going to go away. It's also the thing that everyone loves to hate, because I wouldn't call it user-friendly, but the features and capabilities you get make it worth the frustration for certain clients to fight through. But CircleCI is also really, really common. Travis is very common. And GitHub now has GitHub Actions, which I think is going to grow in popularity. And I think most clients that I've dealt with don't have huge, complex requirements when it comes to the CI/CD pipeline. Whenever you open up a merge request, you run the automated test suite, and you may run some linting. And then once that's merged in, you'll build a Docker container and push that container up to the registry. And then, depending on the exact needs and requirements of the client, it'll either deploy automatically or stage that for a deploy to the production environment. And the deploy itself, for almost everyone that I work with, they're on some type of cloud platform, either Azure or AWS, and so the deployment there just comes down to an API call. Almost all of the tools that we have for that today, CircleCI, Travis, Jenkins, have those features dialed in to where it's very simple and easy to integrate.
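The pipeline Will describes, tests and linting on a pull request, then a Docker image build and push on merge, might look something like this as a GitHub Actions workflow. The registry name, Node version, and npm scripts are placeholders, not from the episode:

```yaml
# .github/workflows/ci.yml (hypothetical pipeline sketch)
name: ci
on:
  pull_request:                # run checks when a merge request is opened
  push:
    branches: [main]           # build and push once it lands on main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run lint      # linting
      - run: npm test          # automated test suite
  build-and-push:
    if: github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-registry/my-app:${{ github.sha }} .
      - run: docker push my-registry/my-app:${{ github.sha }}
```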
STEVE_EDWARDS: And then another real quick definition of terms, at least when it comes to DevOps: you'll see continuous integration and continuous deployment, and those are obviously two different things because they're different terms. So can you give a couple of definitions for those terms and where they come into play in the DevOps process?
WILL_BUTTON: Yeah, so continuous integration really means getting the code that you've written merged back into the main branch as quickly and as early as possible. And the reason behind that is just to minimize these situations where you go off and do a lot of work, and someone else is doing a lot of work, and whenever you merge all of this back into the main branch in your repo, you've got these huge deltas that you have to try and reconcile, and Git itself will try to do that and manage any conflicts. But even when there aren't any conflicts, the larger the pull request is and the more code that changes, the more likely you are to overlook something or to not have a clear idea of what's going on. And so that's where continuous integration comes in: it allows you to make frequent small commits to your main branch so that you keep good visibility over it and don't really deviate a whole lot from it. Continuous deployment, on the other hand, is the actual process of sending that out to production. And a lot of people tend to think that that indicates something that happens automatically, just because it has the word continuous in it. But that's not necessarily the case, depending on your environment. There are some environments where, as much as you would like to, it's just not a good idea to have code deploying to the production environment without someone specifically saying, yeah, we're ready for this to happen. And that may be because you don't have the resources or the infrastructure to support it, or you may have different constraints around that deployment, like having to make external changes such as database migrations or database schema changes to support it. But either way, the continuous deployment process is staging that code so that it's ready to go to production and start serving the customers.
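One way to picture the gate Will mentions, continuous deployment that stages a release but waits for a human, is a workflow that only runs when someone explicitly triggers it. This is a hypothetical sketch; the deploy script is a placeholder:

```yaml
# .github/workflows/deploy.yml (hypothetical manual-gate deploy)
name: deploy
on:
  workflow_dispatch:           # runs only when someone says "we're ready"
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production    # can additionally require reviewer approval
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh   # placeholder for the actual deploy API call
```

The CI checks still run automatically on every merge; only the final push to production waits on the dispatch.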
STEVE_EDWARDS: Thank you for those definitions. I know a lot of times you get acronyms and terms thrown around, and newbies, or shall we say less experienced people, don't always know what they mean. I know I was confused on those for a while till I dug into them. So, you know, obviously the rage these days is serverless, to paraphrase one of my favorite Fletch quotes. You started out with AWS with their Lambda functions, and Netlify has their serverless functions, and Azure has theirs. So a lot of these platforms are going to have these functions, bits of code that you can run on the backend, in their environment, where in theory you don't necessarily need a backend yourself. So how does DevOps play in a serverless world?
WILL_BUTTON: I think it has very little impact on DevOps overall. It removes the fact that you have to build, install, and configure servers. But really, that's a small part of everything that goes on in DevOps from a day-to-day perspective. I don't think a lot of people are building servers every single day or on a frequent basis. The one thing that serverless does, I think, is open up the opportunity to focus on a lot of areas that may have been neglected in the past. So if you free up the time or the overhead that you would dedicate to running OS patches on a server, then you can tackle other things. You can address things like improving your monitoring and alerting to increase the availability of your system. And there are also other aspects of it. You've got caching servers and database servers and load balancers and all of these other pieces that make up your overall application, and the server was just one part of that. It took a lot of time if you were managing servers manually, and it pulled your efforts away from other things that you can now focus on.
STEVE_EDWARDS: Okay, so let's talk about deployment. There are a lot of different types of deployments, and you've mentioned Docker, you've mentioned Kubernetes. I don't know if we've got a real good definition of what Kubernetes is. My understanding of it, as the front end person, is that it's basically deploying on Docker at really, really big scale. I don't know if that's an accurate description or not. But can you talk about what Kubernetes is and why it's such a big player in hosting and deployment?
WILL_BUTTON: Yeah. So a lot of our applications are built to run in Docker now, and Kubernetes is just this underlying orchestration system that allows you to define: here's my app. Let's say we've got an API and a front end application, and each of those runs in a Docker container. We can define that we want each of those to run with three replicas, or three running copies, for high availability. And they should sit behind a load balancer, and they have access to a particular database. Kubernetes allows you to define all of that in a config file, and then it makes sure that your application meets those requirements. So if you say, I need three replicas, and one of them crashes, it'll launch another one for you. And then networking is another thing it provides that makes life easy, because you have consistent networking. Whenever I deploy my UI and my API, I can rely on a consistent DNS name for them to find each other in a Kubernetes environment, so I don't have to try to pass around names or store DNS entries in an environment variable.
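As a concrete sketch of that config file, here is roughly what the API half of Will's example could look like: a Deployment asking for three replicas, plus a Service that gives the UI a stable in-cluster DNS name ("api") to find it by. Image name, labels, and ports are illustrative assumptions:

```yaml
# hypothetical Kubernetes manifest for the API in the example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                  # Kubernetes keeps three copies running;
  selector:                    # if one crashes, it launches another
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: my-registry/my-app:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api                    # resolvable in-cluster simply as "api"
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
```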
STEVE_EDWARDS: Okay. So I'll read the note as we have it written here: Kubernetes, AWS, Heroku, et cetera. Where should you deploy your app? Now, obviously, I think that's going to depend on the nature of your app. You know, if you're running maybe a PHP backend versus a Node backend, or maybe Ruby or Java or Go or Rust or whatever. Although I'm sure some of these hosting environments can handle all of those. So what are some of the different options for deploying different types of apps?
WILL_BUTTON: Yeah, this ties back a lot into the question that Aimee asked earlier, and it comes down to scale and resources. In my background, I work a lot with early stage startups, so there are typically fewer resources at a technical level there. For most early stage startups, the first thing they hire is software engineers to build the application, so you don't have a lot of sysadmin or DevOps type resources available. And the other thing is scale. Since they're an early stage startup, they typically don't have to worry about high volume or large numbers of customers, although that's the ultimate goal. And so for those types of environments, I'll generally try to steer people away from AWS or Azure or GCP and steer them more towards something like Heroku, because Heroku has just a really great onboarding system that does a lot of the DevOps and the CI/CD stuff for you, just by answering a few questions or filling out a few configuration values. It allows them to deploy quickly, easily, and safely, and keep their existing resources focused on what they need to discover as a company to be successful as a startup. But then as you grow, as you hit that scale and get more resources, there's this opportunity to move over to something like AWS or Azure or GCP, because in doing so you can leverage the economy of scale. If we're running a much larger environment with more containers and more backend resources, then we can take advantage of the cost savings in AWS to reduce our operating costs, which makes a healthier profit margin for the business. So to summarize that into a shorter answer that someone could take away and do something with: if you're just starting out, I think Heroku is the way to go. But as you grow, you'll be able to save money and increase your capabilities by moving towards one of the larger cloud providers like AWS, Azure, or Google.
When I went freelance, I was still only a few years into my development career. My first contract paid 60 bucks an hour. Due to feedback from my friends, I raised it to 120 bucks an hour on the next contract. And due to the podcast I was involved in and the screencasts I had made in the past, I started getting calls from people I'd never even heard of who wanted me to do development work for them, because I had done that kind of work, or talked about or demonstrated that kind of work, in the videos and podcasts that I was making. Within a year, I was able to more than double my freelancing rates and I had more work than I could handle. If you're thinking about freelancing, or have a profitable but not busy or fulfilling freelance practice, let me show you how to do it in my DevHeroes Accelerator. DevHeroes aren't just people who devs admire, they're also people who deliver for clients who know, like, and trust them. Let me help you double your income and fill your slowdowns. You can learn more at devheroesaccelerator.com.
STEVE_EDWARDS: Speaking of AWS, did you ever hear about their Infinidash new product?
WILL_BUTTON: No, I haven't heard of that.
STEVE_EDWARDS: Oh, no kidding. It's a whole joke thing.
WILL_BUTTON: Oh, wait. Yes, I did. Because I was thoroughly confused on that for a while.
AJ_O’NEAL: You're like, oh, dang, I need my Infinidash certification now. Right?
STEVE_EDWARDS: Oh, and all the people playing along, oh, that was hysterical, especially when Werner, I forget his last name, the head of AWS, tweeted out, yeah, we'll be having an announcement about it. Anyway, that's a whole side note I've talked about in previous podcasts. That made me laugh so hard for the longest time. Anyway,
AIMEE_KNIGHT: Sorry, when you mentioned the AWS one, was it Fargate? I feel like I've heard of that, I don't know how to pronounce it, but would that be comparable to GCP's Autopilot? And would you still steer people away from using something like Autopilot if they were large enough?
WILL_BUTTON: I'm not actually familiar with Autopilot, so I don't know if it's comparable or not.
AIMEE_KNIGHT: It's their managed GKE. So you can run GKE yourself, which would be GCP's version of Kubernetes, but then you can have Autopilot, which has a lot of best practices built in.
WILL_BUTTON: Yeah. So that sounds like it's very much on par with Fargate.
AJ_O’NEAL: So first question I've got: you did not mention the cheapest, most reliable, easiest to use of the cloud providers. Where's DigitalOcean?
WILL_BUTTON: Actually, yeah, DigitalOcean is there, and they have a very strong following. But in creating this video last week, where I broke down the leading cloud providers, they just don't have the market share when compared to some of the other cloud providers. If you look at DigitalOcean's offering, they're direct competitors with Azure, AWS, and GCP, which are just huge mammoth beasts. And whenever you look at the overall environment in terms of market penetration and market share, it's hard for them to even show up on the radar, even though they have a great product.
AJ_O’NEAL: I get that, for investors, they're positioning themselves as competition to that, but DigitalOcean isn't like that at all. DigitalOcean is easy. It's not complex. Anybody can figure out how to set up a DigitalOcean instance. It doesn't take a master's degree in AWS science to be able to get something up and running on DigitalOcean. You don't have to have a degree in YAML configs to get something up and running on DigitalOcean. Like you would say with Heroku or Docker Compose, you can just use simple tools and get things up and running, bam. So I would not put them in the same category as that. And they're reliable. One of the promises of AWS is that they're unreliable: their service agreement is zero reliability, 100% availability. Anything can fail at any time for any reason, and the default behavior is, it's broken, deal with it. With DigitalOcean, I love it when I get the service announcement emails from them, because they say, hey, just so you know, your hardware went down, we migrated you seamlessly to another instance, didn't even have to reboot the system. Everything's fine, you're good to go, just wanted to let you know. Name any other provider that gives you a seamless, beautiful, wonderful experience like that.
WILL_BUTTON: You know, in terms of ease of use and capabilities, I think DigitalOcean is very comparable to Heroku, because I think Heroku is just as easy to set up and be successful with. But DigitalOcean is marketing themselves as an AWS, Azure, GCP type environment. And I'm curious to see how that's going to work out for them long-term, because I think they would be much more successful to say, we're on par with Heroku, simple, easy to administer interface and services, but instead they're trying to compare themselves to those other guys.
AJ_O’NEAL: Interesting. Yeah, I guess I've used them for a decade, so I haven't thought of them like that. I can see that that's where they're moving, and I can see they're well positioned, because instead of starting from complexity, they're starting from simplicity. So as they add more functionality to their service, it's just a button click type of thing, rather than, okay, let's go into three different interfaces. Anyway, I was just surprised that you didn't give DigitalOcean love when you were talking about everything, because they are the most reliable and the easiest to set up in my experience, and I move people over to DigitalOcean to get them away from all the complexity. Because this is number two, this is my other problem, which maybe you'll sympathize with. People take the runs-on-my-computer problem and they basically just ship it to the cloud. Like, I have no idea how this works, I don't know what's going on, I wouldn't know how to fix it, no one on the team knows how to fix it or configure it, we happened to get a configuration that worked, and then we shipped it to the cloud. And so now you have the runs-on-my-computer problem at scale, as opposed to actually understanding why something worked or didn't work and having a document that says, this is what this does. Because my ideal scenario would be that you don't need to have something like Docker to run locally, because you have a README and it says, here's three steps: if you run these three scripts in the scripts directory, everything's going to be good. Right? And then you don't have to have all of that complexity that no one understands hidden away. You have simplicity. I guess this is what I would say: if you're going to get something to run in Docker, can you first simplify the process so that it only takes three steps, or one step, to run anyway?
Because if you could do that, then you just copy that script into Docker, and you know you're good, and everybody can replicate it. It works on everybody's machine. So I'm kind of more of a, try to solve the works-on-everybody's-machine problem first, and then ship that great product to the cloud, not hack it together and then ship something that you have no idea what it's doing or how it works, where it just happened to be that I copied and pasted some config file from Stack Overflow and it seems to work, maybe. So do you run into that? What is your opinion on the matter?
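AJ's "copy that script into Docker" idea might look something like this: a Dockerfile that reuses the same scripts a developer would run by hand from the README, so the local steps and the container build are literally the same commands. The base image and script names are illustrative assumptions:

```dockerfile
# hypothetical Dockerfile reusing the README's three setup scripts
FROM node:18-alpine
WORKDIR /app
COPY scripts/ ./scripts/
COPY package*.json ./
RUN chmod +x scripts/*.sh && ./scripts/01-install.sh   # e.g. npm ci plus system deps
COPY . .
RUN ./scripts/02-build.sh                              # e.g. compile / bundle step
CMD ["./scripts/03-start.sh"]                          # same script used locally
```

The point of the design is that the scripts, not the Dockerfile, are the source of truth, so anyone can run them directly on their machine or let Docker run them.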
WILL_BUTTON: I agree with you. I agree with you a hundred percent, with a small twist. I agree that to run it locally, it should be one command. And that's the reason I use Docker locally: because if I build it to run locally in Docker, and it requires all of these dependency installs and all the different requirements to make it run, I can bundle all of that into the Docker environment, so that the person running it locally doesn't have to have anything installed on their local workstation other than Docker itself, and they can run one command and bring it up. Now, that may not be the simplest way to get it running on their environment. But the one thing it does is it's also very, very similar to how this is gonna run in production. So later on, when it's 2 a.m. and the site goes down and they're on call working a production outage and looking at production, they've got an apples-to-apples comparison. They can see what's running locally versus what's running in production, and it makes it easier to troubleshoot and understand what's happening at scale, because both are built and executed in a similar fashion.
AJ_O’NEAL: So I get that for Python and C++, but why would you need that concern for Node or Go or Rust or any of the modern web languages?
WILL_BUTTON: Just so that it's the same. Because, let's talk Node.js: if I'm running it locally using nodemon or whatever, with a specific version of Node, it's running as a single command. But whenever I put it inside of a Docker container, now it's running as a command in a specific environment, in a specific Docker container, built off of a specific flavor of an operating system that may have different memory limits. One of the things that can happen there is, if I'm running Node.js locally on my workstation where I have 64 gig of RAM, I may not see a memory leak that's happening in production on a Docker container that only has four gig of RAM. So if I can replicate that locally, I can see similar performance metrics and similar operational patterns on my local workstation and out in production.
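One way to replicate that locally is to cap the dev container's memory at production-like levels, so a leak that would OOM a 4 GB production container also surfaces on a 64 GB workstation. This is a hedged sketch; the service name and exact limits are assumptions:

```yaml
# hypothetical Docker Compose override for production-like limits
services:
  api:
    mem_limit: 4g        # cap the container at roughly production's memory
    environment:
      # keep the V8 heap comfortably under the container cap
      NODE_OPTIONS: "--max-old-space-size=3584"
```

With this in place, a steadily growing heap hits the same ceiling locally that it would in production, instead of hiding behind the workstation's spare RAM.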
AJ_O’NEAL: That's an interesting take. I get that.
WILL_BUTTON: And I think that's one of the areas where sharing DevOps principles with software engineers really pays off: whenever you have those on-call issues, everyone who works on the application is familiar with not only how to write the code, but how it's operating out in production. Because once you understand how it's operating out in production, you have smart people on your team who can start making educated guesses while they're troubleshooting, trying to isolate the root cause.
STEVE_EDWARDS: All right, so if there's no more questions, we can head into the fun time we know as picks.
Hey, folks, it's Charles Max Wood. I just wanted to jump on real quick and let you know that I am putting together a podcasting course. I get asked all the time, and I've been coaching people for the last six months: how do you start a podcast? How do you put it together? What do I need in order to get it going, et cetera, et cetera. I've put together the curriculum, and I did it through coaching a whole bunch of people, and now I want to share it with you. You can go check out the course. It's actually going to be a four week masterclass that will walk you through the entire process of launching a terrific sounding podcast and putting together content that people want to listen to. You can find it at podcastbootcamp.io.
STEVE_EDWARDS: Will, being a podcast host, I'm going to assume that you have some knowledge of picks. Do you have anything prepared for us?
WILL_BUTTON: As a matter of fact, I do. I picked-
STEVE_EDWARDS: Right on.
WILL_BUTTON: For this podcast, I picked the book Site Reliability Engineering, and I'll throw a link to it in the chat. The reason I picked this book for this particular audience is, I'm sure that no one listening to this podcast has any desire to become a site reliability engineer. And if you do, we should probably talk about getting some professional help, because it's just not a healthy decision to make. But this book, this book is so well written. It's a collection of essays from people who helped build and scale Google. The chapters are broken down so that as a software engineer, you can look at the different chapters, see things that you are either working on or that resonate with your environment, flip over to that chapter, and read a really short essay by someone who has taken this to a massive scale, and get their collective thoughts on it. And so in just a short time, you can learn a great deal about what's happening with your application. I think it does a great job of influencing architectural and coding decisions that you make when writing code locally. So even if you don't wanna become an SRE, I think there are valuable lessons in this book for people who write code for a living.
STEVE_EDWARDS: I have not heard that term before, site reliability engineer. So I'm going to assume by the title that that's just the person who makes sure the mice are still running in the wheels and keeps everything going, and the glue and the band-aids and the paper clips are holding everything together. Is that right?
WILL_BUTTON: Yeah. Typically you don't get into an organization with SREs until you're operating at a pretty significant scale, but their primary role is to monitor production. They're usually the first responders during a production outage, but when they're not working on outages, they're looking at scalability, performance, and efficiency, and then feeding that information back into the engineering team to make it a more performant application.
STEVE_EDWARDS: Right on, right on, right on, as Matthew McConaughey would say. Or no, he's "all right, all right, all right." Anyway, for those of you not familiar with that, that's a Dazed and Confused movie quote. AJ, what do you have for us for picks? Take your button off mute, AJ.
STEVE_EDWARDS: All right, Amy, let's squeeze yours in before you have to take off.
AIMEE_KNIGHT: Yep. So I'm going to pick, I don't know, nobody is paying me to say this, but as somebody who entered into DevOps, I kind of did it off and on a little bit at different jobs, but I'm full-on DevOps now. And I know we kind of talk... this is why I have my opinions or whatever. If you're listening to this podcast and you're interested in taking a dive into it, I'm not saying don't use one of the other cloud providers, but I just can't say enough good things about my experience with GCP. So again, not saying that others are bad or anything like that. I just come at it as a person this is new to, and I've had an awesome experience learning all of their different services, including Kubernetes. Of course, I work under people so they can guide me in the right direction and stuff like that, but I've just had an awesome experience. So, if you're looking, sorry.
AJ_O’NEAL: No, I was just saying, I mean, it's okay to say AWS is bad. We all know it.
AIMEE_KNIGHT: I'm just saying I have had such a good experience, even learning Kubernetes on GCP, and they have different tutorials and stuff like that to get you started, called Qwiklabs, which I'll try to drop a link for, because that would be my pick. But that's it for me, because I do have to jump.
STEVE_EDWARDS: And so before you go, I just wanted to clarify one thing real quick, because I was a little hazy. Because you're saying you like GCP, does that mean you're saying all the other ones are bad?
WILL_BUTTON: That's what I heard.
STEVE_EDWARDS: No, that's what I heard.
AIMEE_KNIGHT: You know, because I feel like we probably have a lot of people for whom, like, this isn't a DevOps podcast, and maybe they want to fiddle around with stuff. Like, I've had such a great experience. And just hearing from my managers, if you are gonna grow to a scale, everybody, they seem to really love GCP.
STEVE_EDWARDS: So. Yeah, no, we get it. Yeah, it's, everybody has their own preferences, you know, and that's yours. So.
AJ_O’NEAL: Well, just remember, usually the mark of being most popular is that it's probably the worst. We see this with PHP, AWS, most of the NPM modules on GitHub.
STEVE_EDWARDS: Easy on the PHP there. Okay. All right. Thank you, Amy. So that leaves me and as we all know, my dad jokes are the highlight of this podcast and people tune in just to hear them many times.
AJ_O’NEAL: I'm gonna unmute so I can laugh audibly.
STEVE_EDWARDS: Yes. So I do have a bit of good news. A few weeks ago I was quite saddened that one of my sources, standup t-rex on Instagram, was gone, but he's back. I happened to find him the other day, so I have one of my sources for jokes back, and I am so happy. So I'll start out with a computer-based one.
AJ_O’NEAL: Oh that wasn't a joke.
STEVE_EDWARDS: No, no, no, no. No jokes yet. So I was curious to see if you guys had heard of the band called 1023MB. They're good, but they haven't been able to get a gig yet.
AJ_O’NEAL: Wait, that's last week's joke, wasn't it?
STEVE_EDWARDS: Did I say that? Yeah. Okay. Well, Will hadn't heard it, so at least I had to share that with him. And then I just clicked on something wrong and lost my joke. So I've got a couple animal jokes here. So what's the worst part about going out to eat duck? The bill. So, side note, Will, you might be of an age, like myself, to remember these, but do you remember all the duck shirts that used to be around in the late 80s? So you had like Mallard Justin, with the duck that had the head backwards, and Just Another Bill was the one I had.
STEVE_EDWARDS: Love those shirts. I actually found them at a shop one time on the Oregon coast and I was so excited.
AJ_O’NEAL: I think I remember Just Another Bill.
STEVE_EDWARDS: Yeah. And then, when I was in school, I failed my anaconda breeding class because of a late assignment, because my homework ate my dog. So anyway, those are our picks for the day. Thank you to Will for coming over to our podcast and joining us to talk about DevOps. Will, if people would like to contact you, what is the best way to reach you and share in your sage wisdom?
WILL_BUTTON: So on Twitter, I'm at WFButton. And you can also catch me on YouTube at my YouTube channel, DevOps for Developers.
STEVE_EDWARDS: Alrighty, AJ, I think you said where we can reach you, right?
AJ_O’NEAL: Say what?
STEVE_EDWARDS: You said where you can be reached, right? I forget, you were coolaj86, and now you're...
AJ_O’NEAL: Well, I'm still coolaj86, it's just, don't follow me there. Just follow BeyondCode.
STEVE_EDWARDS: BeyondCode, there we go.
AJ_O’NEAL: Underscore BeyondCode on Twitter, but I've got the links in the show notes. Yeah, if you wanna search on YouTube, it's Beyond Code Bootcamp. But I actually don't do the live streams on that, because I try to just push the high quality stuff there and do the playlists there. Because the live streams are, you never know how a live stream is gonna go. Usually it's frustrating, because you run into real-world problems.
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. To deliver your content fast with CacheFly, visit c-a-c-h-e-f-l-y dot com to learn more.