CHARLES MAX_WOOD: Hey everybody and welcome to another episode of JavaScript Jabber. This week on our panel we have Dan Shappir.
DAN_SHAPPIR: Hey hey from Tel Aviv where it's awesome weather.
CHARLES MAX_WOOD: Steve Edwards.
STEVE_EDWARDS: Hello from Portland where it's rainy as always.
CHARLES MAX_WOOD: I'm Charles Max Wood from DevChat.tv. And here in Salt Lake, it's funny because we had a whole bunch of heat, and then it was cooling off, and I was like, oh, this is going to be nice, but then it rained and rained. So, can't win, you know, kind of got both of what you guys are talking about. This week we have a special guest, and that's Gareth McCumsky. Gareth, do you want to introduce yourself?
GARETH_MCCUMSKEY: Hey there, folks, I'm Gareth. I am from Cape Town in South Africa, where it's the middle of winter and freezing cold. And I'm a solutions architect at Serverless Inc., a company based out of the United States. We built the Serverless Framework, probably the most popular serverless application development framework available at the moment. And that's me, in short.
CHARLES MAX_WOOD: Nice.
Your app is slow and you probably don't even know it. Maybe it's fine in most places, but then the customer loads up that one page, and after a couple of seconds, their attention disappears into Twitter and never comes back. The reality is there are performance issues in your app, and they're affecting your customer experience. What you need to do is hook your app up to Scout APM and let it start telling you where the slowdowns are happening. It makes it really easy. It tells you how slow things are and what the problem is, like N+1 queries or memory bloat. It's also built for developers, so it makes it really easy to identify where the fix needs to go. I've hooked it up to some of my apps, and I saw what I needed to fix in a couple of minutes. Try it today for free and they'll donate $5 to the open source project of your choice. Just go to scoutapm.com slash devchat and then deploy it to your app. Once you do that, they'll donate the five bucks. That's scoutapm.com slash devchat.
CHARLES MAX_WOOD: You gave a talk, as Dan and Steve did, actually, at JS RemoteConf. And so I'm just going to let people know that you can go and get the videos for the talks at JS RemoteConf.com. I'm still charging for tickets for another few months, and then we'll probably just put them up on YouTube or something. But yeah, you can go check those out. To get us started, Gareth, I've changed the way that I like to start the shows. What I'm looking for is the story or question that somebody would tell you where you would turn around and say, oh, well then you need to use serverless. So what is that? How does this usually come up in conversation?
GARETH_MCCUMSKEY: Serverless is a tricky thing to talk about at the moment. It's a very broad topic that's really blossomed out over the years. There are a lot of traditional use cases that you can point to: if somebody says, I want to do X, then I point at serverless. But that's really, like I said, blossoming out. And this is mostly because serverless is essentially a way to use the existing managed services of the cloud in building your solution, whatever that solution might be. And these cloud vendors don't just create something, spit it out there, and wait around for everybody to sign up; they're constantly building new stuff all the time. So that's why this sort of breadth of use cases keeps widening. But if I have to look at the traditional stuff, if there's one massive thing that folks are looking at serverless for as a solution, I'd say it's something along the lines of a REST API, for example, where somebody's building a web application with some kind of static front end, like a Jamstack site, for example, which is becoming more and more popular these days. And they need some way to build a backend API for that, because maybe they're React developers, they have a team of React developers but not a team of backend developers, and they need to get this API up and running. Serverless is a really great way to go ahead and build that out.
CHARLES MAX_WOOD: Yeah, it makes sense.
DAN_SHAPPIR: Well, about it making sense... I mean, I could build an API for a front-end application way before serverless. I've been building APIs for stuff even before the cloud, just to show my age. So what makes serverless so great for building APIs, compared to just, you know, setting up a server on the cloud somewhere and using that to build an API?
GARETH_MCCUMSKEY: You can do that.
CHARLES MAX_WOOD: It's funny, I said it made sense and you called me out on it. I just want to say, normally that's my approach, Dan. Still, I'll build an API on my own and deploy it to a server.
GARETH_MCCUMSKEY: Yeah, there's a lot of... how can I put it? I mean, I suffer from the same thing myself. I've got an old IBM 5150 back here, needing a bit of repair work. I've got a couple of actual servers sitting in a box here. So I tend to play around with a lot of old tech. I spin things up all the time that I don't really need to, because I like building things from as close to scratch as I can possibly get. But when it comes to building a solution for, essentially, the people that are paying your salary, or for your own organization, there's a lot more you have to take into consideration. You can't just say, well, this is the way that I used to do it, I like the way this sounds or the way this feels, anything like that. You tend to need to approach the problem with far more consideration of the eventual effect. So what serverless gives you, the very short version, is a way to use resources in the cloud to replicate features that you could deploy by yourself, but it removes all of the work involved in, first of all, setting all of that infrastructure up yourself, and also maintaining it over time. The basic way I like to think about this, especially in the API context, like I said, there's a ton of use cases, but in this API use case, is that there are three basic things you need for any kind of API. That's a network: you need to be able to accept HTTP requests. You need compute: some way to receive those requests, process them, and analyze them. And you also need a data store, because ultimately what you're going to be doing is storing data somewhere and retrieving data from somewhere. And that's really what you're providing when you spin up any resources.
So if you look at the sort of traditional method in AWS, for example, you'd spin up a set of infrastructure that is essentially along the lines of three EC2 instances, load balanced across availability zones in AWS for good redundancy. You have a load balancer in front of that as well, to split your traffic across those instances. So now you've got network and compute sorted out. Now you need to spin up a relational database, which you normally need to vertically scale, usually to the maximum predicted traffic that you think you might get on your solution, and maybe a little bit extra for headroom, because you never know, maybe it's far more than you expect. And then you still normally end up spinning up an extra relational database instance as your read replica, just for that kind of redundancy again. And this is familiar. We know how to do this. This is a problem that we've solved over time, except for the fact that it is not necessarily instant. Somebody has to actually take the time to create and set up all this infrastructure. Somebody has to take the time to be aware of the shortcomings of this kind of infrastructure, in the way of handling operating system updates. If I have application software running on these servers, like Apache or Nginx, or whatever software might be running my PHP or my Node, I need to stay on top of that and maintain those applications and make sure that they don't run old versions that can become vulnerable. This normally ends up being far more work than most people realize. More and more, we're actually seeing a lot of organizations slowly moving away from doing everything entirely spun up themselves. Even if it's just along the lines of...
An organization that I've worked for in the past used to have their own caching solution set up, replicated across EC2 instances, which they ended up realizing was costing about as much as running CloudFront, which is AWS's built-in caching mechanism that you can pretty much put in front of anything and that acts as a CDN. They ended up just switching to that because, infrastructure-wise, it was costing them the same, but they'd had to spend a lot of time and effort maintaining these EC2 instances for their cache. And they found that their own cache kept falling over on Black Friday, because there just wasn't enough capacity to manage all the load coming in at the same time. So what serverless does, and I've been talking about EC2 stuff, but what serverless does is say: instead of a load balancer and Apache or Nginx sitting in front receiving network requests, I'm going to use a service that exists in AWS called API Gateway. And API Gateway is designed purely for accepting HTTP requests. You tell it, create me an endpoint that can receive GET requests, or create me an endpoint that can receive POST requests. You can add validation on top of that that says my POST request should have the following JSON schema applied to it. All of these features are built into something like API Gateway, which can then handle those HTTP requests for you. It's also automatically going to manage my load for me, so I don't need to worry about a load balancer and the number of EC2 instances to spin up. It does have a maximum capacity, but I know what that is ahead of time. I can plan for that. And the same goes for services like Lambda, which can take over the compute portion of that tripod. And then I can look at the data store: instead of a relational data store that requires a specific capacity or EC2 instance size,
I can use a tool like a DynamoDB, for example, which has a far more flexible scaling model than something like EC2 or RDS does.
DAN_SHAPPIR: So essentially, if I'm summarizing it down, what you're saying is that I can set it all up on my own. And I won't reiterate all the components that you listed as part of that home-grown, let's call it, quote unquote, solution. But then I would have to have somebody with the appropriate expertise on hand to manage all these various services. I need a solution for monitoring them. I need to be able to properly upgrade them. I need to be able to handle the traffic in various scenarios, like you said, when there's a spike, or in the middle of an upgrade, and all these things. And essentially, if I'm using serverless, then somebody else is doing all this work for me. Does that more or less summarize, in a nutshell, what you said?
GARETH_MCCUMSKEY: Yeah, pretty much. There's one extra aspect I can add to that as well. When I'm spinning up, for example, the base recommendation from somebody like AWS is that you have three EC2 instances for your web application, so that you can spread them across the three availability zones that they have in every region. This is a good redundancy practice. I know a lot of small organizations tend not to do that, and you'll only have one, because a little bit of downtime now and then isn't an absolute killer for them. But for organizations that want that redundancy, you generally need to spread things like EC2 instances across three availability zones. The downside of something like EC2 is that you then have a virtual machine running permanently, 24-7, 365, billing you by the minute, even if you have no traffic. Maybe your application only has traffic from nine to five, and at two o'clock in the morning there's zero traffic, there's nobody coming to the site, and you've still got three EC2 instances sitting there, executing, and consuming your wallet. RDS is exactly the same thing. You have a relational database which, again, as I was saying, you're going to size as big as you expect your traffic to get, to make sure it can handle that traffic, even when there is no traffic. Because scaling a relational database up and down vertically is a very difficult thing to do without having service interruption. So there again, you've now got a service that is charging you by the minute, usually quite a hefty fee, even when there is no need for it to be running at that point. And there are services related to relational databases that vendors like AWS have made some changes to, and they call it serverless Aurora.
But even that on its own has some of its own problems with how it handles that serverless aspect, which is why I mentioned a service like DynamoDB for handling that sort of serverless, pay-per-usage type of model more gracefully.
DAN_SHAPPIR: Serverless does come with its own restrictions and limitations. I mean, you know, there are no free lunches. If I'm going to get all the benefits of serverless, I'm going to have to abide by some of these restrictions. Can you elaborate on how I would need to architect or build my solution differently for a serverless type approach than for the standard server approach that we're all fairly familiar with?
GARETH_MCCUMSKEY: One of the big ones is that if you want to get to a situation where you're receiving some of the benefit of running serverlessly, you can do that relatively easily, depending on how you currently run your systems. But ultimately, if you want to move to a point where you're taking maximum advantage of running serverlessly, your application is going to be architected very differently to the way it is now. So maybe I should walk through an example of what I mean by that instead of the abstract concepts. If, for example, you have an application running on Express, and it's running on EC2, it's running perfectly fine there, and you'd like to look into serverless as a way to move forward in the future, maybe it's something you want to consider. The first approach we often see a lot of organizations taking is trying to find a way to lift and shift their existing application and try to benefit from serverless. Fortunately, with an app built in Express, for example, this is possible to a degree, because you can actually take something like Express and execute it within the context of Lambda, which is the compute environment I mentioned earlier, and then apply an API Gateway in front of that Lambda. So now, instead of receiving HTTP requests to the Express server directly or through some other load-balancing layer, API Gateway is the endpoint that gets called. That then executes a Lambda function, which can then execute Express inside the Lambda function. Why I say this is only partially beneficial is because now, instead of taking full advantage of the capacity API Gateway provides you by having multiple endpoints, each with its own capacity limit, you're limiting your capacity to that single API Gateway endpoint that feeds into a Lambda function.
And you end up in a situation where you're still in the world of Express inside of this monolithic development environment, and you're not necessarily exposed to the depth and breadth of services available to you that you could use to enhance your application in a bunch of ways. So that's the first tier of what we often see folks doing when they're moving to serverless. The other side that we see, on the other end of the spectrum, is where applications are re-architected almost from the ground up to be using serverless. And this normally follows a strangler pattern situation, where somebody has done some kind of Hello World, they've gotten the basics under their belt about how to use something like the Serverless Framework, for example, or other frameworks like AWS SAM and so on. They've used these tools to help them spin up an API Gateway endpoint pointing at a Lambda function, and it does some impressive things. They now look at their existing application, and instead of lifting and shifting the entire thing, they take one portion of it, a typical strangler pattern, just taking one piece of that, breaking it out, building that out as an API Gateway-Lambda solution in serverless, deploying that into the cloud, and then sending all traffic that normally would go through that feature to this new API Gateway-Lambda serverless solution that they've built. And over time, you'll find that the organization will strangle out most of the features of the application and end up being serverless. The other side, of course, is greenfield. And in greenfield, you can kind of do what you want. That's where I ended up starting a few years ago, where we basically rebuilt an application from scratch and had total freedom to build it the way we wanted. So those are generally the patterns we see people taking.
DAN_SHAPPIR: If I take the first pattern, that of actually just running the Node server within a Lambda, how beneficial is that? I mean, at the end of the day, I understand that it's moving me in the right direction, but isn't it just adding constraints without providing that much value?
GARETH_MCCUMSKEY: So the interesting thing is your constraints are actually being removed. The reason I say that is because normally, as a developer, if I'm building an Express application and I need to run it on a virtual machine, I need to know enough about virtual machines to be able to create one, operate it, spin up operating systems, and manage traffic on it. I'm spending a lot of time managing the virtual machine infrastructure behind my application. What I get with something like API Gateway is I can essentially just say, create me an endpoint. And that's all it does. It creates an endpoint. If I remember correctly, the base capacity you have on a single API Gateway endpoint is 10,000 requests per second, which is something you can ask AWS to increase. So that's not a hard limit you're stuck with. The other benefit you get immediately is that, besides the non-maintenance portion, you don't have to worry about managing that load. That capacity is instantly available to you. You also don't pay for that load. So it's not like having a load balancer up and running in AWS that's charging you by the minute. If you have no traffic, you get no bill. And besides that, there's also a very generous free tier. I've actually worked on projects with folks where I've built an entire serverless solution doing thousands of requests a day, and their bill ends up being something like 58 US cents, because of Route 53 charging for DNS queries, even though their application is in production and running for them. That's because services like API Gateway, Lambda, and a bunch of others all have these free tiers that don't bill you anything until you go over those limits. Just alone, API Gateway gives you that benefit. You don't have to worry about load. You don't have to worry about load balancing. You don't have to worry about running behind on versions of your web server software, your operating system, and so on.
You just receive requests, and they immediately get pushed to a Lambda function. And then on Lambda itself, you end up with a similar situation where you get a benefit, because there, again, you don't have to worry about a machine to manage and maintain, and runtimes to manage. You essentially just upload your application, your Express application, and it now has the ability to run. By default, depending on the region, but in us-east-1 for example, you can run 3,000 Lambda functions in parallel. So if you happen to receive 3,000 simultaneous requests at exactly the same moment, those will all run in parallel. And again, that's a limit that AWS sets by default, which you can ask them to increase. So if you do a lot more volume than that, you can pretty much get AWS to raise the limits they set. But right away, you've gotten all that benefit. No more server to worry about. No more load balancing to worry about. No bill unless you're actually executing in Lambda or receiving a request in API Gateway.
DAN_SHAPPIR: And in terms of the persistence of data, because we've been talking about the Express server and...
GARETH_MCCUMSKEY: API Gateway?
DAN_SHAPPIR: Yeah, the API Gateway, exactly. What about the data services that I'm going to need for that solution?
GARETH_MCCUMSKEY: So you can still use your existing relational database if you choose to, if you've managed that over time. I know Express and a lot of web frameworks tend to work really well with relational databases. All the tooling has been built for that. Generally, though, if teams are open to using a new technology, it's interesting, because what I found building serverless applications is that it's opened my eyes to the breadth of possible options out there, not just with things like API Gateway and Lambda, but even with the potential data stores that I can use. Traditionally, I've built applications on relational databases, Postgres, MySQL, whatever, because that's what I've known, and it works. We normalize our data, we build our schemas, that's just the way things are done. But you can look at alternatives, something like a DynamoDB, for example. There's also MongoDB, and AWS provides that as a service now. These NoSQL-style services end up being incredibly powerful for serverless applications. And I picked DynamoDB specifically because it's designed along the same kind of spectrum as API Gateway and Lambda, in that it's got massive amounts of capacity available to you, 40,000 reads and writes per second, if I remember correctly, by default. But it also doesn't bill you unless it's being used. And you don't have to manage any of that load if you go into on-demand mode, where it'll just keep handling your requests for data until you hit that sort of global maximum. Now, DynamoDB is a very different beast to a relational database. You design your queries very differently, just like if you're using Mongo or Cassandra or any of these other sort of NoSQL data stores. But it doesn't mean that you're limited to something like a DynamoDB. DynamoDB is great for a web application because it's designed for OLTP. It's designed for transactional workloads where you know ahead of time what your queries are going to be, because you wrote the application.
You know what your queries are, what data you need to pull out of the database. So the big difference is, instead of building a database schema with a normalized approach right up front, you design your application and your queries at the same time, so that you can design your DynamoDB tables to match the queries your application actually needs. And it's designed for things like very fast queries. It can do single-digit-millisecond query times, no matter how much data you're storing in a single table. There are folks with 10 gigs of data in a table, and the query time is as fast as if they only had one item in the database. There's no effect of noisy neighbors, so if somebody is consuming CPU on a very big query, for example, that's not going to affect the other queries in your system. It's a very, very performant OLTP data store. But again, you can use a relational database. Normally, if you want to get to an ideal serverless situation, you'll probably switch most of your architecture to using these services as directly as possible. But when looking at a team that's getting familiar with this technology, take things slowly. Building serverless applications is incredibly forgiving, because of the nature of how these systems load balance and manage capacity and traffic and shape themselves to your application, in a way that if you happen to not be as optimal as you could be, it's not necessarily that bad a thing. You don't have to spend all that time and effort optimizing the hell out of things before you get into production. Whereas if you're running on an EC2 cluster with a relational database and you design a really bad query on your relational database, it can bring the entire thing down to its knees when you get a little bit of traffic. So I hope that answers the question.
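The "design your table around your queries" idea can be sketched like this. The key shapes (CUSTOMER#/ORDER# prefixes), table name, and access pattern are all invented for illustration; the point is that a query you knew about at design time becomes a deterministic key condition rather than a table scan.

```javascript
// Hypothetical single-table DynamoDB key design: each item type gets a
// predictable partition/sort key shape chosen to serve known queries.
function orderKeys(customerId, orderId) {
  return { PK: `CUSTOMER#${customerId}`, SK: `ORDER#${orderId}` };
}

// "All orders for a customer" is a known access pattern, so it maps to a
// Query with a key condition (fast, indexed) instead of a Scan.
function ordersForCustomerQuery(customerId) {
  return {
    TableName: "app-table", // hypothetical table name
    KeyConditionExpression: "PK = :pk AND begins_with(SK, :sk)",
    ExpressionAttributeValues: {
      ":pk": `CUSTOMER#${customerId}`,
      ":sk": "ORDER#",
    },
  };
}
```

Parameters shaped like this would be handed to the DynamoDB client's query call; the structure itself is plain data, which makes the design easy to unit test.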
DAN_SHAPPIR: Yes, yes it does. Thank you. And now I'm thinking a little bit about the architecture. When I architect a solution that's built as, let's say, a single Node Express server, be it replicated or not, it kind of encourages, or tends to encourage, a monolithic approach of building everything within that Node server, because it's there, it's persistent, and it's easier for the various components within that system to communicate with each other, because they're all just running within the same software, within the same server instance. It seems to me that serverless kind of encourages a microservices type of approach. But not necessarily, because you can also build serverless in kind of a monolithic way as well. So you might have a single serverless function being responsible for the entire API, or you might break the API out across multiple functions. So what would be the approach that you would recommend?
GARETH_MCCUMSKEY: This is going to sound like a bit of a cop-out, but it's really a case of whatever works for you. And the reason I say that is, especially in serverless, serverless being so new compared to all the other ways we've built applications, a lot of the really good best practices are still being worked out. Some have come up that have shown a lot more promise than others, for example. So you mentioned the microservices approach, and this has shown its strengths, but also its weaknesses in complexity. But maybe we should take a step back. For anybody who's not familiar with the idea of microservices, very briefly, it's a way of building an application where you have a bunch of small, tiny services, each doing its own little thing, that contribute to the overall application as a whole. And this is traditionally used in big companies. You find this in the Ubers and the... what are the other big ones? I've totally forgotten now. The Facebooks.
DAN_SHAPPIR: It's all over. Facebook does a monolith.
GARETH_MCCUMSKEY: Yeah, but a lot of these organizations will use the microservices approach because, for them, internally and structurally in their teams, it makes sense: they have a lot of small teams all building these small services that they can afford to maintain as small teams. Microservices is a difficult one to justify if you're a generic dev team, maybe five to ten of you. That traditionally has been an issue because of the infrastructure burden that's normally associated with microservices. That's really where this massive complexity with microservices comes in, because if you start looking into microservices now, imagining serverless doesn't exist, the real way to do microservices these days is to look into Kubernetes and containers. You need to have somebody who understands that entire ecosystem, who can help you manage your microservices with multiple containers over many EC2 instances, with Kubernetes controllers that manage load and uptime and a host of other things. It gets very complex very quickly. For large organizations with teams that can afford to manage this, it works really well. But if you take into account serverless, building microservices becomes a lot easier, because that infrastructure burden essentially goes away. And what we find is that if you're building an application as a set of microservices... Since I started development back in the early 2000s, we've always talked about this ideal of building something once and reusing it everywhere. But ultimately, you build this fancy class and you try to incorporate it into a new application, and it just doesn't fit the use case. It doesn't work. You have to make many changes, and you can't incorporate those changes into the version you used before, and it becomes a bit of a mess to reuse. But what we found is that when you're building in serverless as a collection of microservices, your reusability goes up.
Because instead of building a class that matches a specific use case, you're building something that's meant for a specific outcome. What I mean by that is, a simple example: if you look at an authentication service, what does an authentication service do? It receives a username and a password and validates them. Somebody can post a login request, and it will compare the submitted login to the stored login and then provide a JWT, for example. And that JWT can then be sent with further requests. So you have another function there that can receive the JWT and validate that it is a valid token before allowing that API request to continue. And that kind of service you can reuse everywhere. In fact, I have an authentication service I've reused about five times now across different projects, just because it works across all projects. So microservices really encourages that approach. And because you're now able to build things in these smaller, self-contained packages, with less of that infrastructure burden on top, it's far easier for you to build something that is very contained, unified, and portable, that somebody can just take and deploy into their AWS account, and now they have that service up and running. When you're looking at tools, and this is where I guess I can bring up some of the ideas around tooling: I've been talking about serverless via the cloud services like API Gateway, Lambda, DynamoDB, and so on. But if you want to build a serverless application by going into the AWS console, well, if anybody's done that, it's a bit of a nightmare situation.
Going into the AWS console, trying to go to API Gateway, which has one sort of design language and schema and format of doing things, and spinning up an API Gateway endpoint manually; then going to the Lambda service, which again is a completely different configuration, a different design language, looks completely different, functions completely differently, and adding a Lambda function in there; and then combining the two manually, and creating a DynamoDB table manually. That experience isn't great. It's not replicable. You can't take that infrastructure you just built and easily share it with somebody else. So tools like the Serverless Framework, for example, and I talk about that one because it's the one I know best, let you configure a collection of infrastructure elements together in one place. I can take a YAML file, run a serverless deploy command on top of it, and it's going to create API Gateway, upload my Lambda functions, create my DynamoDB table, connect them all with the right permissions, and allow me to then send queries to my API Gateway endpoint that hit my Lambda function that talks to my DynamoDB table. And the developer I work with can take that exact same thing out of our Git repo, deploy it into his AWS account, make changes, add extra features to it, and then merge it back in, and now we have extra infrastructure and Lambda functions that I can deploy to my version of the service. So managing microservices in this way becomes a lot easier, because now we speak a common configuration language that I can share with my team, and we can all work on it together at the same time.
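As a sketch of what that YAML file might look like, here is a minimal serverless.yml in the Serverless Framework's format, declaring one function, its HTTP trigger, and a DynamoDB table together. The service name, handler path, and table details are invented for illustration.

```yaml
service: notes-api # hypothetical service name

provider:
  name: aws
  runtime: nodejs18.x
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:PutItem
            - dynamodb:Query
          Resource:
            - Fn::GetAtt: [NotesTable, Arn]

functions:
  createNote:
    handler: src/notes.create # hypothetical handler path
    events:
      - http:
          path: /notes
          method: post

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        BillingMode: PAY_PER_REQUEST # pay-per-usage, as discussed
        AttributeDefinitions:
          - AttributeName: PK
            AttributeType: S
        KeySchema:
          - AttributeName: PK
            KeyType: HASH
```

Running serverless deploy against a file like this is what creates the API Gateway endpoint, uploads the function, creates the table, and wires up the permissions in one step.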
DAN_SHAPPIR: Essentially a configuration tool on top of the myriad configuration tools provided by Amazon or AWS is what you're describing.
GARETH_MCCUMSKEY: Yeah, exactly. So the way to think about this is that there are services like CloudFormation that AWS provides, and there are competitors to that, like Terraform, for example. And these are fantastic tools if you're in DevOps. They were designed for DevOps teams to help those teams spin up and manage infrastructure across cloud accounts. Unfortunately, because of that focus on DevOps teams, they're not great for developers trying to build an application. And that's where tools like the Serverless Framework come in and abstract on top of those tools, because if I'm a developer, I'm thinking in terms of feature sets and functionality that I need to provide to my users. Whereas as a DevOps practitioner, I'm thinking along the lines of I need an API Gateway endpoint, I need a Lambda function, I need an S3 bucket, I need a Dynamo, and so on. So tools like the serverless framework let me think along the lines of I need a Lambda function that's going to be triggered by an API Gateway endpoint, and in there I'm going to give permissions to DynamoDB because my code is going to call the DynamoDB table, and so on. So it's just a way to make the lives of developers easier, because CloudFormation is a pretty complex, massive beast that takes a lot of getting into just to get started. Whereas other tools can simplify that whole process and let you think in the mindset of a developer instead.
DAN_SHAPPIR: And how well does this integrate into my development environment? I mean, one of the great things about working simply with Node and Express is that I can just very easily have Node running locally on my own device and, you know, open a browser, hit localhost, and I can do all my development and debugging and whatnot right from my machine. I don't even theoretically need to be connected to the internet. What's the development experience like when I'm working with this sort of a serverless cloud-based approach?
GARETH_MCCUMSKEY: Right, so from the get-go, because Lambda is essentially Node, Python, and so on, you're not hampered in any way. You can still write code. Local development becomes a trickier beast to discuss. And the reason for this is because you're trying to maximize your ability to use what exists in the cloud. So another short way to summarize what serverless is, is that you're trying to maximize the use of managed services to reduce your need for undifferentiated heavy lifting. Why should I need to spin up an HTTP server when API Gateway gives that to me? Why do I need to build some way to manage files when I can store things in S3, for example? And because we're now working in an environment where we're maximizing our use of our cloud vendor in order to make use of these services, that makes local development a lot trickier. And I'm not gonna deny that. That's a problem that has plagued serverless for years now. And there are a lot of solutions that have come up along the way to try and solve this problem, all with varying degrees of success. One of the more common ones is a plugin for the serverless framework, for example, that a lot of folks in the community use, called serverless-offline, which essentially spins up a local HTTP server that lets you send HTTP traffic to Lambda functions. The downside of this, of course, is that you can trigger Lambda functions with more than just API Gateway endpoints. Lambda functions are event-driven. You can trigger them through API Gateway, but that's because the API Gateway event is triggered, and that's then calling a Lambda function. You could do things like have SNS, which is a pub/sub system inside AWS, trigger a Lambda function instead. SQS is a message queue system. You could have S3, for example, which is a file storage service, where I can upload a file into S3, and just uploading a file can trigger a Lambda function, which can then perform some action on the stuff I just uploaded.
And there's tons of these events all across the cloud vendors. I mean, if you build on the Lambda equivalent in Azure, it's the same situation there. The cloud vendor services trigger these functions. Using a tool like serverless-offline, for example, which creates an HTTP server, limits you to building Lambda functions that can only accept HTTP requests. Not necessarily ideal, but it can get the job done for you in that case. And there's situations where folks go all the way to the other end, where you can get something like, I'm trying to remember the name of it now, there's a tool that you can actually use to essentially emulate almost all of AWS on a local machine. You can spin up an S3 clone, a DynamoDB clone, a Kinesis clone, a Lambda and API Gateway. And I'm sure you can imagine the problem with that: you end up with the entire cloud on your development machine. And I don't have enough money to buy a laptop with 32 gigs of RAM to handle all of that. So what we're seeing is folks struggling to run things locally. And the other downside with these locally emulated environments, I mean, I've even written blog posts to try and help find ways to execute code locally, like using tools like Mocha, so you can create unit tests which are purely there so that your functions can execute on your local machine, with mocking of AWS services in front of them so that you're not actually calling DynamoDB in the cloud and so on, again with varying degrees of success. Folks don't like the idea of mocking services, potentially. So what we found is the other problem that ends up happening is you have all these complex environments to try and do things locally. You build it locally. It works in the environment you built. As soon as you push it into the cloud, you hit the age-old problem: it worked on my machine, but in the cloud it doesn't.
There's some other issue that needs to be resolved, something else you need to do in AWS to get your nice shiny new service working. So more and more, even at Serverless Inc, we're working on solutions to try and make local development work, but in the cloud. So what I mean by that is, we actually have something in beta right now if folks are interested, called Studio, that allows you to essentially deploy the service that you're building on your local machine into an AWS account, whether that's one your organization provides for you or your own. But in a way very similar to React, where you can turn on dev mode, and as you make changes to a file and save, it'll automatically refresh your build, refresh the page in your browser, and you can see your changes immediately. Studio gives you that same experience. You run Studio, it deploys into the cloud, which can take a minute or two. That's the normal deploy cycle. But as soon as I edit the Lambda function, that Lambda function is instantly uploaded into the Lambda service over a couple of seconds, and I can use the Studio interface in my browser to then send events into my Lambda function, whether that's an HTTP event, or an SNS, S3, SQS, or any event whatsoever. And that ends up being a much cleaner way to do local development in the cloud, because that kills a lot of birds with one stone. I no longer need some complicated local development system on my machine. If I'm using Studio, if my team is using Studio, I can give my service to somebody else and they can immediately start working on it and debugging it. I can debug any event that's hitting my Lambda function, not just HTTP events from API Gateway, which is a great win as well. I can pull all sorts of things. I can get my CloudWatch logs, so I can get my logging information from executing my Lambda function inside AWS.
It's testing in the cloud, so it's actually executing in AWS with IAM, which is the permission system that AWS has inside the cloud. It's using all the services that I would normally use. So if there's some weird quirk with DynamoDB that my local environment couldn't emulate, I'm going to see that, because I'm developing it in the cloud, and so on. So again, while that means you can't run things on your local machine if you're in the middle of an airplane in the sky, to some degree you can still write code. Your IDE can still validate your code. You can still do those things. But if you want to test that in the cloud, sort of local-in-the-cloud testing, you will need an internet connection for that.
Whenever I'm stuck on what to learn next, a lot of times I just go back to the fundamentals and think about how I can make those things more automatic. The reason is because when I focus on the fundamentals, I'm able to actually level up in all the other areas that I'm trying to learn. So I teamed up with Kyle Simpson to focus on the fundamentals of JavaScript. Kyle wrote the books You Don't Know JS Yet, and his Getting Started ebook goes over just the fundamental fundamentals, so to speak, of JavaScript. And we're putting together a 30-day challenge where you can actually level up on this stuff, get it down pat, and then you can go and learn all of the other things that you're doing that are based on these things. So if you go sign up for the challenge, you can do it at devchat.tv slash bookcamp. That was Kyle's idea. You can get the following as part of the challenge. You get daily training videos, which are worth about $150. You get daily exercises and homework, which again are worth about $97, especially with the coaching that we give you around them. You get access to the private Slack channel, which is worth about 20 bucks. You get access to a premium podcast series that Kyle and I are going to record. It's an eight-part podcast series where we talk through all the pieces of the book. You'll get three Q&A calls per week. And that puts you at about a $1,779 value. And what's great is you also get the audio from the podcast, you get the video from the training, you get the experience from working, and you get the visual reading learning from the book. So you're going to learn this in multiple ways. Once again, go sign up at devchat.tv slash bookcamp, devchat.tv slash bookcamp, and you can get it for $197. If you use the code JSJABBER, you can get it for 147 instead. So go check it out right now, devchat.tv slash bookcamp.
DAN_SHAPPIR: Now, you did just mention that I have access to all the various logs from the serverless service, but what about stuff like single-stepping or watching variables? I mean, it's wonderful that we can write and debug our client that's working against that API endpoint. But what happens if I want to step into that API endpoint, because it's returning some sort of invalid data and I want to try to debug that? Is there something that I can do about that?
GARETH_MCCUMSKEY: So there's many ways to answer that question. The first one I'm going to mention is that a lot of folks don't use debuggers and just use console.log, which works fantastically in this context. I know that's not an answer to your question. Yes, sorry, Steve.
STEVE_EDWARDS: I was just going to wonder how you function without a debugger, but that's just me.
GARETH_MCCUMSKEY: When I was talking about the idea of using Mocha before, where I essentially use Mocha to execute my code locally, that was awesome because I could use a debugger then. That meant I could step through my code with mocked remote services. But the other side of this is that one of the interesting things you get when you start building serverless applications is you realize that code is secondary. That sounds like a bizarre thing to say, but serverless is firmly in the camp of low/no code. There are a lot of ways that you can architect a serverless application that never actually executes any of your own code, that completely bypasses Lambda. The recommendation is, if you can architect things in a way that doesn't require code to execute in Lambda, do it. Because ultimately, it's going to save you time and money. It's going to be far more performant. It's going to be far more redundant in the way that it handles errors. An example of what I mean by this: if I have an API Gateway endpoint that receives data, I can add validation onto API Gateway to confirm that the data has the structure I need. Then I can immediately point that endpoint at a DynamoDB table. Because if all I was going to do with my Lambda function was accept the data and put it into DynamoDB, I'm executing code for no reason. If I can just insert the data as is into a DynamoDB table, why not just do that? I save myself the cost of running a Lambda function. I would have incurred the cost of running DynamoDB anyway, because Lambda would have called DynamoDB. And it's not just these services. I can do this with things like API Gateway to SNS and many other services. You can connect them directly. And you end up in a situation, to come back to the actual question, where you'll find that a lot of the time, a solution that you would have spent thousands of lines of code building ends up being maybe a couple hundred lines of code.
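The validation Gareth mentions attaching to API Gateway is typically expressed as a JSON Schema model (API Gateway's request validation uses draft-04). A sketch with invented fields, just to show the shape:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "LeadModel",
  "type": "object",
  "required": ["email", "name"],
  "properties": {
    "email": { "type": "string" },
    "name": { "type": "string" },
    "phone": { "type": "string" }
  }
}
```

With a model like this attached to the endpoint, malformed requests are rejected at the gateway, before any Lambda would have run.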
And you end up in a situation where debugging code becomes less of an issue, just because you have less code to debug in the first place. And this points to another advantage of serverless: as a serverless developer, you're more concerned with the overall architecture of your application, and how the various components inside the cloud vendor work together to provide the solution you're trying to achieve, than with the actual code that you're writing. And less code usually means less downtime, because the biggest single failure point with applications is often code. A developer does something stupid and brings a system down. That's usually the downfall of an application these days. And then, to actually address how you can get a debugger, that is something that we're working on at Serverless. So right now, we're building Studio as a sort of Postman-style way for you to execute events on top of a Lambda function. But ultimately, we would like to find ways to also allow you to debug things inline. That is something on our roadmap that we're hoping to bring out sooner rather than later. We're a small team. There's always lots of work to do. But that is a nice feature to have. Ultimately, though, it's not as critical as it might seem for a traditional application.
DAN_SHAPPIR: It kind of sounds like the selling point that people often use with functional programming or declarative programming is that you don't need to debug it as much and it's much clearer and less code and so forth. But it always seems to me that if you reach some point where you do have to debug for some reason, it can get really, really tricky. So essentially, I'm kind of hearing the same thing. Which is not really surprising because like you described it, it's a more declarative and functional approach to building services and servers. So yeah, I guess it's kind of the same limitation. Another question that I had is you mostly, well, you did mention Azure, but you mostly spoke about AWS and stuff like that. How much of a lock-in do I get into when I use a serverless type approach? Because if I'm just running my own node server on a virtual machine, I can literally run it everywhere. That's kind of like the whole point. Here, it seems that I'm buying into a lot of proprietary and custom APIs, which can make life a whole lot easier for me, but kind of lock me in into that specific solution. So what do you have to say about that?
GARETH_MCCUMSKEY: So the interesting thing is, I've essentially been working with the cloud since about 2008. And the number of times that I have heard of, or know somebody, or have myself been involved in actually moving from one cloud vendor to another has been zero. It's a problem I know is talked about a lot, or at least it's a perceived problem that is talked about a lot. The reason I qualify it like that is because there are very few cases where you actually need to move cloud vendor. I heard a story about somebody once who did have to do it. Not somebody I know personally. But the reason they had to was because the company that they were working for was bought out, and the company that bought them out had a beef with Amazon. So therefore they had to move to Azure. And I mean, these are sort of political situations that no developer has control of and you can't foresee. But the vast majority of the time, cloud vendor lock-in is a fear that is often not realized. I think a lot of the time this fear comes from history, what happened in the past, especially in the 2000s, with companies like Oracle, I'm going to mention, and various others, where they would lock companies into using their products and services with draconian contracts that didn't allow much freedom. Whereas when you look at the cloud today, these services and organizations are built around a very different paradigm. Back in those days, there were nowhere near as many organizations looking to use database systems like Oracle, for example. So the business model there was: let's grab as many of these very few people as we can and lock them into these exorbitantly priced contracts, because how else are we going to make money? There isn't a massive market. There aren't hundreds of thousands of people out there. That's completely changed today. There are hundreds of thousands to millions of people using cloud vendors all the time.
The cloud is a race to the bottom, essentially. So if you're using a cloud vendor, it doesn't matter which one; pricing is predominantly going to be set to keep and attract as many people as possible, because there's a huge market to satiate out there. So generally, that issue is not something that I've seen a lot of.
AJ_O’NEAL: You bring up something there, which is something I've been wondering about. My suspicions could be completely wrong. But with the rate that technology changes so quickly, sometimes something is over as soon as it began, like a flash in the pan. And so when you're talking about this being a race to the bottom, and I think about the economics of that, to me that signals that the cloud is not going to be the end-all-be-all. The thing is going to have to flip, because when the price becomes a race to the bottom and there's no competitive advantage, some other innovation is going to spring up to create competitive advantage. So I personally don't feel that it's worth the investment. When people talk about the cloud and they say, oh yeah, it only cost us 58 cents, well, the alternative is whatever cost you 58 cents would have cost you $5 on DigitalOcean, right? Or maybe less if you went with a lesser-known provider, maybe $3. But the investment that you put into the cloud, into a system like that, that stuff does start to get expensive. People use it in strange ways and it does start to get expensive, and you've got these problems with local testing we've been talking about. And so, you know, I wonder: is this really the end-all-be-all? Is this the end of the line, or is something else just around the corner? And maybe it's better to keep that intellectual property in a place where it's easy to transport, rather than having it tied to these cloud vendors that are less accessible in terms of debugging and testing and then being able to switch easily.
GARETH_MCCUMSKEY: Well, the debugging and testing side of things is really just a tooling problem. And you could have said the same thing about web development back in the day. Debugging and testing was a real pain in the ass. You couldn't really do that very effectively.
AJ_O’NEAL: It still is.
GARETH_MCCUMSKEY: It still is, exactly. That's my point. And that is just a tooling issue. Right now, there are vendors out there, there are people out there, there's open source projects, there's a lot of work being done in solving those particular problems. But there's always going to be problems that need to be solved in the development space. But what you find is that AWS, Azure and Google Cloud, Google Cloud to a lesser degree, I have to be honest, that...These cloud vendors are focusing on serverless very heavily as a way to attract people into their services. And the reason for this is the race to the bottom I mentioned.
AJ_O’NEAL: Lock in.
GARETH_MCCUMSKEY: What I'm trying to point out is that, spinning up a virtual machine, you can only be so cheap. Your CPUs cost a certain amount of money, RAM costs a certain amount of money, disk space, you name it, network and so on. That all costs a very fixed amount. But when you talk about the...
AJ_O’NEAL: The difference between 50 cents and $5? I don't get that argument. They're both zero in that case.
GARETH_MCCUMSKEY: My point is about the services that are provided, something like API Gateway, for example, even when I do need more than $5 worth. If I would normally spend a couple of hundred dollars on EC2 instances, but using API Gateway costs me $20 instead, the only reason I can get those savings is because instead of me using fixed infrastructure, which has a fixed cost to somebody like AWS or Azure, AWS can spread that cost out, because I'm only consuming a specific timeframe and there are many users of the system. I haven't locked away resources that aren't being used at 2am. I'm only using the resources I'm using, therefore I'm only being billed for the resources I'm using. What this means is that these services are now being worked on and optimized as a way to build an application. That's why you'll find a lot more investment and, what's the word I'm looking for, a lot more effort put into optimizing these services, because when you can spread the cost out amongst a lot of people, it becomes cheaper to run an application in that way. Does that make sense?
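The pay-per-use point can be put in rough numbers. The sketch below uses ballpark per-request and per-GB-second rates in the neighborhood of AWS Lambda's published pricing at the time; the figures are illustrative assumptions, not a quote.

```javascript
// Back-of-envelope Lambda cost: you pay per request and per
// GB-second of compute, and idle time costs nothing at all.
function lambdaMonthlyCost({ requests, avgDurationMs, memoryMb }) {
  const gbSeconds = requests * (avgDurationMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * 0.0000166667; // assumed rate per GB-second
  const requestCost = (requests / 1000000) * 0.2; // assumed rate per million requests
  return computeCost + requestCost;
}

// A million 100 ms invocations at 128 MB comes to well under a dollar,
// while a VM sized for the same peak would bill around the clock.
const cost = lambdaMonthlyCost({
  requests: 1000000,
  avgDurationMs: 100,
  memoryMb: 128,
});
console.log(cost.toFixed(2)); // ~0.41 under these assumed rates
```

The flip side, as AJ notes, is that heavy or unusual usage patterns can push this model past the flat-rate alternative.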
DAN_SHAPPIR: Yeah, but I actually have to kind of side with AJ on this and push the value in a different direction. Because, and I've heard AJ say this before, I mean, we described an architecture before with, say, three servers, and realistically with those three servers, unless I'm an Uber, unless I'm a Wix, where I work, you can probably scale to whatever scale you need to scale to these days. I mean, a single Node server can literally support hundreds of thousands of concurrent sessions if you want to go that way. So, in terms of the cost of the services that you're paying to the cloud slash hosting provider, there is value in serverless potentially, but it's not dramatic, I think. From my perspective, and again, you can disagree with me, the more important savings have to do with the management cost. I mean, if I need to get DevOps people on board and pay their salaries, or if I need to allocate people to keep my system and service up and running and properly tuned and up to date, those are the more significant costs, I think, than actually paying the hourly cost of a server. And the other thing is that I think serverless kind of encourages a more correct architecture for web services. And you kind of said that a microservices type of approach fits larger organizations more, or is needed by larger organizations more than it's needed by smaller ones. And that's true. But I also think that even a smaller organization can benefit from breaking up concerns and not building this monolithic type of solution that only one person within the organization fully understands from top to bottom. And if that person leaves tomorrow, then you're kind of stuck. Serverless kind of encourages a composition of services. You don't even need to call it microservices.
Just think about it as composing independent services, which can really benefit your architecture. So that's my take on it: it's less about the cost of the machines, more about the cost of the people, I think.
GARETH_MCCUMSKEY: It's also not just the cost of the people, it's having the right people in the first place. So, an example that might make it a bit clearer what I mean: there was an organization I was doing a relatively small project for. They were essentially a medical insurance firm that received leads, information about people who were interested in medical insurance, by email. These emails were structured in a very specific way. They would go to an inbox, and there was a script running on a server that would open up the inbox every 10 minutes or something like that, read the new messages out of the inbox and scan them for information, and then push this information into the CRM system so that this organization could contact those people back and help them find the medical insurance that they were looking for. Relatively simple task to do. But the issue with this is that the organization didn't have any staff on hand to handle this. They're not a development company. They were a team of essentially five or six people. They just needed a system that could push this information into their CRM so that they had a way to contact these folks. They were running on a server through a local co-location service that you can find here in South Africa. But the server kept having problems. There were issues constantly. There was work that needed to be done all the time, usually related to the inbox and so on. Ultimately, I was able to essentially rebuild this entire service using serverless. I used a service like SES, which is Amazon's email service that can receive incoming mail, dropped these emails into S3, triggered Lambda functions from S3 so that I could read the contents of these emails, and then pushed that data into the CRM. And what this gave me was a platform that had very little in the way of any issues.
Any issues at that point were code-related, nothing related to the infrastructure. None of this infrastructure had to go down for maintenance or needed backups, because all of it was managed and replicated by AWS automatically for me. No OS updates, nothing like that needed to be worried about. And the small organization could just continue doing what they needed to do. And that kind of thing happens a lot. That was just one example. They could have maybe tried to find somebody in DevOps to manage this, even on a freelance basis. But the unfortunate reality in the market is that folks with those kinds of skills are in short supply. I've seen organizations doing tens of thousands of requests a day on an application that could only find somebody to work on their architecture twice a week, because that's all that person had time for and all they could afford. So it becomes a tricky one. So it takes away that management, like you said.
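The parsing step in the pipeline Gareth describes, where the Lambda reads the raw email body out of S3 and pulls out the lead's details before posting them to the CRM, might look something like this. The field labels are invented; the real emails were just "structured in a very specific way".

```javascript
// Extract labeled fields from a plain-text email body.
// Labels like "Name:" / "Phone:" / "Email:" are illustrative.
function parseLead(emailBody) {
  const field = (label) => {
    const m = emailBody.match(new RegExp('^' + label + ':\\s*(.+)$', 'mi'));
    return m ? m[1].trim() : null;
  };
  return {
    name: field('Name'),
    phone: field('Phone'),
    email: field('Email'),
  };
}
```

Because this is a pure function, it is also exactly the kind of logic that can be unit tested locally, even when the surrounding SES-to-S3-to-Lambda wiring can only be exercised in the cloud.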
AJ_O’NEAL: Well, you're comparing apples and tyrannosaurs here, though. Because if it weren't you, if it was someone else... I think this actually touches on a point, though, because there's money in serverless. There's not money in education on how to run a server, but there's money in serverless, because it's a service that someone can make proprietary. And so that drives it. Like, DigitalOcean does a pretty good job of trying to create educational material. But anyway, if DigitalOcean were recruiting people and they reached out to this company and said, hey, we can get that running rock solid for you, well, they could take a couple of the scripts from their blog and make it a scalable system that's self-automated, that costs five bucks a month. So I think to say they had somebody who didn't know what they were doing who clunked it together, and then a super expert came in with a very specific niche technology and nailed it, isn't going to fly no matter which way you spin the argument, because we could just as easily say they had some guy who clunked it together who had no idea what he was doing with serverless and created a terrible system that didn't scale, because the code wasn't actually written in a scalable way, you know, there's like in-memory variables in the code or whatever, right? And so it could have been just the reverse. I feel like it's a matter of education on the part of the person that starts it up. Like, if somebody hired me as a consultant to come in and set up their web server, and I set it up with Node, it's never going to need an operating system update, because there's only one binary that's running on that server. There's nothing else that could ever be affected, right?
GARETH_MCCUMSKEY: The downside, like you said, is that, uh, well, there's only so many AJ O'Neals out there, and while education might be an issue, I think there's a lot of effort out there to try and educate people. The real problem is that if I'm, how can I put this? Like if I'm...
AJ_O’NEAL: If we just said, look, use Ubuntu Linux, don't worry about there being 600 versions of Linux, that's just a distraction and a waste of time. We said, just use systemd, don't worry about all the cool newfangled systems that come out every six months. Just use Ubuntu Linux, use systemd. Like, problem solved. You don't have to worry about the operating system. But I feel like, again, it's an educational issue. With serverless, you get a specific platform, whereas with a VPS you have to make decisions, and you need somebody. But that means the same thing either way: you either have to find someone that's an Amazon expert or a Google expert, or someone that's an Ubuntu expert or an Arch expert. You have to find an expert to do it.
GARETH_MCCUMSKEY: The downside is, who do you tell that to, first of all? You need somebody who wants to know that information, who actually cares. Most of these organizations that aren't a dev house don't have anybody there that particularly cares about the tech stuff. That means you do need to go find somebody with the interest and potentially the skills to do this, and they're in very short supply. That's just unfortunately the reality of our...
AJ_O’NEAL: As are you, you are in very short supply.
CHARLES MAX_WOOD: But I'm going to chime in here too and just say that there are tutorials out there that will walk me through writing a really simple script in JavaScript. And I can kind of muddle my way through that. And there are tutorials out there for me to put this on something like AWS Lambda, or use something like the serverless framework to get it up. And so if I'm a real scrappy entrepreneur that doesn't know how to code, I can figure out most of this, and maybe get some help on writing the code, because the rest of the infrastructure stuff I can kind of muddle my way through. And so it does remove a lot of, not just the decisions, but a lot of the know-how that I have to figure out, because once it's on Amazon or Azure or Google or wherever, it'll continue to run and they'll manage everything else. And you're saying, you know, you don't ever have to run upgrades because it's just running Node. That's not actually true. I mean, I've run a ton of servers where I have to go in and run updates on the other infrastructure that runs the server, runs the firewall, runs this, that, or the other, because it's out of date and has security issues. So there are other costs to this. I think the trade-off really comes down to: do you have somebody who, with a marginal amount of effort, can continue to maintain this, or do you have a pathway to getting this up and having it run somewhere without having to worry about all that stuff? And I think there's a place for both, honestly. But I think what it really comes down to is, yeah, what resources do you have? How much of this are you doing yourself? How much technical knowledge do you really want to have? Because I don't have to understand Ubuntu. I don't have to understand how to get Node on Ubuntu. I don't have to understand any of the rest of that stuff if I want to run it on serverless.
But sometimes the problem is so hairy that I need kind of a custom solution and the best way to run that is on a VPS. And so there's a trade-off there as well.
AJ_O’NEAL: Me personally, I run into more companies, well, and it's because they're looking to hire someone like me; for some reason they found me, so I've got that selection bias. But I just run into all these companies where it's like a team of three, and someone set them up with AWS, and they have no idea how to do anything or change anything. They want to make a setting change to a firewall or an Nginx configuration, and everything's just so layered and so deep and so complex, they can't figure out how to get something done.
CHARLES MAX_WOOD: Yeah, but that's a problem you're gonna run into in a business anyway. Fair, fair.
GARETH_MCCUMSKEY: The other side of it is the tooling for building a serverless application. Charles has hinted at it, using a tool like the Serverless Framework, but like I said in the beginning of the show, that's not the only one out there, it's just the one I know the best because I work for the company. Using a tool like the Serverless Framework, for example, will allow me to do things like: I can configure, I want an API Gateway endpoint pointing at a Lambda function, and this is the file with my code in it. And as a developer, it's far easier for me to understand that I'm creating something that has an HTTP endpoint pointing at my code. I don't need to then know about a virtual machine. I don't need to go into AWS and figure out what size of virtual machine I'm going to need based on the traffic that I might be expecting. Even if I'm a small company, that's something I'd have to worry about. I still need to know whether a T2 micro or a T3 micro or whatever is the best option for me. I then need to choose the operating system. And I might do a Google search, which can give me an answer, but it might be a bad answer, might not be the most optimal answer, but it's an answer, so I'll use whatever I get. I need to figure out whether I use Apache or Nginx or whatever else I might need. Maybe I've built this service in Python or whatever other language. I need to figure these things out as a developer if I'm running on an EC2 instance, for example. But with serverless, I need an HTTP endpoint, I have code, I run a deploy command, and that creates that in the cloud. I don't have to worry about any of that infrastructure or any of those decisions that could be potentially harmful and that I need to maintain in the future. Because you mentioned maintenance is low, and that depends, because if another Heartbleed comes out, I need to know about that. But I'm not a DevOps person necessarily.
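To make the "endpoint plus code" idea concrete, a minimal `serverless.yml` along these lines (service and function names here are illustrative, not from the show) is roughly all the framework needs to wire an HTTP endpoint to a function:

```yaml
# Illustrative serverless.yml: one function behind one HTTP endpoint.
service: hello-service

provider:
  name: aws
  runtime: nodejs18.x

functions:
  hello:
    handler: handler.hello   # handler.js exports a function named hello
    events:
      - httpApi:
          path: /hello
          method: get
```

Running `serverless deploy` then creates the Lambda function and the API Gateway endpoint without any of the VM-sizing or web-server decisions Gareth describes.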
I'm just a developer who spun up a solution. I'm not tied into the mailing lists and the forum posts about these types of threats out there. Maybe I see something vaguely on Medium or something like that. But I don't necessarily have the chops to go in there. I just spun up my first Ubuntu server the other day. I don't know what this Heartbleed stuff means. So these kinds of things become complicated, whereas if I have an endpoint and I have code, I can work that out. I can build that and deploy that.
AJ_O’NEAL: So you make a lot of good points. I do want to scope the Heartbleed thing, though. That affected almost no servers.
GARETH_MCCUMSKEY: It was, it was an example.
AJ_O’NEAL: No, it is an example. And there are catastrophic failures like that, that happen in a library that everything includes. You still do have to choose what language you're going to use. You still have to choose whether you want Node or Python. You know, you listed that in your list, but you still have to choose that. I admit it takes away a lot of the education, and I 100% agree on that. But most of the vulnerabilities that occur on an operating system are completely unrelated to your application and will have no effect. If there's a vulnerability in lib-such-and-such and your Node app isn't using that, it doesn't matter whether you know about it or not, because it's not going to affect you. When you install Node, it's just Node. Whatever else you have installed on there, unless it's running a service, which you generally would have had to set up yourself, a vulnerability in an application you're not running doesn't affect your application. So not that that doesn't exist. You have a very real point there, and I cannot argue against it, other than to say that your chances of getting hacked using WordPress are far, like 100,000 times, greater than your chances of getting hacked by having a Node or a Python server.
GARETH_MCCUMSKEY: The other side of the coin is that most of the vulnerabilities that happen in applications are in the applications themselves. You're right about that. Just to finish the last point, knowing whether a vulnerability in an operating system or my runtime is something I need to worry about means I need to have the knowledge to know whether it's something to worry about. But besides that, if I've coded an application badly in Express, I could be vulnerable to SQL injection attacks or things like that. A lot of those problems get taken away when you're building in the serverless context, because I'm no longer dealing directly with the request object, for example, from HTTP. I'm dealing with the event object given to me by API Gateway that has been sanitized and cleaned and made usable for me in my Lambda function. So I'm not dealing with that potentially dangerous payload coming from, you know, somebody who's trying to hack my system, who knows a vulnerability in the WordPress version I'm running or whatever it might be. So there's a lot of that situation as well, where those problems get taken away from you as a developer.
CHARLES MAX_WOOD: Yeah. And ultimately my experience with this is that most people care about how fast you can deliver, how much work is going to be to maintain and things like that. And I think, I think, you know, you've both articulated a lot of the trade-offs that people are going to have to consider. Now I have a question and I, I'm going to ask this because I love stories and we should have started with this and I didn't think to ask it at the beginning. But Gareth, how did you get into serverless? What was it that made you so passionate about this?
GARETH_MCCUMSKEY: So in about 2016, I got a job with a company called Expat Explore. A relatively small-ish company. They do some pretty nice volume, but they're not a Contiki; they're none of these really big names. But their entire business model was built on their online platform. You didn't go to a travel agent and book a tour with Expat Explore. You had to come to their online platform and book through them directly, for better or worse. When I arrived there, they were having some serious issues with that platform. The original system was built on WordPress 10 years before I arrived. So that's how old it was. They were growing quite nicely. They had a slow burn up until a few years before I arrived, and now they were having these issues, especially with load. They'd already gotten a DevOps team from the UK, because they couldn't find anyone locally, to put their system in place in AWS with the traditional setup: instances permanently running behind a load balancer, replicated databases and the like. The day I arrived, they launched their annual huge sale, their version of a Black Friday, where they released the new tour dates for the next year. And I sat there on my first day, I don't have much to do, I can't really assist much. I've never looked at their code. I don't know how their systems work. And everything just fell over. The entire system collapsed. The one other member of the team that I was going to be leading and I were running around hopelessly trying to find ways to solve the problems. And ultimately it was resolved two hours later when the traffic just died down, the servers could recover, and things came back to normal. That's an extreme situation, because there were a lot of problems that had to be solved in that case.
So to cut a long story short, I spent a bunch of months just trying to optimize the existing stuff, to make sure it worked as it should as much as we could. But ultimately, the decision came that we were going to look at re-architecting the system, because it was old, there were a lot of problems with it, and the code base had gone through so many changes over the years that it was an absolute mess. The poor guy who was there when I arrived had done his best to try and wrangle some kind of sense out of that code base. It was a very difficult task. I was looking around. I was a PHP developer before I started with serverless. I was used to building things with Symfony and Laravel on servers and so on. And I was looking to architect things that way. And I was asking around; I like to get information from folks about ways I can consider doing things. And somebody mentioned, go take a look at the Serverless Framework, almost in passing. And I decided, okay, let me go take a look. I went to the site. I did the sort of hello world. Saw the promise, expanded that out into something a bit more involved, and this actually looked like a really cool way to do things. Then we planned our first proof of concept. And the proof of concept we did is something I would normally recommend to folks: find that one part of your application that is important but not critical. So that when you re-architect or build it out with something like serverless as a test, if it falls over, it's not the end of the world. It's not great, you want that up, but if it falls over, it's not an absolute train smash. You're not taking out the checkout system. So in our case, it was the review system on the site, where you could see the reviews that folks had left for tours. And ultimately our proof of concept blew all our expectations away. It was highly performant. It never had issues.
We ended up having a sort of demo sale as well, later that year, to test things out load-wise, and it never blinked, never flinched. And from that point on, it just accelerated from there. We ended up re-architecting huge portions of that platform using serverless technologies. I think they wrapped up last year with the last portion of the site, if I remember correctly. But yeah, we ended up the following year running the massive annual sale like they did the year before. And every year before that, they'd also fallen over, by the way. So they'd never had a sale day that ran flawlessly. And the first sale day running on serverless, we had a small issue that affected us for 15 minutes, unfortunately. But yeah, in the end, the serverless rebuild of that system worked way better than we could have expected, to the point where we actually brought down the third-party booking service down the line; in years past, our own system would have fallen over before the traffic ever reached them. They throttled us because the volume of booking requests coming through to them was too high. That's not something we could have predicted at the time. That's where it started. From there, I've just seen serverless solve so many problems so quickly and efficiently. And I ended up being fortunate enough to be invited by Serverless Inc. to join them on the team.
CHARLES MAX_WOOD: Yeah. I was going to ask that next. How did you wind up working for Serverless Inc.?
GARETH_MCCUMSKEY: Well, folks have asked me, how do you get a job with a company that you love? And really the main way to do it is just get involved. And that's with anything. It doesn't matter whether it's in tech or not. There are other stories of folks just getting involved with a company that they think can do great things in the community and ending up getting hired. That's really what ended up happening to me. I was using the framework so much for work-related situations, I ended up getting onto the Gitter and the forums and answering questions from folks who were having the same problems I'd had at one point, and finding ways around them. Eventually, they came to me and said, you're doing this already for free. Would you want to do it for a salary? Would you like to join us permanently? That's where I am now. I've joined Serverless essentially as what they call a solutions architect, but it's a startup; everybody does lots of things. I get involved in the community a lot.
CHARLES MAX_WOOD: Awesome.
Hey folks, are you trying to figure out how to stay current with React Native? Maybe you heard that Chain React Conference was canceled and you're a little bit sad about that. Well, I borrowed their dates and I'm doing an online conference. So if you want to come and learn from the best of the best from React Native, then come do it. We have people like Christopher Chedeau from Facebook. He's gonna come and he's gonna talk to us and answer questions about the origins of React Native. We're also going to have Gant Laborde from Infinite Red and several of the panelists and past panelists from React Native Radio. So come check it out at reactnativeremoteconf.com. That's reactnativeremoteconf.com.
CHARLES MAX_WOOD: All right, well we're way past our time to start picks. So I'm gonna push us that way, but I just, I love the stories and I love the kind of the, oh, I really want to work for X company. And so yeah, you get involved to the point where they're like, who is this person? And then they go check it out and off you go. I'm going to shamelessly plug my book that talks a lot about that particular approach. Yeah, let's go ahead and do picks. Steve, do you want to start us with picks?
STEVE_EDWARDS: Certainly. So it's now summertime, and I have a nine-year-old son who is looking for things to do when he's not outside playing with his friends. And so over the weekend, I pointed him at a classic book that I'd been given a copy of when I was little: The Black Stallion. There's a whole series, but that's the classic by Walter Farley. He read the book in like two days. And so I told him that if you read the book, then we can watch the movie. So the movie came out in 1979. It was a Francis Ford Coppola movie. It had Kelly Reno as Alec, and then it's got Mickey Rooney and Teri Garr as the main actors, and then the horses. But we watched it last night, and I realized how good of a movie it was. You know, it changes the story, and it always bugs the snot out of me when movies change the plot from the books. In the movie, his father's with him on the ship as he's coming back from Africa and he drowns, whereas in the book, he's a main part of the story. But the gist of the story remains the same. What I loved about the movie so much isn't so much the plot, it's the way it was filmed, and particularly the music; it's very minimalist. If you think about a lot of movies, especially when you get to the big scenes, the buildup scenes at the end where there's a lot going on, there's always some big music score in the background, some driving music. But in this one, there's like nothing from the point where the race starts all the way through the race. You get a little bit of music at the end. And even throughout the rest of the movie, it's very minimalist music, not a lot, nothing that really overwhelms it. It really adds a lot. I think of it as sort of the Vin Scully method. Scully was a baseball broadcaster; he was the Dodgers broadcaster for 67 years. And he was well known for not talking over something happening.
And the classic is the Kirk Gibson home run in game one of the 1988 World Series, where he hits a home run that wins game one. And Scully doesn't talk for like a minute after the home run. He goes and gets a cup of coffee, as he puts it, and he doesn't talk and just lets the crowd sound carry it. So I really liked that part of this movie, that and the filming; it's beautiful. A lot of it was filmed in Sardinia, Italy, and some of it was filmed on the Oregon coast, actually a couple of small towns down there. But just a really good movie, good plot, beautifully filmed, and I really liked the music as well.
CHARLES MAX_WOOD: Awesome. Yeah, I haven't seen that movie since I was a kid.
STEVE_EDWARDS: Well, you know what's interesting about it? I forgot to mention this too. It came out in 1979 and it won two Oscars. But if you read some of the trivia on IMDB, it was actually put on hold for two years. I don't remember who it was exactly, but they said, we're not going to release this; it's some sort of child's art house film. And Coppola had to use all of his influence to get it released. And then once released, it wins two Oscars. So, you know, sort of a funny story.
CHARLES MAX_WOOD: Yeah. It reminds me of, like, J.K. Rowling shopping Harry Potter around to like 50 publishers before one that doesn't even publish kids' books finally puts it out, because the editor read it and loved it.
STEVE_EDWARDS: That same thing, you know, you talk about Lord of the Rings trilogy.
CHARLES MAX_WOOD: Yeah.
STEVE_EDWARDS: You know, he couldn't get anybody to buy that. And then he goes to, I forget who it was exactly, whoever the studio was that made it, and they said, heck, we don't see this as one movie. We see it as three movies. And look what it did.
CHARLES MAX_WOOD: Yep. All right, AJ, what are your picks?
AJ_O’NEAL: Okay, there's really only one thing that I'm going to pick. So I'm going to do another self-pick, in line with this whole, you know, thing of there not being enough education. And I see something that is relatively simple having complexity in it because there are so many different ways you can do it. So I've got this site my buddy Ryan Burnett and I have been working on called webinstall.dev. And if you first look at it, you might think it's kind of similar to Brew. It's not really the same goal, but it is kind of similar, where we're trying to create a place where you can install the best tools, tools that don't have dependencies, tools that are high quality, things like Node, Golang, Postgres, ripgrep, things that you use for development and for a server, accompanied with a cheat sheet of the things that you commonly need to know when you use that, like a quick reference. And on there, Pathman is something I created, Serviceman is something that I created. Actually, Ryan and I have both worked on both of those things. He's got some commits there too. And those are for managing your path and managing either systemd or launchd. They're cross-platform tools. And the purpose of it is we're just trying to create these tools to make a streamlined and simple way to get up and running on a server. Cause both he and I do contract work, and we both kind of run into the type of customer that, in our perception, doesn't have the level of sophistication and support to run this serverless type of stuff, and we just try to set them up on DigitalOcean, and it's kind of a set-it-and-forget-it. Maybe come in once or twice a year with a security update or something like that, as we have to do with our separate client base.
Anyway, so I just want to plug that a little bit, because I feel like we've gotten to a point where I'm kind of proud of it. It's looking pretty decent. It's functioning pretty well. We've got it pretty well tested. And if you are accustomed to Brew or apt, then you know the problem of getting up-to-date versions, and with Brew, of it screwing up file permissions sometimes and stuff like that. So this is just, like, super dumb. It just looks at a releases API, fetches a tarball, unzips it to a folder, adds it to the path. And then when you switch versions, it just changes the symlink. It's that dumb and idiotic. And we love that about it and want to find out who else likes it. So I'm just going to pick that for today. And, uh, you know, visit the site as a gift for me to say happy birthday. Today's my birthday. So check it out. Thanks.
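The mechanism AJ describes, unpacking each release into its own folder and flipping a symlink to switch versions, can be sketched in a few lines of shell (the paths and layout here are illustrative, not webinstall.dev's actual implementation):

```shell
#!/bin/sh
set -e

# Illustrative layout: one folder per version, plus a "current" symlink.
TOOL_DIR="$HOME/.local/opt/mytool"
mkdir -p "$TOOL_DIR/v1.0.0/bin" "$TOOL_DIR/v1.1.0/bin"

# In the real flow you'd fetch a release tarball and unpack it, e.g.:
#   curl -sSL "$TARBALL_URL" | tar -xz -C "$TOOL_DIR/v1.1.0"

# Switching versions is just repointing the symlink.
ln -sfn "$TOOL_DIR/v1.0.0" "$TOOL_DIR/current"   # start on v1.0.0
ln -sfn "$TOOL_DIR/v1.1.0" "$TOOL_DIR/current"   # "upgrade" to v1.1.0

readlink "$TOOL_DIR/current"
```

Putting `$TOOL_DIR/current/bin` on your `PATH` once means every version switch afterward is a single symlink change, with no package database or permissions juggling.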
GARETH_MCCUMSKEY: Happy birthday.
CHARLES MAX_WOOD: Of course, by the time you all hear this, still check it out for his birthday. Yeah. I haven't seen Ryan in forever. Say hi to him for me.
AJ_O’NEAL: Will do.
CHARLES MAX_WOOD: Dan, what are your picks?
DAN_SHAPPIR: Okay. So my pick is this. One of the things that I usually enjoy receiving is this monthly email Google Maps sends you of your timeline, because I tend to travel quite a bit, and it enables me to reminisce about the interesting places that I visited. But in recent months, it feels like the service is kind of trolling me, because when I look at the map that it sends, it essentially has a dot on my house and another dot on the supermarket where I go shopping, and that's that. So that's kind of an amusing thing. But that brings me to my second point. As I said, I do enjoy traveling, and we usually travel during the summer. And of course, we bought plane tickets to places, my wife and I and our kids, and obviously this all got canceled because of coronavirus. And it raises an annoyance that I have with the various airline companies. Now, I gather that they're in trouble, that they are finding it very, very difficult to cope. And I appreciate that they can't refund everybody, especially when people bought non-refundable tickets. But having said all that, what really annoys me about them is how they're not upfront about stuff. Like, you go onto their websites and it feels like they're literally trying to give you the runaround. I mean, I would be much more appreciative of them telling me upfront, we're not going to do anything for you, rather than forcing me to browse through various pages that send me in circles where I can never figure out what I have to click. Or I send an email and they say they'll respond within a week, but then they don't. And when they do, they tell me to call their number. But when I call their number, they tell me that their lines are overloaded and just disconnect me, regardless of when I call. So if you don't want to reimburse me, if you don't want to give me a voucher or anything, just say so.
Don't give me the runaround. I don't think I can call this a pick, but that's my rant.
CHARLES MAX_WOOD: We like rant picks.
AJ_O’NEAL: Huzzah.
CHARLES MAX_WOOD: I could pile on, but we're out of time. Not on the airlines; I think I anti-picked my phone company the other day, or last time. Anyway, I'm gonna throw out a few picks of my own. So we do have a few more conferences coming up. React Native Remote Conf is at the end of July. React Remote Conf is at the end of August. Angular Remote Conf is in the beginning-to-middle of September. And Vue Remote Conf is going to be in October. So if you're looking for some conferences that you can attend that you won't get sick at, then definitely check those out. And yeah, I've been working on that. I also started a YouTube channel for PodWrench, which is a tool that I've been working on to help manage the podcasts. So yeah, definitely check that out. I'll put a link to the YouTube channel. My goal is to put up... And I missed yesterday, so now I'm kicking myself. Maybe I didn't. I'll have to check. Anyway, I've been doing a live stream on YouTube every day, and my goal is to have one up for all 365 days in a row. Yesterday just got weird. You can go check it out. I've got probably like eight or nine videos there. Most of them are just of me talking at my camera. But yeah, anyway, check that out, I guess. And I think I picked my Podcast Playbook before, and those videos are gonna wind up on there. And I'm working on a few other things there. So lots of stuff to check out, so you can find all that stuff. Gareth, what are your picks?
GARETH_MCCUMSKEY: So I'm gonna pick one that's, like, bang on topic first. I hope I can do two. The first one is serverless.com. We actually have a course that we've developed internally, with yours truly, basically, as the host of it. Obviously, we're going to put the link in the show notes. It's a great way to get started with learning how to build a serverless application, not the lift-and-shift model that we were talking about before, but how to create Lambda functions that execute code connected to API Gateway, blah, blah, blah. A really great intro to that. I'm still finishing the entire course. There are still episodes that need to be produced. Those are being done over time, but there's more than enough content there for somebody to get their teeth stuck in and get started with some serverless development stuff. So it's at serverless.com, and the link will probably be in the show notes. The other one is completely off-topic from what we've been talking about. First of all, I'm a bit of a Linux nut. I love running Linux operating systems on my desktop. And recently, I got my hands on a System76 Oryx Pro laptop. So if there is anybody out there that is looking for a laptop that works with Linux operating systems, where you don't have to worry about some odd Wi-Fi card that hasn't got drivers in the kernel or anything like that, System76 is a fantastic company that lets you order a machine from them. And you don't just have a couple of SKUs to pick from; you can completely customize the machine that you want to the nth degree. My machine, for example, has got an upgraded amount of RAM, up to 32 gigs, because I do some video editing. So it really works incredibly well. It has a GPU built in. It's an absolutely amazing machine. And it's nowhere near as expensive as most machines with that amount of hardware in it.
So I'd highly recommend anybody interested in Linux machines to go to system76.com. And I guess we'll drop the link for that as well. I was just listening to another podcast yesterday. Scott Tolinski at Syntax FM, he did the same thing, he got the exact same machine. He did a whole podcast about how he had bought a System76 Linux laptop from them. And he upped, you know, all the specs on everything, and it came out to about two grand, but he was talking about how nice it is. And they're out of Denver, Colorado, here in the States. Yeah, it was quite interesting to listen to him talk about his laptop. And I'm based out of Cape Town in South Africa, and they ship internationally pretty much anywhere. So that's pretty nice too.
CHARLES MAX_WOOD: That's nice. I'll have to check that out. All right, well, Gareth, if people want to connect with you or learn more about Serverless Framework, where do they go?
GARETH_MCCUMSKEY: Well, learning about the Serverless Framework is easy. Our CEO and founder was able to grab serverless.com as the domain back in 2015; that was pretty handy. So that's the first stopping point for the Serverless Framework. And anybody who wants to have a chat with me, I'm on Twitter at @garethmcc. I love talking about this stuff. I could talk all day. So if you want to bend my ear, if you want to ask questions, if you have feedback, whatever it is, feel free, don't hesitate, just send me a message.
CHARLES MAX_WOOD: Awesome. All right, folks, we'll go ahead and wrap up. Until next time, Max out.
AJ_O’NEAL: Max out!
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. To deliver your content fast with CacheFly, visit C-A-C-H-E-F-L-Y.com to learn more.