JSJ 417: Serverless with Microsoft Azure with Burke Holland

Burke Holland works for Microsoft on the Azure team in developer relations. He starts the show talking about how he got started in serverless. He’s careful to note that just because things are marketed as serverless doesn’t always make them so. In order for something to be serverless, it must be sufficiently abstracted in terms of technology, only require payment for what is used, and infinitely scalable. He talks about the statelessness of serverless, and the panel discusses what it means to be stateless. Burke reminds listeners that serverless is not for long-lived operations, but there are features in serverless providers that can help you get around this. Burke talks about how writing serverless code differs from standard or previous coding approaches and practices. He advises that serverless functions are best kept small, and talks about how to fit them in with other kinds of APIs.

Special Guest: Burke Holland

Show Notes

The panelists talk about multi-cloud and why people would want to be on multiple cloud providers. Burke talks about what Microsoft has done with the Serverless Framework to accomplish multi-cloud compatibility. The JavaScript experts discuss the advantages and disadvantages of picking JavaScript over other languages, and Burke talks about why he prefers TypeScript, in part because its types are optional. They talk about speed on a serverless platform, especially concerning the cold start time, which Azure is relentlessly trying to lower. He talks about some things that can be done to decrease load time and about premium functions. The panel discusses how to debug serverless functions and tools that are available, such as the Azure Functions extension. 
They talk about ways to set up more secure functions to keep things from racking up charges. Burke talks about some things Microsoft does internally to control cloud costs, such as sending monthly reports with reminders to delete and using tools like Azure Reaper to delete short-lived projects. Azure can also put spending caps on subscriptions, but when you hit that cap you can’t serve any more requests. Burke concludes by saying that most of the time, going serverless is a lower-cost way to improve productivity, and because it’s event-driven, it allows you to tie into things that you’re already doing in the cloud. Serverless almost always justifies itself from an ease of use point of view and a cost point of view. 
Panelists
  • Aimee Knight
  • Steve Edwards
  • Dan Shappir
  • AJ O’Neal
  • Charles Max Wood
Guest
  • Burke Holland
Sponsors
____________________________
> "The MaxCoders Guide to Finding Your Dream Developer Job" by Charles Max Wood is now available on Amazon. Get Your Copy Today!
____________________________________________________________
Links
Picks
Steve Edwards:
Burke Holland:
Dan Shappir:
  • Taking a vacation
AJ O’Neal:
Charles Max Wood:
Special Guest: Burke Holland.

Transcript


Hey folks, I'm a super busy guy and you probably are too. You probably have a lot going on with kids going back to school, maybe some new projects at work. You've got open source stuff you're doing or a blog or a podcast or who knows what else, right? But you've got stuff going on and if you've got a lot of stuff going on, it's really hard to do the things that you need to do in order to stay healthy. And one of those things, at least for me, is eating healthy. So when I'm in the middle of a project or I just got off a call with a client or something like that, a lot of times I'm running downstairs, seeing what I can find that's easy to make in a minute or two, and then running back upstairs. And so sometimes that turns out to be popcorn or crackers or something little. Or if not that, then something that at least isn't all that healthy for me to eat. Uh, the other issue I have is that I've been eating keto for my diabetes and it really makes a major difference for me as far as my ability to feel good if I'm eating well versus eating stuff that I shouldn't eat. And so I was looking around to try and find something that would work out for me and I found these Factor meals. Now Factor is great because A, they're healthy. They actually had a keto line that I could get for my stuff and that made a major difference for me because all I had to do was pick it up, put it in the microwave for a couple of minutes and it was done. They're fresh and never frozen. They do send it to you in a cold pack. It's awesome. They also have a gourmet plus option that's cooked by chefs and it's got all the good stuff like broccolini, truffle butter, asparagus, so good. And, uh, you know, you can get lunch, you can get dinner. Uh, they have options that are high calorie, low calorie, um, protein plus meals with 30 grams or more of protein. Anyway, they've got all kinds of options. 
So you can round that out, you can get snacks like apple cinnamon pancakes or butter and cheddar egg bites, potato, bacon and egg, breakfast skillet. You know, obviously if I'm eating keto, I don't do all of that stuff. They have smoothies, they have shakes, they have juices. Anyway, they've got all kinds of stuff and it is all healthy and like I said, it's never frozen. So anyway, I ate them, I loved them, tasted great. And like I said, you can get them cooked. It says two minutes on the package. I found that it took it about three minutes for mine to cook, but three minutes is fast and easy and then I can get back to writing code. So if you want to go check out Factor, go check it out at factormeals. Head to factormeals.com slash JSJabber50 and use the code JSJabber50 to get 50% off. That's code JSJabber50 at factormeals.com slash JSJabber50 to get 50% off.


 

CHARLES MAX_WOOD: Hey everybody and welcome to another episode of JavaScript Jabber. This week on our panel we have Aimee Knight. 

AIMEE_KNIGHT: Hey hey from Nashville. 

CHARLES MAX_WOOD: Steve Edwards, whose mute button works. 

STEVE_EDWARDS: I'm here, I'm here. Yes, hello from Portland again. 

CHARLES MAX_WOOD: Dan Shapir. 

DAN_SHAPPIR: Hi from Tel Aviv where it's started to rain, so apparently it's finally winter. 

CHARLES MAX_WOOD: AJ O’Neal.

AJ_O’NEAL: Yo yo yo, coming at you live from the pleasant groovy summer of Utah. 

CHARLES MAX_WOOD: Summer? It is cold out there AJ. I'm Charles Max Wood from DevChat.tv and my book, The Max Coder's Guide to Finding Your Dream Developer Job, just came out on paperback. So go get a copy on Amazon. This week we have a special guest and that's Burke Holland. 

BURKE_HOLLAND: What's up? Also from Nashville, Tennessee, by the way, Amy and I are, yeah, I'm just across the street and I can see her house from here actually. It's amazing. Nashville is a small town. We all live in the same like one block radius. 

CHARLES MAX_WOOD: Yeah. 

 

When I'm building a new product, G2i is the company that I call on to help me find a developer who could build the first version. G2i is a hiring platform run by engineers that matches you with React, React Native, GraphQL and mobile engineers who you can trust. Whether you are a new company building your first product or an established company that wants additional engineering help, G2i has the talent you need to accomplish your goals. Go to devchat.tv slash G2i to learn more about what G2i has to offer. In my experience, G2i has linked up with experienced developers that can fit my budget, and the G2i staff are friendly and easy to work with. They know how product development works and can help you find the perfect engineer for your stack. Go to devchat.tv slash G2i to learn more about G2i.

 

CHARLES MAX_WOOD: Burke, do you want to introduce yourself? I know you've been on the show before, but it's been a while. 

BURKE_HOLLAND: Sure. I am Burke and I work at Microsoft like a hundred thousand other people and I work on the Azure team and specifically I work in developer relations. So my job is to make sure that the JavaScript experience in Azure is just delightful and that is a never-ending pursuit and that's what I do on a daily basis. 

CHARLES MAX_WOOD: Nice. We were talking before the show and it sounds like you've been doing a lot of stuff with serverless lately. So why don't we dive into that and just, I guess we should start out with a definition and maybe an elevator pitch on what it is. 

BURKE_HOLLAND: Yeah, that's probably a great place to start. I remember when I first started at Microsoft, I was coming from a very front-end place and trying to learn about cloud technologies, and serverless at the time was a very popular phrase. And then it turned into a buzzword, in so much as it didn't really have any meaning anymore. It was really hard to pin down what it was, and every time I would ask someone, I would get a different definition. I worked with a guy who came from AWS and he had a very specific definition: if it's not this, then it's not serverless. And then you'd ask somebody else and they would completely disagree with that. So I found it super frustrating, because it seemed like everything was serverless and nothing was serverless all at the same time. So when we talk about serverless, the way that I've come to think about it is that we usually think about it in terms of some sort of product that's positioned as serverless. So that would be something like Azure Functions or AWS Lambda or GCP's Cloud Functions, I think, is their serverless platform. These are things where the product itself is sold to you as serverless. It's literally on the webpage and in the marketing copy, and that's how you know it's serverless: because it says that it is. But these things being marketed as serverless doesn't inherently make them serverless. What makes something serverless is three things. The first is that it has to be sufficiently abstracted in terms of technology. This is where the word serverless comes from, this idea that you don't know anything about the underlying technology where your code is running. You don't know what the operating system is. Usually you don't know what sort of hardware you're on. You don't know how many resources you have allocated in terms of CPUs or memory. All of that is abstracted away from you. 
You don't even know, in the case of something like Azure Functions, the actual runtime that is serving up your code. So for instance, if you create a serverless function that is accessible by an HTTP endpoint, you just write the code that returns the result; you don't even know what the web server portion looks like. And that's one of those things where it isn't until you use it that you realize, oh yeah, when I'm wiring up a web server, that isn't really part of solving the problem. That's just something that I have to do to solve the problem. So that's the first one, sufficient abstraction. The second one is that you only pay for what you use. The way that I usually explain this to people is with the water in your house. You have a faucet; you turn the water on and off. We could deliver water to people's houses where it just ran all the time, and if you ever needed water, you could just put a glass under it, take it away, and leave the water running. That would work, but we don't do that, because it would be horribly wasteful. But we do it with computing all the time. We requisition servers or platform as a service and say, I want this many cores and this many gigs of RAM, and then it just sits there idle. That's compute that you're paying for and not using, but that's how we use the cloud day in and day out. It's very wasteful. So the idea with serverless is that you only pay while your code is executing. You only pay for the resources that you use, what they call execution time. And then in Azure, you pay a small cost per execution and per how much room it takes to store your code, which is very small. And so that's the idea. 
And then the third one is that it is infinitely scalable, meaning that if your traffic spikes, say you have retail services running serverlessly and on Black Friday everything spikes and you have massive load, then serverless should be able to handle that for you without you doing anything at all. You don't need to adjust the amount of resources that you have, you don't need to add servers; none of that is your problem. It's completely elastic and it scales up. And then of course it has to scale back down too, all the way back down to zero when nobody's using it anymore the day after. So those are the three things that generally make any product or technology serverless: sufficient abstraction, pay for what you use, and infinitely scalable.
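The pay-for-what-you-use model is easy to sanity-check with a back-of-envelope calculation. A minimal sketch in Node; the two rates are assumptions for illustration only (roughly the shape of consumption-plan pricing: a per-GB-second compute rate plus a per-million-executions rate), not quoted prices:

```javascript
// Rough serverless consumption cost model. Rates are illustrative
// assumptions, not authoritative pricing.
const PER_GB_SECOND = 0.000016;
const PER_MILLION_EXECUTIONS = 0.2;

function monthlyCost({ executions, avgDurationMs, memoryGb }) {
  // Billed compute is (duration in seconds) x (memory in GB), summed over calls.
  const gbSeconds = executions * (avgDurationMs / 1000) * memoryGb;
  return gbSeconds * PER_GB_SECOND + (executions / 1e6) * PER_MILLION_EXECUTIONS;
}

// Example: 3 million requests a month, 200 ms each, 128 MB allocated.
const cost = monthlyCost({ executions: 3e6, avgDurationMs: 200, memoryGb: 0.125 });
console.log(cost.toFixed(2)); // "1.80"
```

The point of the exercise is the contrast with an always-on VM: the idle hours simply never appear in the bill.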

DAN_SHAPPIR: Isn't there also something about the statefulness or statelessness of serverless? 

BURKE_HOLLAND: Yes, there is some detail in there. Once you get into serverless, what you find out is that your functions are, what's the word, atomic, in the sense that when they run, they're alive and then they're gone again, and they don't have any state. So you have to store your state somewhere else and retrieve it from there. The other thing about serverless is that, generally speaking, it's not for long-lived operations. Azure Functions will time you out at 15 minutes, I think. So if your function runs for 15 minutes, it gets killed. But there are features offered by all serverless providers that let you get around this. In Azure, they have something called durable functions, where you can actually have a process that runs for up to seven days. It's just a different kind of serverless function. I think as the tech evolves, we're figuring out ways to, one, handle state, and two, handle long-running operations. 
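The statelessness point can be made concrete: because each invocation is born and dies, any state has to round-trip through external storage. A minimal sketch, where the in-memory `store` is just a stand-in for something like Azure Table Storage, Cosmos DB, or Redis:

```javascript
// Stand-in for external storage; in a real function app this would be
// a call out to Table Storage, Cosmos DB, Redis, etc.
const store = new Map();

// Each call is modeled as an independent invocation: it loads its state,
// mutates it, and writes it back before the function instance disappears.
function handleVisit(userId) {
  const visits = store.get(userId) ?? 0; // nothing survives between calls
  store.set(userId, visits + 1);         // persist before returning
  return { userId, visits: visits + 1 };
}

console.log(handleVisit("dan")); // { userId: 'dan', visits: 1 }
console.log(handleVisit("dan")); // { userId: 'dan', visits: 2 }
```

The function itself keeps no state between the two calls; the counter only survives because it lives in the (external) store.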

DAN_SHAPPIR: So you don't actually consider the, let's call it attitude towards state as part of the serverless definition? 

BURKE_HOLLAND: I do not. I don't because I think that there's a lot of things that can be stateful or stateless rather. For instance, if you have a single page application that's running from some sort of a hosting provider, that thing is by definition stateless. There is no session state. There is nothing. There is only what you track. And this is what makes things serverless as well. Well, I don't know then if I said it that way, then maybe you're on to something because we could consider something like Azure Storage also serverless because you can use it to serve up a static site, like a React or a Vue or an Angular site, and you don't know anything about the web server. You only pay for what you use, and it scales infinitely, and it's stateless as well. So I don't know, perhaps I need to add that one in there. But look, you can't have four things. You can only have things in sets of three, otherwise nobody wants to hear about it anymore. Three's the magic number. 

AJ_O’NEAL: So machine learning is really hot. And in machine learning, you're often managing large sets of data and you don't necessarily need it all in RAM. Like you might have a five gigabyte data set, but you don't need it in RAM; therefore you need virtual memory. But most serverless providers don't allow you to have virtual memory. They want you to pay for, you know, the six or eight gigs of RAM to load that five gig data set, even though you only have like 256 megs at a time that's actually being processed by your Python process, your R process, or whatever it is. Do you know of any tricks around that? 

BURKE_HOLLAND: No, that's a good question. I don't know what the answer to that is. I don't do much with machine learning. But that's something that somebody like a Seth Juarez over at Microsoft could answer for us if he was on the call. I know that we do have serverless and Azure Functions Python workloads. So it is something that they're looking at, but I'm not exactly sure how they solve that problem. 

DAN_SHAPPIR: So when I'm writing code to run in a serverless environment, obviously there are at least two sides to that story. There's the code that's running inside the serverless function itself, and there's the consumer that provides the input and receives the output from the serverless function. So if I'm writing code for either end, how is that different, if it's different at all, from standard or previous coding approaches and practices? 

BURKE_HOLLAND: Yeah, that's a great question. I can only speak to Azure Functions. And Azure Functions, and we'll talk about this from a JavaScript point of view. That's the show, right? JavaScript? 

CHARLES MAX_WOOD: Yep, yeah, that's the idea. 

BURKE_HOLLAND: Okay, just making sure I wasn't on some other show. So, from a JavaScript or a TypeScript point of view, when you are writing your serverless code, you are writing a single function that gets exported from a file. So you have an index.js file in a folder. Let's say you're writing an HTTP trigger endpoint called get products that returns a list of products from a database. You would have a folder called get products, and inside of that you would have an index.js or .ts. From that file, a single function is exported, and that function is what actually gets executed. That function takes in two objects. It takes in, one, the context, which contains all of the contextual information about the function call. So if you had any bindings, what we call bindings, for your function, say you had bound to a storage queue or something like that, that would be on the context. And then the second thing you get is the request object, which contains the query string parameters, all of the request information. So you get those two things into the function, and once you have those, you can do whatever you like. It's just like coding a regular Node application, right? You'd import your SDKs at the top, import any shared libraries, perform your operations, and then you would return on the context. You set the body on the context object to whatever data it is that you want to send back to the browser. Now, that's an HTTP trigger example, which is sort of the most naive example of doing serverless. 
Usually when people start with serverless, the first thing they do is try to write an HTTP endpoint, because the on-ramp for building APIs in serverless is so fast. In other words, it's so fast to build HTTP APIs with serverless that it makes it really, really attractive to use for building web applications. And that's part of where we get the term serverless web apps: people are using serverless platforms to build HTTP APIs, and then they're consuming those APIs from a front end which is being served off of some static hosting provider, be that Azure Storage or an AWS S3 bucket; Netlify is another one. When you combine those two things, the front end, which is a static site, and the serverless HTTP APIs, you essentially have a serverless web app. 

DAN_SHAPPIR: A question about that, actually a few. So first question is, let's say I have, you know, I'm creating some sort of a restful API, for example, and I'm implementing several bits of functionality through it. So they're related. You know, let's say I can get the list of products and I can get the list of customers and I can get the categories and whatever. So there's a bunch of functions. Would I usually implement all of them within the same serverless function, or would I create a separate serverless function for each one of them? What, what would be the typical architecture?

BURKE_HOLLAND: Oh, that's a great question. Let me think about the answer to that, because I've seen this stated both ways. I think the overall strategy here, the technical guidance, is to keep your function projects as small as possible. In other words, you don't want a giant monolithic API inside of a single functions project. You would want to split that up in the true spirit of microservices and apply that to serverless as well. Again, though, I don't want to provide overarching guidance here as if it applies to everyone all the time. Much like everything else in programming, I think the answer is that it depends on what works for you and your team and how best to split it up. I'm just going to say, if you create a bunch of function projects and you find yourself thinking this is unmanageable, then you should probably have fewer function projects. If you have one function project with your entire API in it, with like 100 functions, then you probably need to split that out so it's more manageable. To each their own. 

DAN_SHAPPIR: So my other question is, in the previous couple of episodes, we've been talking a lot about different APIs and different ways to standardize and document APIs. We had a show about OpenAPI and Swagger and a couple of shows about GraphQL. You've been talking about serverless as an endpoint for RESTful functions. How does this tie in with the other various technologies that are related to APIs and ways of defining and declaring APIs? 

BURKE_HOLLAND: Right. Well, it does in the same sense as anything else that you would build does. Let's just take GraphQL as an example. I think the recommended way to implement a GraphQL API is to put it on top of your existing REST API. I believe that is what Apollo recommends; I'm fairly sure that's the architecture that's recommended. And you would do that in Azure Functions the same way you would in any standard Node project, in so much as you'd build out a function endpoint that has a graph definition, and then that would internally call some other API. So it's just another layer. Really, GraphQL is a layer of discovery, if you will. So that would be GraphQL. On the question of service discovery and all those things, there's a couple of different schools of thought. One of them is that you implement something like Swagger, although I get the feeling that things like Swagger are being superseded by technologies like GraphQL because they solve similar problems, and y'all can disagree with me on that if you would like. On the second one, about managing the API and service discovery, there's actually a new school of thought, which is that you should not do that in the same place where you build your APIs. The reason why is that your APIs are alive: they're constantly moving and changing and growing, the naming changes, and it ends up being fairly difficult to manage, because you're incurring technical debt and you can't take these things offline. So what a lot of people have found is that it's easier to put a layer on top of this, an API management service of some sort. The API management service allows you to construct, on top of your existing API layer, some sort of logical interface for your services. You can rename them, you can change the endpoints, you can do all sorts of things. 
And in Azure, there's a service actually called API Management, and you can also do service discovery with this. It's a very visual tool, the way that it works, but it's not implemented in code; it's implemented in a separate service that allows you to abstract that API management logic out of your code and handle it in its own service. And that seems to be easier, because it doesn't require you to architect it into your code base. 
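The GraphQL-on-top-of-REST layering Burke mentions can be sketched without committing to any particular framework. Here the resolvers simply delegate to an existing REST API; `fetchJson` is injected so the underlying HTTP client can be swapped or mocked, and the endpoint paths are made up for illustration:

```javascript
// A thin GraphQL-style resolver map that wraps an existing REST API.
// The endpoint paths are illustrative, not from a real service.
function makeResolvers(fetchJson) {
  return {
    Query: {
      products: () => fetchJson("/api/products"),
      product: (args) => fetchJson(`/api/products/${args.id}`),
    },
  };
}

// In a real app these resolvers would plug into a GraphQL server
// (e.g. Apollo); here we just exercise them against a fake REST layer.
const fakeRest = async (url) =>
  url === "/api/products" ? [{ id: 1 }, { id: 2 }] : { id: url.split("/").pop() };

const resolvers = makeResolvers(fakeRest);
```

The GraphQL layer stays a pure pass-through, which is what makes it reasonable to bolt onto an API you already have.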

STEVE_EDWARDS: Is that something like Kong? Because I've heard about that, and I've heard the guy who does that talk. Is that sort of what you're talking about, or is that something different? 

BURKE_HOLLAND: It may be. I haven't heard of that one. There's a lot of different projects that are popping up to do this. But I'm not. I'm not familiar with that specific one. 

STEVE_EDWARDS: Yeah. The way they advertise themselves is as the next generation API platform for multi-cloud and hybrid organizations. I've heard the guy talk on other podcasts before, and that sounds like what you're describing. 

BURKE_HOLLAND: Yeah. The multi-cloud is super interesting. We've actually seen some interest in this as well. We had a customer and they wanted to build out this API, but they wanted it to be multi-cloud. We hear this from time to time, that people don't just want to be on Azure, they also want to be on a different cloud provider. And there's a lot of different reasons for that. A lot of it is risk mitigation, just in the event that a data center does go down for some reason. 

AJ_O’NEAL: happens all the time. 

STEVE_EDWARDS: Absolutely. I've seen it happen more than once. 

BURKE_HOLLAND: Right, it does happen. And in their case, they actually were on a different provider and they just got bit: an SSL cert for the data center went out of date and the whole thing went down. It seems silly, but it does happen. And so what organizations want is this ability to target multiple cloud providers. The problem with serverless, and you may have heard this said before, is that it's been called the worst form of vendor lock-in, and the reason for that is that you are coding to a programming model that is proprietary to the platform that you pick. So in Azure Functions, when you code, we have a programming model that you code to, and you can't just lift that code up and put it somewhere else. You need our runtime to run it. This is a problem. So we've done a lot of work in two different areas. One is with the folks on the Serverless Framework. You may have heard of the Serverless Framework. What it allows you to do is code to the Serverless Framework's programming model, and then you deploy that to Azure, to AWS, to GCP, and your code works on all three. So it's an abstraction on the vendor programming model. 

DAN_SHAPPIR: So it's kind of like jQuery for the cloud. 

BURKE_HOLLAND: Ah, yeah, kind of. That's kind of a crazy analogy, but I like it. Let's go with it. So, jQuery for the cloud. You're coding to this programming model and it looks a lot like Express. It's like app.use, app.get; those are the kinds of APIs that are in it. So it feels very natural if you've done any Express development before. And then the other thing that we've done is we have an open source project that allows you to build and target multi-cloud from your code base. Basically, you build the project, and then when you go to deploy it, you can deploy it to multiple clouds. And then we'll even stand up load balancers for you so that you can toggle your workloads between clouds as necessary, right? So you can flip between AWS and Azure as you need to. I don't know the name of that open-source project at the moment. We'll get it for the show notes, though. 
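For reference, a Serverless Framework project targeting Azure typically centers on a `serverless.yml` like the sketch below, based on the serverless-azure-functions plugin; the service name, paths, and exact keys are illustrative and vary by plugin version:

```yaml
# serverless.yml — illustrative; exact keys depend on plugin version.
service: products-api

provider:
  name: azure                 # switch providers here (aws, google, ...)
  region: West US 2
  runtime: nodejs12

plugins:
  - serverless-azure-functions

functions:
  getProducts:
    handler: src/handlers/getProducts.handler
    events:
      - http: true
        x-azure-settings:
          methods: [GET]
          authLevel: anonymous
```

The handler code itself is written against the framework's Express-like model, and the provider block is the main thing that changes between clouds.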

DAN_SHAPPIR: So obviously we are in a JavaScript podcast and obviously you can use JavaScript to code for serverless functions. Other than familiarity, is there any advantage or is there any downside in picking JavaScript over the other supported languages, programming languages? 

BURKE_HOLLAND: Oh man, that's almost a loaded question. 

CHARLES MAX_WOOD: If you give the wrong answer, we'll just switch over to Adventures in .NET or something. 

BURKE_HOLLAND: Okay, gotcha. There is always going to be a contingent of people that feel like JavaScript really doesn't belong in structured server-side programming. I happen to disagree with that, but I respect the opinion. I do find myself, though, gravitating more towards TypeScript when I build JavaScript on the server. And the simple reason for that is that the tooling provided by statically typed languages is just superior. It just is. When the compiler can inspect the object that you're using and tell you the methods that are on it, that is just easier than you having to guess or go to the docs, which is what JavaScript developers have done for years. I remember doing jQuery development and I just had the docs open all the time, because you're just on your own. And then you also get the real-time checking. And TypeScript works great in the browser, and it works fantastically in Node, because, first of all, if you're using Azure Functions and VS Code, it's transparent to you. You just write TypeScript and all of the build and compile just happens. VS Code is really good about that. Of course, Microsoft makes TypeScript, so it just works with the stuff that we make. But that is a huge benefit, in that you can just start writing. The second benefit it gives you over standard JavaScript is that you have all of the latest constructs. For instance, we still don't have a standard module system in Node, so you can't do import this object and that object from this package in Node; that won't work. You have to use a require statement. But if you're using TypeScript, you can import like you would in the latest, most cutting-edge versions of JavaScript, and it works. And of course, all of that gets transpiled to code that will run in a Node environment that does not recognize those constructs. 
So I think for JavaScript developers at least, TypeScript is almost too good an opportunity to pass up. The thing about TypeScript is that you don't have to use the types. Frequently what I'll use it for is just the fact that I can write modern JavaScript without having to wire up any sort of transpiler whatsoever. Now, there's a build step, but I'm not privy to it. VS Code is doing it for me. So I can use imports, I can use async/await, I can use all of that stuff and it just sort of works. So a lot of times I treat TypeScript as if it were just a transpiler, like a Babel or something like that. And I'll just forego the types completely. So I get the tooling and I get the modern aspects without having to fool with things like interfaces. JavaScript developers usually balk at that stuff and I don't blame them. I do too. I'm like, eh, do we need those? I'm not convinced. 

DAN_SHAPPIR: But when I'm writing code for the backend, and serverless is part of the backend, can't I just automatically assume the latest version of JavaScript? I don't know how it works in Azure Functions, but even if you can select which version of Node you're kind of simulating, don't I by default just run on the latest version of Node or something like that? 

BURKE_HOLLAND: Well, you run on whatever version your function is created on or currently set to. So in Azure Functions, I'm not sure what the version we're up to is. I'd have to go and check, but it's a setting in your project, right? So you can sort of upgrade, downgrade. But I would say, so yes, but that doesn't mitigate the fact that there are simply constructs that are not in Node that are in standard JavaScript. And the module system is the most glaring one to me, the fact that in a Node project today, you require this whole module into this whole variable. That's your only option. 

AJ_O’NEAL: I think that in the latest Node, you just name your file .mjs instead of .js, and then I think that does work now. 

BURKE_HOLLAND: Does it? Is this Node 12? 

AJ_O’NEAL: I think so. We had someone on the show earlier talking about this, like a month or two ago. So I think that that has landed. But also, I mean, I don't see the argument for the import. To me, it seems like it makes the parser more complicated. It breaks the language. And I don't actually know where the benefit is, because I use require. I never had a problem with it. 

BURKE_HOLLAND: Well, I mean, I guess it could be a semantic argument, but my opinion on this is definitely that being able to destructure your imports just makes your code cleaner and easier to read. The additional thing with TypeScript is that you get IntelliSense on the import. So if you have a package and you're like, I don't know what's in this thing, you can open curly braces and hit Ctrl+Space and it will show you all of the things that are exported out of that package that you can pick from. 

AJ_O’NEAL: Can it not do that with requires? 

BURKE_HOLLAND: I don't know. Maybe it can, possibly it could. Yeah. I don't know, man. I mean, for me, and this is not a case against require, but for me, it always feels like I'm reaching back into the past when I type require statements. You know, import is the new, require is the old; import is moving forward. You're one of the first people that I've heard say, what's wrong with require? It works just fine. It does. I don't know. Maybe I need to formulate a better argument for why we did imports, and require is not good enough. 

DAN_SHAPPIR: Look, first of all, I would say it this way, although we're kind of veering away from the main topic of the discussion: whenever there's something that's built into the language versus something that's implemented on top of the language, I prefer to use the thing that is built into the language. And like it or not, going forward, imports and JavaScript modules are the way to go. I mean, you know, stuff like AMD is dead and gone effectively. And I do think that you get more semantic binding built into the language itself, even into JavaScript, not just TypeScript, when you're using stuff like import, and you have better control over static imports versus dynamic imports and stuff like that than you would with simple require statements. But again, going back to my original point, I love the use of TypeScript in the context of getting a .d.ts file for any API that I need to consume. So like you said, if I'm, let's say, using VS Code and it smartly auto-completes for me, that's awesome. Like you, I'm not such a big fan of writing all those type specifications myself. It feels kind of like a lot of effort to write a lot of code that eventually gets compiled into nothing. So I don't really enjoy writing it. And sometimes I even feel like putting in all of these type specifications kind of limits what I do with the language. I gather some people might claim that's a good thing, but that's just how it feels to me. But again, be that as it may, I really love the fact that when you actually import a standardized API, like the one that you're providing into the functions in Azure, then you actually get those .d.ts files that provide you with all the auto-complete, and indeed you don't have to look at the documentation all the time. 
Just an example of where these types of things kind of don't work anymore is indeed, when you give that example, something like jQuery, because when jQuery was implemented, you know, types were not in mind. Everything essentially hangs on everything. Like you put the dollar wrapper around something and, you know, it's an array and it could have any attribute on it or any method on it. So actually having a type declaration in that case is not as helpful as it might have been. But again, going back to that question of which programming language to use in a serverless environment. So you gave the arguments, or the benefits, of using something like TypeScript over plain vanilla JavaScript, but what about comparing those to something which is not JavaScript at all? Like, I don't know, Ruby or Python or whatever. 

BURKE_HOLLAND: Oh yeah, that would just be your personal preference. So I think there was a lot of really good stuff in what you just said. So let me back up and we'll move into why other languages, but on the TypeScript one, I think again, it's personal preference. Nobody is saying, you know, you shouldn't write JavaScript, you should only write TypeScript. I think it's just that a lot of people really enjoy, myself included, the benefits that the language gives you. I think the latest stats I saw are that something like 60% of JavaScript developers are now using TypeScript as well. It's obviously filling a need that's out there. And I think that when you say that you would rather code to the underlying thing and not code to the abstraction, I understand the sentiment in that, and I think that that's good, but the fact of the matter is that we are all coding to abstractions now. Like, that ship has sailed. It doesn't matter if you're doing React or Vue or Angular. I mean, we are coding so abstracted now that that's just sort of the world that we live in as JavaScript developers. But, you know, in Azure Functions, you can choose TypeScript or JavaScript, both are first-class citizens. The other thing that you said about the other languages, again, I think this would veer into the territory of which language do you want to use. Python people like their significant whitespace, I do not. And so they would want to use that, and I would want to use the language of my choice. I think in the case of certain things like C#, of course Microsoft makes C# and .NET, so there are a lot of very nice things that come in that SDK if you choose that for Azure Functions. So just as an example, Azure Functions has a built-in feature called Easy Auth where you can authenticate people using an OAuth 2.0, I think, flow. And it basically handles everything for you, right? 
You call an API, you call an endpoint, it verifies you with Twitter, Facebook, or Active Directory, or whatever, and then sets the session cookie on your session. You don't ever have to do anything about it. And if you're using something like C#, you can just pass in this user context object into your function and just look at it. And it'll tell you if the person's logged in, who they are, what roles they have, right? And that's just part of the C# SDK. And it just comes along for the ride because Microsoft makes the language, makes Azure Functions, and therefore we can make those things really good together. With something like JavaScript, though, you are gonna have to wire that stuff up yourself or use Passport, but you're still going to be doing some wiring. So there are some advantages in choosing a language like C# for doing Azure Functions. 

DAN_SHAPPIR: How about processing speed? 

BURKE_HOLLAND: Oh man, I don't know. Now that's one I can't answer. Speed is always dangerous territory. Which one is faster? Everybody will claim theirs is faster. I don't know what the answer to that one is. I'm sure there are benchmarks somewhere. 

 

Wish you could speed up your release cadence and skip the rollbacks and hot fixes? What if you could move faster, limit the blast radius of unforeseen problems, and free up individual teams to deploy as fast as they can develop? Split's feature delivery platform gives you progressive delivery superpowers, like decoupling deploy from release, gradual rollouts with automatic telemetry to detect issues before they show up in operations graphs, and the ability to prove whether your features are hitting the mark using real user data, not the highest paid person's opinion. To learn more and sign up for a free trial, go to split.io. 

 

DAN_SHAPPIR: I'm kind of curious in that regard, because of the way that JavaScript engines work, JavaScript code actually often becomes faster the longer-lived it is. If you look at modern JavaScript engines like V8, they identify hot code, and only then, when they identify it and gather enough type information about it, can they actually apply very sophisticated optimizations to it that approach native-level performance, because before that, JavaScript is essentially primarily interpreted and it's fairly slow. So it's an interesting question how the various serverless environments manage that sort of thing, because if the code itself is short-lived, then it might never get to the point where it's actually optimized. Now, if you're just collecting data from backend services and merging them together and then passing them on, then maybe you don't care. But it seems that if you need to do any sort of more sophisticated computation, then whether or not you should be doing it in JavaScript really depends on how the engine works within that serverless function. 

BURKE_HOLLAND: Yeah, that's a fair point. I actually think, though, that the problem in serverless that we're trying to solve at the moment is not the speed of the language, but rather the speed at which the project can be loaded into memory. The nature by which serverless works is that your project is not hot all the time. That is how serverless works. That's how you don't get charged when no one's using it, because it's not running. It's literally tombstoned to disk. And then when you request it after some period of inactivity, it gets loaded off of disk and into memory and then it stays there. In Azure Functions, I think five minutes, I believe, is the time to live. So it stays hot. And then if nothing else calls it, right, like if you don't hit it from HTTP, if no timer trigger fires, if no other service calls it, if no queue bumps, then it goes back to disk. Now there is a latency in the time that it's coming from disk into memory, and that's called cold start. And every serverless platform has this. Anybody who's done serverless development knows what this is. And as you said, if you're building microservices that talk to each other, like they're doing payment processing or something, or some back office system, you don't really care about this at all. And when serverless originally came out, this was sort of the idea: wow, cold start doesn't matter because these are microservices and people won't use them in a way that's going to require them to be immediately responsive. But what we found, especially as web developers have really come to embrace serverless as a way to build APIs, is that the cold start does matter. Because if your API isn't accessed for five minutes and then someone comes and hits it, it could be three, four seconds on that first hit before your project is loaded up and a result gets returned. 
And so one of the things that we're working on in Azure Functions is relentlessly lowering that cold start time. There's a lot of different things that we're doing to make that work, but that's just ongoing work to try to make it so that the time for that first person who hits that API after it's been tombstoned is, you know, negligible compared to the person who hits it while it's hot in memory. And that is a hard problem to solve. 
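
The idle-unload behavior Burke describes can be sketched as a toy model; the five-minute time to live is his on-air recollection, not a documented guarantee:

```javascript
// After roughly five minutes with no trigger (HTTP hit, timer, queue item,
// etc.), the function app is "tombstoned" to disk; the next request pays
// the cold start penalty of loading it back into memory.
const IDLE_TTL_MS = 5 * 60 * 1000; // ~5 minutes, per Burke's recollection

function isColdStart(lastTriggerMs, nowMs) {
  return nowMs - lastTriggerMs > IDLE_TTL_MS;
}

console.log(isColdStart(0, 2 * 60 * 1000)); // false: still warm in memory
console.log(isColdStart(0, 6 * 60 * 1000)); // true: unloaded, slow first hit
```

This is why traffic that arrives in steady streams rarely notices cold starts, while sporadic API traffic hits them constantly.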

DAN_SHAPPIR: I'm guessing there is also a sort of thing of, if you see that more requests are starting to come in, you start to create instances that are kind of waiting there, you know, in case additional requests come in, and then you keep them alive for a while, or keep like a bank of functions that are ready to serve, so that you don't have to wait on that cold start every time a new request comes in. 

BURKE_HOLLAND: Yeah, that's exactly right. So some of the things that you can do are sort of anticipatory processing, where you're watching events and you're responding to that appropriately on the backend. So if you see the queue getting full, you know, stand up the resources now to take care of that. And those are some of the things we're doing. Another thing that we're doing is we have something called premium functions, which costs more, but what it does is it basically keeps your function hot all the time. And so we just keep that thing running for you. So it's a little bit pricier, but still not as wasteful as having an entire VM or an entire App Service instance with dedicated amounts of processing. But that's sort of the interim answer for Azure Functions, at least: there are people out there that need functions without any sort of cold start, and they can get that currently with premium functions. I think in the future, as we push forward with the technology, I'd like to see all developers everywhere using serverless be able to get at those functions in real time without having to take any sort of hit whatsoever on the load into memory. 

DAN_SHAPPIR: But premium, and I guess I'm misunderstanding something, kind of changes the model as well, because I assume that with a premium function, the same function instance, with all its memory and whatnot, would service multiple requests, whereas with, let's call it a non-premium function, you service a single request and then kind of die, so that every request comes into a clean or blank slate. Or am I incorrect? 

BURKE_HOLLAND: That's a great question. I'm actually pulling this up now. I don't know, we'd have to get somebody from the engineering team here to answer that question. I think I'm not even going to venture out there. I don't know what the implementation details on that are or how that's actually handled on the backend. 

STEVE_EDWARDS: So I want to jump in with a couple of things here real quick. One, I don't want to be Debbie Downer here, but you've mentioned a couple of times, well, Microsoft makes TypeScript and VS Code, and so of course they all work together. That hasn't always been the case with Microsoft. I have memories in years past when I was working in Microsoft environments where one of the major frustrations with Microsoft was that apps made by the same company didn't work together really well, or platforms and stuff. And I think they've gotten better at that over the years, but that's not an assumption that I've always been able to make with Microsoft: Microsoft makes A, Microsoft makes B, so therefore A works with B really well. So kudos that they've done a lot better at making that happen, but it hasn't always been the case. 

BURKE_HOLLAND: Steve, how dare you say that? 

STEVE_EDWARDS: No, I'm sorry. I didn't mean to rock the boat. Actually I did mean to rock the boat.

BURKE_HOLLAND: This is a very good point. And what happens is, you know, Microsoft is a huge company and it has so many different products. And a lot of times the people working on those products are focused on those products. And so what we do in Azure, the way that we approach this from a JavaScript point of view, is that we look at it as a developer experience. So if I'm a JavaScript developer and I come to Azure, what is it exactly that I'm trying to do? Well, the answer to that question may be, I need to host a website. Oh, okay. Well, what are the things that you need to do that? Well, you might need static storage, you might need Azure Functions for an API, you might need a CDN, you might need a database, you might need authentication. And these are five different services with five different teams inside of Azure. But those things need to all work well together across the board. And it really starts with VS Code and goes all the way through. So what we're trying to do is look at it holistically, so that when you do come in and you do use VS Code, things do just work, that we provide extensions for Azure so that things just work. We try to take a lot of the mental burden off of the developer and make sure that they don't have to do any plumbing that they would otherwise not want to do. If you have to come into VS Code and set up a TypeScript build, eh. You could do that, but we could also do that for you. And then that's less work that you have to do. So the work is ongoing there, but we're definitely looking at it from an end-to-end perspective. You as the JavaScript developer, not what product are you working with, but what is your overall experience like across all of the products and services? 

DAN_SHAPPIR: And how do I debug these serverless functions? 

STEVE_EDWARDS: Oh, that's, oh, you just stole my thunder, Dan. I was talking about that with my mute button on. 

DAN_SHAPPIR: That's excellent.

BURKE_HOLLAND: Who do we give credit to for that one? Is that Steve? 

STEVE_EDWARDS: Yeah. 

BURKE_HOLLAND: Or Dan? 

STEVE_EDWARDS: So I'm looking at your blog post about how serverless doesn't have to be an infuriating black box. 

BURKE_HOLLAND: Oh yeah. 

STEVE_EDWARDS: And with a great picture of you eating chicken, by the way. 

BURKE_HOLLAND: Oh man, that's my favorite place for chicken wings, if you're a big wings fan. 

STEVE_EDWARDS: Yeah, I'm real big on debugging and being able to debug just because I'd like to be able to see inside my code and what's happening. So I was wondering if you could talk about some of the tools that you mentioned here in VS code that allow you to debug locally your serverless functions. 

BURKE_HOLLAND: Right, okay, so this goes back to your question, Steve, about things not having always worked well together and, you know, sort of not being willing to make that assumption. And here's a great example of a spot where we're trying to make this just work for people. In Azure Functions, you have a core runtime. You use it from the CLI. So, you know, it's like func init or func new, func create, whatever, to create a new project, and then to run it, it's func start, and then that runs it and it's actually running locally. The problem is, though, that when you're building something, and especially for a Node project, you wanna be able to debug that thing. And I don't mean debug with console.log, I mean actual breakpoints. And so what we've done is we have an extension for VS Code called Azure Functions. And what it does is it not only scaffolds projects for you but it also sets up all of the necessary configurations in VS Code so that you can just click the play button up top in your functions project. It will launch everything from the command line, from the integrated terminal in VS Code. It will launch the debugger and it does everything for you. So all you have to do is set a breakpoint and run your functions. That's it. That's all you should have to worry about as a developer. 

STEVE_EDWARDS: So in other words, you could say that the extension is a black box almost?

BURKE_HOLLAND: To a degree, but all it's doing, if you actually look inside of what's created, is this: in your project, when you create the project, the extension creates a file in a .vscode folder called launch.json. In that file is your launch configuration that tells VS Code how to launch this thing and then how to attach the debugger. Now, it's not complex. This is stuff that you could figure out as a developer, but it might take you 45 minutes or so. And instead we can just do it for you, and that way you don't have to worry about it. Right. 
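
For reference, the generated launch configuration looks roughly like this; this is a sketch from memory of what the extension scaffolds, and exact fields vary by extension and runtime version:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Node Functions",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "preLaunchTask": "func: host start"
    }
  ]
}
```

The preLaunchTask runs the local Functions host, and the debugger then attaches to Node's inspector port, which is the wiring Burke estimates would otherwise cost you 45 minutes.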

STEVE_EDWARDS: So that was sort of a joke on the title of your blog post. 

BURKE_HOLLAND: Oh, gotcha. No, I didn't catch that. It went right over my head. I mean, one of the things that we are working on, and one of the places where I think Azure Functions really does shine, is the local debugging experience. It's just phenomenal. It really works quite well. We haven't nailed every area, but I think that's one place where we've just absolutely nailed it. 

CHARLES MAX_WOOD: So I've been looking at functions, and one of the things that I run into is that I've only really set up serverless with the HTTP API access. So then I have to do the sort of authentication and checking within my function, right? So I might have to call out to Auth0 or call out to, you know, whatever other service, just to make sure that whoever's accessing the function is authorized. Or maybe I'll just set up, you know, some little key in there, right? So that it verifies that the request has the secret from my app. One thing that I'm wondering is, is there another way to do it that's not an HTTP endpoint, that kind of has this handshake or whatever built in? So that if I want somebody to access it publicly, I can just set up a Lambda function that has that public HTTP endpoint. But if not, then yeah, I'd like to be able to just kind of go, hey, here you go, and have it run through the background somehow.

 

BURKE_HOLLAND: Yeah. So when we talk about authorization and authentication, there are sort of two different scenarios here. The first one is what we call function-level authorization, which is that you have permission to call a specific function. And this is done with a key. But this is not considered secure. So you would not just deploy this thing and then put the key in your web app and be like, I'm done, because then everybody has the key and anybody can call it. 

CHARLES MAX_WOOD: Right. 

BURKE_HOLLAND: What this is really good for is when it's not publicly exposed and you want another function to be able to call this function internally. You give that function the key, so your key is not publicly exposed, but your function is still locked down behind the key. So that's sort of the use case for that. In your case, you're talking about an HTTP API. We have an authentication service in Azure Functions called Easy Auth, and it's a visual setup. So when you go in, you click on authentication, and then you check it on with a checkbox, and then it has a couple different providers. It's got like Microsoft, Facebook, Twitter, Active Directory. For people who don't know, Active Directory is Azure's proprietary auth store. And then what you do is you click on one of those, say Twitter. You click on Twitter and it would say, okay, what is your client ID and secret for Twitter? So you'd have to go set up your app in Twitter, register your app, get your client ID and secret, come back and put it into Azure. Now, once you do that, it lights up these endpoints in your function app. So if your function app is, I don't know, javascriptjabber.azurewebsites.net, which is the default URL you get, it lights up this endpoint, which is javascriptjabber.azurewebsites.net/.auth/twitter. And you can call that endpoint, and it will redirect the person over to Twitter to log in, and then redirect them back to your application, and now the session key is attached. And now you can read that session key from your function to get the user's information. If they're not authorized, though, the function can just return a 401, I think. Yeah, it just returns a 401, which is that you don't have permission. You're not authenticated. You cannot access this function. That's the way that you do it inside of Azure Functions without tying into third-party services, with what's called Easy Auth. 
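
A hedged sketch of that 401 flow in JavaScript: when Easy Auth is enabled, the platform injects an `x-ms-client-principal` header (base64-encoded JSON) into authenticated requests, and a function can reject anyone arriving without it. The handler shape below is simplified for illustration rather than taken from any SDK:

```javascript
// Decode the principal Easy Auth attaches, or reject with a 401.
function handleRequest(req) {
  const header = req.headers['x-ms-client-principal'];
  if (!header) {
    return { status: 401, body: 'Not authenticated' };
  }
  const user = JSON.parse(Buffer.from(header, 'base64').toString('utf8'));
  return { status: 200, body: `Hello, ${user.userDetails}` };
}

// Simulate what the platform would inject after a successful Twitter login:
const principal = Buffer.from(
  JSON.stringify({ identityProvider: 'twitter', userDetails: 'jabberfan' })
).toString('base64');

console.log(handleRequest({ headers: {} }).status); // 401
console.log(
  handleRequest({ headers: { 'x-ms-client-principal': principal } }).status
); // 200
```

The point Burke makes earlier holds here too: in C# the decoded user context arrives pre-wired, while in JavaScript you do this small amount of wiring yourself.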

CHARLES MAX_WOOD: Right. Are there ways of setting up endpoints that are not HTTP? 

BURKE_HOLLAND: How do you mean? 

CHARLES MAX_WOOD: So are there other types of endpoints that aren't internal, but, you know, where I could make some kind of like remote procedure call or something? I don't know. 

BURKE_HOLLAND: Like RPC endpoints. 

CHARLES MAX_WOOD: I don't know what, what it would be, but yeah, something like that, where it's just, you know, I have some programmatic interface that I can go in. You know, like hit it on a different port, it follows a different protocol and it, you know. 

BURKE_HOLLAND: Not that I'm aware of. HTTP is the one. We call these triggers, which is like, what gets your function executed? So the most basic form is HTTP. But then there are also queue triggers, where you insert something into a queue, and then when the function sees the item in the queue, it wakes up, does something to that item, and pops it off the queue. Those are queue-based triggers. We also have database triggers. So if you insert a row into a table, it'll go that way. But I'm not familiar with another way to call externally outside of HTTP endpoints. 
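
The queue trigger Burke describes looks roughly like this in the classic JavaScript Azure Functions model; the handler signature is a sketch, and the actual queue binding lives in a function.json file that's omitted here:

```javascript
// The runtime pops an item off the queue and invokes the function with it --
// no HTTP request involved anywhere.
function processQueueItem(context, queueItem) {
  context.log(`Processing queue item: ${JSON.stringify(queueItem)}`);
  // Do the real work here; the return value is just for illustration.
  return { processed: true, item: queueItem };
}

// In the classic model you'd export this as the function entry point:
//   module.exports = processQueueItem;

// Simulated invocation with a stub context:
const result = processQueueItem({ log: () => {} }, { orderId: 42 });
console.log(result.processed); // true
```

Timer and database triggers follow the same shape: the runtime decides when to call you and hands you the triggering data as an argument.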

CHARLES MAX_WOOD: Right. But those other kinds make sense, right? So if you're using Cosmos DB, then yeah, you know, you insert something into a table and it says, hey, wake up and do work. Or it's every 10 minutes or whatever, right? Wake up and do work. Yeah. Anyway. 

BURKE_HOLLAND: Yeah, exactly. We try to cover the most. 

CHARLES MAX_WOOD: But if it's external, it's going to be HTTP. 

BURKE_HOLLAND: Right. Yeah, that's correct. 

DAN_SHAPPIR: I love it how HTTP has become the universal API infrastructure. You know, you remember it was originally created to serve documents. 

BURKE_HOLLAND: Right. Now it's sort of the API for the world. 

CHARLES MAX_WOOD: Yeah. One thing that I worry about a little bit, though, is that, like, if somebody finds out that I have this endpoint, so let's say I set up an endpoint that creates records in a database. And it's kind of floating out there, right? Because it's HTTP, it'll take a request, and then I authorize it after it comes in, right before I do any work. So somebody could sit there and DDoS it and rack up my charges, or they could sit there and try and send a whole bunch of different keys and brute force or guess what it is. Is there any mitigation for that, where you can actually, like, blacklist a bad actor or something like that? 

BURKE_HOLLAND: Yeah, I don't know. 

CHARLES MAX_WOOD: White list a good actor.

BURKE_HOLLAND: Yeah, I don't know if we do any DDoS protection. I don't think there's anything built into Azure Functions, although I could be wrong about that. I don't want to misspeak without engineering here to say, no, no, we actually do do something. But I think that, you know, any site, any API, anything that you put up on the internet has to be behind some sort of DDoS protection like Cloudflare or something, some layer of protection that does specifically that, because there's just nothing to stop people from calling. Even a 401, right? Like even requesting and having the server have to look and see and then say no. If you do that enough, you can bring something to its knees. 

AJ_O’NEAL: Well, in this case, because it's, um, you know, the, the whole scale aspect, you're not worried about a DDoS. You're worried about a $20,000 bill. 

CHARLES MAX_WOOD: Yeah. 

BURKE_HOLLAND: Yes. That is correct. So this is actually a really, really good point. There was an article written recently by someone who had moved, I don't know if you saw this, but they had built a very popular card game online, and they moved their backend over to, I think it was Lambda, although I could be wrong. And they were just getting hundreds of thousands of requests per second, and they got this massive bill because of exactly this. So there's this sort of point of diminishing returns with serverless. And I wrote an article about this, which is how much does serverless actually cost, where I attempted to take Facebook and look at their traffic and extrapolate that onto serverless to see what that would look like. And at some point, it doesn't make sense anymore for you to pay per execution. You actually do want the boxes and the hardware. You want the control of it. Facebook would not want to make everything serverless because their cost would be insane. They would want to control that stuff themselves. 

DAN_SHAPPIR: Well, it's kind of like the fact that theoretically you could, you know, it would be ridiculous to do so, but theoretically you could build the Microsoft cloud on top of the Amazon cloud. There's just a certain point, when you're scaling up and you're going beyond a certain point, that it starts to make sense to do stuff yourself. Like for example, in the case of Wix, where I work, you know, there are a lot of products that do, what's the word, telemetry for you, that gather information about errors or problems in the endpoints or performance data and so forth. And it's certainly recommended for a lot of companies to use these sorts of things, because you don't want to be handling that stuff yourself and you can get it as a service. But once you scale beyond a certain point and you start paying per domain, per data, per storage, then you get to a certain point where it actually makes more sense for you to do it yourself and then actually have people managing it for you. So I guess it's the same thing here. It's the fact that when you scale beyond a certain point, then it starts making sense to actually do the stuff yourself. I guess a different analogy might be that if you have a factory and you're using electricity to run all your machines, then once you scale beyond a certain point, maybe it starts making sense for you to have your own power generator instead of taking electricity from the electricity company. 

BURKE_HOLLAND: Yeah. I totally agree with this. And when I talk to people, one of the things that I forget as a Microsoft employee is that I don't pay for Azure. I just use things and someone else pays for it. And in the real world, that's not how it works. You have to pay for your cloud usage. And so people tend to be very nervous. One of the things people are nervous about is, what sort of a bill am I gonna get at the end if I use this thing? That's a very salient question to answer. And I think there are a couple different things that you could do to mitigate this. There are some things that we do internally. I'm gonna give people a couple of tips here, and I know this is a little bit off the question, but it's about controlling cloud costs in general. One of them is that there is tooling in Azure that allows you to send reports to people based on their subscription that tells them every month what their spend was and reminds them to please delete anything that they're not using. So that's something we use, because in Microsoft, we cross-charge. So if I use something in Azure, I actually am incurring a bill that my team has to pay. And we try to keep those costs low, just like everybody else does. The second thing is we use this trick internally, and developers can use this too: whenever I'm creating something that's gonna be short-lived, like a demo or something, I'll prefix the resource group with delete-me-dash. So Azure has this concept of resource groups, where you put all these related resources inside of a group, and then you can delete the group and it blows everything away. And then what I do is I have this function that runs at midnight and it's called the Azure Reaper. It's actually an open source project from one of the PMs on the functions team. And it just goes through my subscription and it looks for anything that says delete-me-dash and it deletes it. 
And so what that stops me from doing is orphaning resources in the cloud that I meant for development purposes or to just quickly demo something, so I don't end up paying $400 for something that was meant to be a POC that I've long since finished with. And then the last one is that Azure has the ability to put spending caps on subscriptions, which is definitely something people should do if they're using Azure. This is the safety net, which says, I'm not gonna spend any more than this amount of money, so you know for sure your cost won't exceed that. The catch, though, is that when you hit that cap, you can't serve any more requests, because you've told Azure, hey, don't charge me any more than this, and so you don't have any resources with which to do that. Azure will notify you, but you'll have to go in and make those changes yourself. So I think responsible cloud usage is the answer here to not getting a $20,000 bill for your serverless project, which was supposed to save you money. 
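For reference, the core of the "Reaper" pattern Burke describes can be sketched in a few lines of JavaScript. This is an illustrative sketch, not the actual Azure Reaper project: in a real nightly timer-triggered function you would list and delete resource groups through the Azure SDK, which is omitted here, so only the name-matching logic is shown.

```javascript
// Sketch of the Reaper idea: select resource groups whose names start
// with a throwaway prefix so a scheduled job can delete them nightly.
// The actual listing/deletion against Azure is intentionally left out.
const REAPER_PREFIX = "delete-me-";

function groupsToReap(resourceGroupNames) {
  // Keep only the groups explicitly marked as disposable.
  return resourceGroupNames.filter((name) => name.startsWith(REAPER_PREFIX));
}

// Example run: only the prefixed demo groups are flagged for deletion.
const allGroups = ["delete-me-demo-app", "prod-api", "delete-me-poc", "shared-storage"];
console.log(groupsToReap(allGroups)); // [ 'delete-me-demo-app', 'delete-me-poc' ]

module.exports = { groupsToReap };
```

The point of the naming convention is that deletion is opt-in: nothing gets reaped unless you deliberately marked it as disposable when you created it.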

AJ_O’NEAL: This is an argument that like, first of all, I'm the resident curmudgeon. 

BURKE_HOLLAND: Good to know. 

AJ_O’NEAL: I don't know why the other panelists still laugh at that, because I say it like every episode. But okay, so from my perspective, pretty much for the last, I don't know, 15 years, it's cost $5 a month to have a server. And what you get with that $5 a month has increased over the past 15 years, but it's always been $5 a month, no matter which provider you end up going with. I mean, maybe 15 years ago it was $10 a month, and then it came down to five with the specs staying the same, and then the specs have gone up and it stayed five. I use DigitalOcean. Other people use OVH, or Vultr, or Scaleway. Those are kind of the four popular VPS providers that have really risen to prominence. And there are some older ones like Linode and Rackspace, but they didn't get as competitive in that pricing tier. So for a fixed $5 a month, I have pretty much unlimited CPU utilization. And in truth, if I'm not using it, yeah, I'm still paying the $5 a month, but it's five bucks a month. The work I'd have to do to change that setup is not worth the $5 over the course of the entire year, right? And when that CPU isn't being used by me, it's being used by someone else. If I got to a point where I was using more CPU than I'm allotted, then I'd have to upgrade and go into a different service tier, and that's like a click of a button, but then I'd have to pay $10 a month. So it's not scalable, but I feel like 90% of businesses don't actually need scale. 
And if you're going to spend a couple hours learning something that is less predictable, has more moving pieces, and has just more things you have to learn, versus spending a couple hours on how to set up a Node server with systemd, I can't think of a case where I'd tell someone, yeah, it's worth it to save $2.50 a month to go through this abstraction layer as opposed to just getting a VPS and running an Express server on it. Because it's predictable, I don't have to worry. I use AWS for a couple things, and I have no idea what the bill is going to be, and no idea why it's going to be that. Typically it's the minimum charge, because I'm not actually using it for very much, just a couple of test sites. But it's just odd to me that one month the bill is $9.70 and the next month the bill is $10.20, and it's like, I know I haven't hit any usage tier. For me, it's the predictability of knowing it's a flat fixed cost, and knowing that I'm not cool enough or popular enough to have to worry about, I don't know, the Slashdot effect on one of my blog articles. I just love predictable pricing and having some, you know, bare-metal debugging control. To me, serverless almost seems like a moot point, because if you don't need it, then it's inexpensive, but the amount that it's saving you is almost nothing. And if you do need it, you quickly get into this price tier where you have to start controlling your costs, and then you're thinking about going back anyway. So the only argument I really understand for serverless is that people are afraid of learning how to run Linux commands. I don't really get it other than that. 

BURKE_HOLLAND: Well, all right, so I'm gonna have to jump off here. We have a snow day today and they've got the schools out and my kids are texting me, you need to come get me. So I'll do this one and then I've got to go. But just to go back, AJ, I think that, I mean, you're making a valid point, but I also think that there's a giant scale on which computing exists. And you have, you know, the folks that need a $5 server over here. And then you have this huge swath in between. You know, when you get into enterprises, things get way more complicated. They're not quite at, you know, they're not at Facebook scale. They're somewhere in the middle. They're trying to control costs. There's a lot of, I mean, now we're getting into reasons to just sort of just justify the cloud in general, but I do think that- 

AJ_O’NEAL: VPS is the cloud, right? That's what the term "cloud" was coined for: VPS.

BURKE_HOLLAND: Well, correct. But I mean, you can, if you're just talking about like just a naive virtual machine, there's some argument to be made. Well, why don't I just, you know, you could argue yourself right back into putting that server inside your own data center. If you argue it hard enough or back under your bed. 

AJ_O’NEAL: But then, but then you have a lot of upfront costs, right? Like you've got, you know, hundreds of thousands of dollars of upfront costs. And if you're at that scale, then yeah, why wouldn't you? But you know, that's, that's not like a, that's no small feat. That's not like I'm getting started with a business or I have a small to medium-sized business. That's like, I have an enterprise that's got lots of resource utilization. 

BURKE_HOLLAND: Well, yes. And so let me just finish off and say, I do agree with that. I just think that for the vast majority of companies and enterprises with actual operations on a day-to-day basis, it's not a fear of using Linux commands. It's trying to find lower-cost, faster, more productive ways to do the same thing. And the thing that is so great about serverless is that for nine out of ten people, it is a lower-cost way to do the exact same thing. It just is. That is almost always the case. And the second thing is that because it's event-driven, it allows you to tie into things that you're already doing in the cloud. You likely have a database in the cloud. You likely have Office 365 in the cloud. You possibly have a Redis cache. I mean, there are all sorts of things you could have, and serverless allows you to tie directly into all of those things. So I think that viewing it specifically from an HTTP-endpoint point of view walks you into this spot where you're like, why don't I just get a server? But if you look at the overall use case of serverless, it's just way bigger than that. And it almost always justifies itself: from an ease-of-use point of view, from a cost point of view even if you don't need to scale, and definitely from a productivity and only-pay-for-what-you-use point of view. And I gotta go, folks. I gotta pick up these kids from school. That's how you end an argument. You just say, I gotta go. 

AJ_O’NEAL: I wish we had more time, because I'd love to hear more about the types of businesses that you see getting the most benefit from this. Cause I believe you, I believe it's there. It's just, I see so many people getting started, and it seems like they get so complicated so quick, and I just want to say, hey, hey, hey, just learn how to use ls first, you know? Worry about becoming a DevOps expert later. Just learn how to use ls and systemd restart. 

BURKE_HOLLAND: Sure. And there's a, there's a valid discussion that we got to have there. I'm sorry to jump off on y'all, but I do have to go. 

CHARLES MAX_WOOD: Nope. It's all good. Real quick. If people want to connect with you online, where do they go? 

BURKE_HOLLAND: Well, you can follow me on Twitter at Burke Holland, although I don't do a whole lot there. I guess I'll just put out my email address and people can email me directly. I do have a blog, which I keep up on my GitHub at burkeholland.github.io, and that's about it. 

CHARLES MAX_WOOD: All right. Good deal. Well, thanks for coming Burke. 

BURKE_HOLLAND: Yeah. Thanks for having me. Appreciate it. Thanks everybody.

 

Back when functional programming was making its resurgence, I found it really interesting that a lot of people were moving over there, and it almost felt like it was riding on hype, and I didn't really understand the power of functional programming until I learned Elixir. Elixir is a functional programming language that's built on the Erlang virtual machine, and it really does some interesting things and makes you build apps in a different way. But what's really fascinating about it is the speed of the applications, the ability to distribute work easily, and how it handles functional programming so that you don't have to worry about side effects and a lot of the other concerns that come with it. Plus, pattern matching in Elixir is a killer feature. If you're looking for a new language to learn that is going to make a difference for you and give you the opportunity to challenge some of your thinking, Elixir is a great way to go. And we have a podcast on Elixir called Elixir Mix, and you can find that at elixirmix.com. 

 

CHARLES MAX_WOOD: All right. Well, let's go ahead and do picks really quickly since we lost Burke. Dan, do you want to start us off with picks? 

DAN_SHAPPIR: Okay. I've been doing a lot of picks about books, but this time, due to shortness of time, I actually neglected to select the book I wanted to pick. So instead I'll pick something else: taking a vacation. Some of your listeners may have noticed I've been absent from a couple of episodes lately. The reason was that first I went off to a conference, the Chrome Dev Summit, and afterwards we took an almost three-week vacation. We were in the US, we were in Guatemala, and we were in Mexico. And from talking with various people, it seems to me that a lot of people, for some reason or another, neglect to actually take vacations. And I'm not talking about time off during the holidays with your family. I'm talking about intentionally getting your entire family, or just your spouse, or whatever, and taking some time off from work to just go on vacation. To me, it seems like an incredibly important thing. I think it helps you clear your mind, re-energize, and avoid burnout. And when I hear about people that don't do it, that are concerned they might be seen in a negative way at their job, or can't afford it, or for whatever other reason just don't take a vacation, I think that in the long term they're making a mistake. At the end of the day, you only live once, and you should work to live and not live to work. So taking a vacation when you need it is incredibly important from my perspective. It's not just enjoyable, it's literally good for you. And yeah, that's my pick for today. 

CHARLES MAX_WOOD: Awesome. Steve, do you have some picks for us?

STEVE_EDWARDS: Yeah, I got two picks. One of them came up during the podcast while I was doing some searching. For the first one, I'm going to go really old school and pick an author and one of his books. If anybody's ever read in the Western genre, probably one of the more famous authors is Louis L'Amour. I started reading his stuff when I was in third grade. My dad was a seminary professor at the time, and one of his fellow professors was a big fan, and she gave me a couple books to read, and I was hooked. I still have like two bookshelves' worth of his books that I've purchased over the years, where a family member would give me, you know, six books at a time. Just real enjoyable, easy-to-read books. There were times when I'd sit and read a book over a lunch break, because I could burn through one in about an hour. Probably my favorite book of his is called The Lonesome Gods. It's about when California was still a republic and Los Angeles was a really small, tiny town. There's a whole story about a guy who comes out there on a wagon train, but I've always just loved L'Amour. He had some great books. And if you ever read about his life, he did a lot of things. In the late 1800s, early 1900s, he traveled around. He'd be a miner, he'd be, you know, just working all kinds of different jobs before he finally started writing. The second pick is from when I was doing some searching for the Azure Reaper that Burke was talking about, which is a GitHub repo for cleaning up your Azure resources. If you search for Azure Reaper, there's also a lightsaber. There's this whole company called Ultrasabers.com, and they have a particular product called the Azure Reaper, and there are videos where the guy's talking about the craftsmanship, and you turn it on and light comes out, and it's really pretty cool. 
So I'll put the link in the chat so we can put it in the show notes, but it looks quite entertaining if nothing else. 

CHARLES MAX_WOOD: Nice. Yeah, my dad was a big Louis L'Amour fan. He probably had a couple boxes of his books at his dental office. 

STEVE_EDWARDS: Oh yeah. I still got stacks of them right behind me here in my office. 

CHARLES MAX_WOOD: Yeah. When we cleaned out his office, it was, yeah, it was wild. I don't know what happened to them, and I've never read them. So anyway, AJ, what are your picks? 

AJ_O’NEAL: Okay. So I've got a couple of good ones today, cause you know, the other ones, they're not good. I'm going to pick Hello World by Hannah Fry. I've been listening to the audiobook, which is narrated by Hannah Fry herself. She does some of those explainer-type video series, some of which are on Netflix. I don't know if she's a computer scientist or a data scientist; she talks like a data scientist when I hear her on TV shows and in her book. Hello World is another book about the risks and benefits of algorithms taking over the world, and it's interesting. If you think about it, when we talk about the things we're afraid of with algorithms, for example propagating racial injustice in the criminal system, some of the piloted algorithms that have gotten criticism for perpetuating racial injustice actually perpetuate it less than the current judges do. Because it's an algorithm, it behaves consistently if a certain set of characteristics matches, and it's not matching by race but by things like socioeconomic status and past history, which happen to end up appearing to create racial categories, and then creating more significant false results within racial categories. But the point is that for one of these things we're so afraid of, if you look at the real-world data, if we picked the algorithm, we would actually start correcting the problem immediately, even though bias would still exist in the system, because the data that's fed in has its own characteristics that will perpetuate certain biases. 
But it'd more likely auto-correct itself over time as the data gets fed in, making more correct decisions based on exact criteria, rather than the way judges currently do it, where every case is case by case and they don't have a matrix saying, you did this, therefore that's 15 years; you did this, therefore that's 20 years. They kind of just come up with it based on how they feel. And the data shows that if they've just had lunch, you're likely to get a lesser sentence, and if it's just before lunch, you're likely to get a greater sentence. So that's one thing she brings up in the book, but the book is about looking at this kind of stuff broadly across society. It's both scary and comforting at the same time, so it's a book I put on the check-it-out list. Plus she has a very pleasant British accent, so she's nice to listen to. Every once in a while you have to hit the back button to understand some British slang or a weird way she says a word, but it's pleasant to listen to, and the content is enjoyable. So I'd recommend Hello World by Hannah Fry. Also, I'm gonna pick IKEA, because why wouldn't you pick IKEA? And specifically, I'm gonna pick the Kallax. We moved into a new place and we have more space. It's one of those paradoxes where when you have more space, now you need more things to fill the space. But also, because before we had less space, everything went into a closet, and now, since we're trying to not put everything into a storage closet, we actually have things out and need space for them as well. So anyway, we've got a bunch of Kallaxes, and it's interesting. They have so many accessories, and it's become such a standard. It's basically a 13-inch by 13-inch square repeated in different patterns. And I mean, you can get them at Walmart now. You can get them at Target. It's no longer just an IKEA thing. 
It's now become a standard for storage, and you can get accessories at Walmart and Target, as well as at IKEA, that will fit into these boxes: various styles of cubbyholes and drawers and wine racks, et cetera. Anyway, if you have not been to an IKEA, take a vacation, since we were just talking about vacations, to a place that has an IKEA and go. It's like Disney World for adults. Just imagining what your house could look like, even if it was a 10-by-10 square room. It's wonderful. 

STEVE_EDWARDS: Yeah, we have a big one here in the Portland area. We've had it for five or six years, and I have yet to go there. So I'm gonna have to take a vacation to IKEA locally.

AJ_O’NEAL: You have to go, and make sure to get lunch there. Go with your wife. Be ready for the credit card to get maxed out. And make sure you have the Swedish meatballs or whatever else they have, because they've got a little restaurant in there, and then they've got a bistro at the end, so you've got two different places to get your food. You can go in, get lunch, go through the maze, and then by the time you get out, it's time for dinner and you can have dinner. So it's great. 

CHARLES MAX_WOOD: Awesome. I'm going to jump in with a few picks here. The first one is that the book The Guide to Finding Your Developer Dream Job is now available in paperback on Amazon, so you can get a physical copy of the book, and I'm pretty excited about that. Another pick that I have is Buy Me a Coffee. It's kind of like Patreon. If you go to devchat.tv right now, you can actually donate, and I'm just asking people to give five bucks a month. Honestly, that would help us keep things going. We had a little bit of a slowdown with sponsorships. I think we're going to be okay coming out of it, but it'd be nice to have a little bit of a buffer, and this will help us build that up. So anyway, if you want to give once, you can; if you want to give every month, you can. That makes everything a lot easier. And besides that, we are still looking for hosts on a few of the shows: Ruby Rogues, I think we have one more spot on Views on Vue, I think React Roundup has a spot open, React Native Radio has a spot open, and we've got a couple of Adventures in .NET spots. So if you're interested in any of the shows that we currently have on devchat.tv and you think you might want to be a host, let me know. I'd be excited to have a conversation and see what we can figure out there. All right, well, I think that's all we've got, so let's go ahead and wrap this up. Thanks to you all for coming, and thanks to Burke, even though he had to run. We will have another episode next week, and in the meantime, Max out. 

 

Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y dot com to learn more.

 

JSJ 417: Serverless with Microsoft Azure with Burke Holland