JSJ 401: Hasura with Tanmai Gopal


Special Guests: Tanmai Gopal

Show Notes

Tanmai is one of the founders at Hasura. Hasura gives you instant GraphQL APIs on top of a Postgres database. The eventual idea is to make data access secure and easy. Tanmai explains the challenges of doing this in the cloud. He talks about some of the difficulties with the tooling around GraphQL and its bias toward working well with a monolith. Since GraphQL is basically a shared type system that describes your API, all your types need to be in the same code base. This is at odds with folks who want to do microservices and serverless functions: their API is split across multiple services with different types, and forcing those types to be coordinated defeats the purpose of using microservices. Also, storing state across requests doesn't work well with serverless and cloud native architectures. In short, learning to live without state is one of the general challenges of going serverless.
This is where Hasura comes into play, and Tanmai explains how it works. Hasura is metadata driven, and each instance of the server can leverage multiple cores and exhibit a high amount of concurrency. It's designed to be a little more CPU bound than memory bound, which means that configuring auto scaling on it is very easy and lets you exploit the elasticity of cloud native applications. Tanmai clarifies his usage of the term 'cloud native', by which he means microservices. He explains that when you have a metadata-based engine, this metadata has a language that allows you to bring in types from multiple upstream microservices and create a coherent GraphQL API on top of them. Hasura acts as a middleman between the microservices and the consumer, converting multiple type systems into a single coherent GraphQL API.
Next, Tanmai explains how Hasura handles data fetching and a high volume of requests. They also invented PostgreSQL RLS-like semantics inside Hasura itself. He explains the process of merging your microservices into a single GraphQL interface. Back on data fetching, Tanmai explains that when the product is an app, preventing an overabundance of queries becomes easier: during one of their staging processes they extract all of the queries that the app actually makes, and the production version only allows queries it has seen before. Hasura is focused on both the public interface and private use cases, though private is slightly better supported.
Tanmai talks about the customizations available with Hasura. Hasura supports two layers. One is an aliasing layer that lets you rename tables, columns, and fields as exposed by Postgres. The other is computed columns, which let you extend the type that you get from a data model and point it to something you derive.
The panelists discuss the common conception of why it is a bad idea to expose the data models to the frontend folks directly. They discuss the trend of 'dumbing down' available tooling to appeal to junior developers, at the cost of making the backend more complicated. They talk about some of the issues that come from this and the importance of tooling that addresses this concern.
Finally, Tanmai talks about the reasons to use Hasura over other products. There are two technologies that help with integrating arbitrary data sources. The first is the authorization grammar, their version of RLS that can extend to any system of types and relationships. The second is the data wrapper, the part of the compiler that compiles from the GraphQL metadata AST to the actual SQL AST. That is a generic interface, so anyone can plug in a Haskell module that implements the interface and provides a backend compiler for a native query language. This allows them to plug in other sources and stitch microservices together. The show concludes with Tanmai talking about their choice of Haskell to build Hasura.
Panelists
  • AJ O’Neal
  • Dan Shappir
  • Steve Edwards
  • Charles Max Wood
With special guest: Tanmai Gopal
Sponsors
Links
Follow DevChatTV on Facebook and Twitter
Picks
AJ O’Neal:
Dan Shappir:
Steve Edwards:
Charles Max Wood:
Tanmai Gopal: 

Transcript


Hey folks, I'm a super busy guy and you probably are too. You probably have a lot going on with kids going back to school, maybe some new projects at work. You've got open source stuff you're doing or a blog or a podcast or who knows what else, right? But you've got stuff going on and if you've got a lot of stuff going on, it's really hard to do the things that you need to do in order to stay healthy. And one of those things, at least for me, is eating healthy. So when I'm in the middle of a project or I just got off a call with a client or something like that, a lot of times I'm running downstairs, seeing what I can find that's easy to make in a minute or two, and then running back upstairs. And so sometimes that turns out to be popcorn or crackers or something little. Or if not that, then something that at least isn't all that healthy for me to eat. Uh, the other issue I have is that I've been eating keto for my diabetes and it really makes a major difference for me as far as my ability to feel good if I'm eating well versus eating stuff that I shouldn't eat. And so I was looking around to try and find something that would work out for me and I found these Factor meals. Now Factor is great because A, they're healthy. They actually had a keto line that I could get for my stuff and that made a major difference for me because all I had to do was pick it up, put it in the microwave for a couple of minutes and it was done. They're fresh and never frozen. They do send it to you in a cold pack. It's awesome. They also have a gourmet plus option that's cooked by chefs and it's got all the good stuff like broccolini, truffle butter, asparagus, so good. And, uh, you know, you can get lunch, you can get dinner. Uh, they have options that are high calorie, low calorie, um, protein plus meals with 30 grams or more of protein. Anyway, they've got all kinds of options. So you can round that out, you can get snacks like apple cinnamon pancakes or butter and cheddar egg bites, potato bacon and egg, breakfast skillet. You know, obviously if I'm eating keto, I don't do all of that stuff. They have smoothies, they have shakes, they have juices. Anyway, they've got all kinds of stuff and it is all healthy and like I said, it's never frozen. So anyway, I ate them, I loved them, tasted great. And like I said, you can get them cooked. It says two minutes on the package. I found that it took it about three minutes for mine to cook, but three minutes is fast and easy and then I can get back to writing code. So if you want to go check out Factor, go check it out at factormeals. Head to factormeals.com slash JSJabber50 and use the code JSJabber50 to get 50% off. That's code JSJabber50 at factormeals.com slash JSJabber50 to get 50% off.


 

CHARLES MAX_WOOD: Hey everybody and welcome to another episode of JavaScript Jabber. This week on our panel we have AJ O'Neal. 

AJ_O’NEAL: Yo, yo, yo. Coming at you live from sunny Provo as per usual. 

CHARLES MAX_WOOD: We also have two brand new panelists. Unless I missed your first episode, in which case I'm sorry. We have Dan Shappir. 

DAN_SHAPPIR: Hi, this is indeed my first episode as a panelist all the way from Tel Aviv. Very happy and excited to be here. 

CHARLES MAX_WOOD: Awesome. We also have Steve Edwards. 

STEVE_EDWARDS: Hello from Portland. This is actually my second, but you weren't here for the first one Chuck, so I'll forgive you.

CHARLES MAX_WOOD: I know I'm a slacker. Did you get a chance to introduce yourself Steve on that episode? 

STEVE_EDWARDS: No, I just sort of came in out of the blue on that one. So no, I haven't had a chance. I'm an IT pro, I guess you'd call me. Done a number of different roles for about 20 years, starting with like tech support, project management, business systems analyst type roles, been doing web development full time for about 10 years, a lot of time in the Drupal community, and doing full-time Vue now.

CHARLES MAX_WOOD: Awesome. And Dan, why don't you go ahead and introduce yourself as well? 

DAN_SHAPPIR: Okay, will do. I'm a software developer. I've been using JavaScript for over 20 years and I'm a big fan of both JavaScript and the open web in general. For the past five years, I've been working at Wix currently as the performance tech lead for that company in which I have the responsibility to ensure that 150 million websites hosted on our platform have good performance.

CHARLES MAX_WOOD: Nice. I'm Charles Max Wood from DevChat.tv. I'm just going to give you a quick shout-out: go check out maxcoders.io, which is the new website that I'm setting up to help people become better communicators, leaders, and I'm still working on the tagline for software developers. So anyway, go check it out, maxcoders.io. We have a special guest this week, and that's, by the way, I'm a pro at screwing up names, so you'll just have to forgive me. It's Tanmai Gopal. 

TANMAI_GOPAL: Yep, that's perfect. Hi, everyone. Glad to be here. 

 

One of the biggest pain points that I find as I talk to people about software is deployment. It's really interesting to have the conversations with people: I don't want to deal with Docker, I don't want to deal with Kubernetes, I don't want to deal with setting up servers, I don't, you know, all of these different things. And in a lot of ways DevOps has gotten a lot easier and in a lot of ways DevOps has also kind of embraced a certain amount of culture around applications, the way we build them, the way we deploy them. I've really felt for a long time that developers need to have the conversations with DevOps or adopt some form of DevOps so that they can take control of what they're doing and really understand when things go to production what's going on, so that they can help debug the issues and fix the issues and find the issues when they go wrong and help streamline things and make things better and slicker and easier so that they'll more generally go right. So we started a podcast called Adventures in DevOps. I pulled in one of the hosts from one of my favorite DevOps shows, Nell Shamrell-Harrington from the Food Fight Show, and we got things rolling there. And so this is more or less a continuation of the Food Fight Show, where we're talking about the things that go into DevOps. So if you're struggling with any of these operational type things, then definitely check out Adventures in DevOps. And you can find it at AdventuresInDevOpsPodcast.com.

 

CHARLES MAX_WOOD: Do you want to give us a brief introduction on your part too? 

TANMAI_GOPAL: Sure. I'm one of the founders at Hasura. We created the open source project a while back, then incorporated that as a startup just about two years ago now. I studied machine learning and computer vision, got into software engineering and web development in a big way. Over the last few years, I've been mucking around a lot with Docker, Kubernetes, Postgres, GraphQL. My weapon of choice is usually Haskell and Node.js. That's kind of where I am. 

CHARLES MAX_WOOD: Wow. You're the second person in two days who said that they use Haskell or have used Haskell to get work done. And we're serious about it. Most people, they talk about it and they're like, I've done Haskell and that's about all you get. 

TANMAI_GOPAL: So our entire core, the core of our entire open source project is in Haskell. 

CHARLES MAX_WOOD: Oh, cool. You want to give us just a brief intro to what Hasura is and then we can dive in cause this is going to be really interesting to talk about how you do 

TANMAI_GOPAL: So, Hasura, kind of the open source project today, kind of gives you instant real-time GraphQL APIs on top of a Postgres database. So, what you do is, Hasura is basically a metadata engine. So, there is a small DSL and some metadata that helps you map the data models to your API models, maybe do a few transformations, set up some authorization rules. And as soon as you kind of give that configuration to Hasura, Hasura basically sets up the entire API, the GraphQL API, and anything can use it: your front-end applications, other services can consume that API. We integrate really well with the eventing system as well. So this API is real-time. We recently did a benchmark that lets like a million people subscribe to events happening in Postgres and stuff like that. Short story, instant real-time GraphQL on Postgres, and a bunch of other stuff. The eventual idea is to help make data access secure and easy. And that's kind of where we're heading, but the open-source project today, we work mostly with Postgres. 
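For context, here is a minimal sketch of what consuming a Hasura-style instant GraphQL API from a front end can look like. The endpoint URL, the tracked "article" table, and the field names are hypothetical placeholders, not a specific project's schema.

```typescript
// Querying a Hasura-style GraphQL endpoint over plain HTTP POST.
// URL, table, and columns below are illustrative assumptions.
const HASURA_URL = "https://my-hasura-instance.example.com/v1/graphql";

async function fetchArticles(): Promise<unknown> {
  // Hasura generates root fields from the Postgres tables it tracks,
  // so a tracked "article" table can be queried directly.
  const query = `
    query RecentArticles {
      article(limit: 10, order_by: { published_at: desc }) {
        id
        title
        author { name }
      }
    }
  `;

  const res = await fetch(HASURA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.article;
}
```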

CHARLES MAX_WOOD: Cool. So you gave a talk on doing GraphQL in the cloud, cloud native stuff, and my experience putting together GraphQL, at least GraphQL servers is mostly on the backend. So, you know, I'll build some basically models into my application that tell GraphQL queries how to model the data out for the GraphQL API and then it sends it back. So I'm pretty used to the kind of the monolithic app version of this, but how do you do it on the cloud? I mean, that seems a little bit backward, I guess, to try and do it on like microservices or I guess you could do it in a Docker container, but you know, yeah, serverless and yeah, all that, just I'm kind of trying to wrap my head around it. 

TANMAI_GOPAL: Yep. Yep. That's kind of exactly the problem I think that most people are going through today. They're like, GraphQL is a really nice API for the consumer. It's not necessarily great for the people who are writing it as much, but the delta for the people who are using it is amazing. And that's mostly because of the tooling around it. You get like autocomplete and you get this whole API exploration and you get a little built-in API explorer just for your API. There's lots of client-side tools that do code gen. So, you know, for TypeScript or even Go or Java or Swift, you get like code gen for those API calls that you make, which is really nice. I think the artifact of it having come from a system like Facebook, I guess, is, and I'm not sure if this is the reason, but it's probably the reason, is that it's very biased towards working well with a monolith. And this is because GraphQL essentially, for the folks who are writing it, is basically a shared type system. It's kind of like a type system that describes your API, which means that all of your types need to be in the same code base, because otherwise, how will you build a coherent set of types, right? If they're in the same code base, then whatever tool that you have, like your build tool, can validate that the whole thing makes sense. And so if you create a type system that says I have authors and the author has a bunch of articles and the article has an author ID and has an author, whatever, right? Like whatever setup you make will make sense. This is at odds with the folks who want to do microservices and serverless functions, because the story there is very different. The story there is: we want to move really fast as a business. We couldn't care less about the best ways of writing code or whatever. So why don't we have the system where every single team or every single developer can just go in and write their own code and own their own code? And this is why people like microservices or serverless functions, where you have little teams, and these teams can own their entire code end to end. But when your API is split across multiple services, the problem is that now your types, the types of your APIs, are split across multiple services. So how do you build a GraphQL API that's coherent on top of these services? You're going to have to force the authors of the services to coordinate with each other, which kind of defeats the whole point of going towards microservices in the first place. 
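Tanmai's author/article example, written out as GraphQL SDL the way a monolith would keep it: every type lives in one schema, so a single build step can check that the cross-references agree. Field names here are illustrative.

```typescript
// A shared type system in one code base. If Author and Article lived in two
// separate microservices, neither service could validate Article.author or
// Author.articles on its own -- that's the coordination problem described above.
const typeDefs = /* GraphQL */ `
  type Author {
    id: ID!
    name: String!
    articles: [Article!]!
  }

  type Article {
    id: ID!
    title: String!
    authorId: ID!
    author: Author!
  }

  type Query {
    author(id: ID!): Author
    articles: [Article!]!
  }
`;

export default typeDefs;
```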

DAN_SHAPPIR: So if I can interrupt, basically you're saying that one of the bigger challenges is coming up with a coherent type system across all the microservices? 

TANMAI_GOPAL: Yes, that is one of the challenges. There are two other smaller challenges as well, but yes. 

DAN_SHAPPIR: Okay, thanks. 

TANMAI_GOPAL: The other kind of smaller challenges, also with the way GraphQL works, is, if you're building a GraphQL server, the amount of stuff that you have to do to actually productionize it, everything related to, you know, optimizing, because essentially what you're doing is you're building something that can parse and validate a query language, which means that all of the stuff that databases used to do, stuff like, you know, query plans, optimized query plans, memoize things, cache things, you know, you might want to maintain state across invocations of requests, especially like a query plan. Once you've figured out the query plan of where to fetch data from for this particular GraphQL query, you might want to cache that plan. Once you've seen a query, instead of having to validate that entire query, if you know this query matches a previous query that you've seen, you want to just use the plan that you've created from before. When you want to do stuff like that, you realize that you want to start storing state across invocations of requests. Now, this idea doesn't work well with, especially with serverless, but even with cloud-native stuff where services can go up and down really frequently. And as that happens, it becomes a little bit painful for you to think about, oh, I do want to maintain some state, but it's not easy to think about where I'm going to maintain that state. So one of the places where this shows up in a really big way is GraphQL subscriptions. So GraphQL subscriptions is this idea that when data changes on the backend, it's basically a small protocol on top of WebSockets. It's a small format on top of WebSockets that says, here's how we'll send that event from the backend to the frontend. And this is nice, but the problem is that you have to maintain a tremendous amount of state to know which event maps to which client, right? And in a backend system where services are going up and down, or God forbid you have a serverless function, your event might trigger a serverless function, but then you have to connect to that downstream WebSocket, right? And then deliver the event there. So there are these problems also with maintaining state, especially with subscriptions, which makes it hard to work with cloud native. 
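A minimal sketch of the "reuse the plan for a query you've seen before" idea. `parseAndPlan` is a stand-in for whatever parsing, validation, and planning a real GraphQL server does; the point is that the cache only pays off in a long-lived process, which is exactly what serverless makes hard.

```typescript
// In-process plan cache keyed by the raw query text.
type QueryPlan = { sql: string };

const planCache = new Map<string, QueryPlan>();

function parseAndPlan(query: string): QueryPlan {
  // Placeholder: a real server would parse, validate, and plan here.
  return { sql: `/* plan derived from */ ${query}` };
}

function getPlan(query: string): QueryPlan {
  const cached = planCache.get(query);
  if (cached) return cached; // known query shape: skip re-validating and re-planning

  const plan = parseAndPlan(query); // expensive path, done once per query shape
  planCache.set(query, plan);
  return plan;
}
```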

DAN_SHAPPIR: It seems to me, though, that learning to live without state is kind of like one of the general challenges in going serverless to begin with. So you're basically just saying it's like the same, only even more significant. 

TANMAI_GOPAL: Yes, that's exactly it. Especially because if you write a GraphQL server and you're writing it in a monolithic system, it's nice. The experience is easy. As soon as you start forcing yourself to rid yourself of state, you'll realize that it's even more nuanced than what it usually is. It's pretty nuanced anyway, but it gets even more cumbersome when you have a GraphQL API that you're writing. 

AJ_O’NEAL: So what GraphQL server do you use? Because all the ones I used don't deliver on the promises. They just implement the bare protocol to communicate to the front end. 

TANMAI_GOPAL: Well, I mean, considering that we've built a GraphQL server, I'm going to shamelessly plug: well, Hasura delivers on all of its promises. The way that we found around this was that obviously serverless, so serverless, for example, doesn't work in this case. What we do with our GraphQL server and the way we've designed the GraphQL server is that it's metadata-driven. So it's kind of like an Nginx config, where you have an nginx.conf, and then you have Nginx that runs. The metadata that is required to create the GraphQL server comes from this configuration file. Now, each instance of the server can leverage multiple cores and can exhibit a high amount of concurrency. And inside the server, we can do a lot of the things that we would want to do with keeping a little bit of state around across requests, you know, memoizing stuff, caching stuff, whatever. And now this unit is meant to scale one to N very easily, but we'll never get zero to N. We'll always get one to N. And the reason why we can't do zero to N is because of subscriptions, because we need at least one instance to be there for certain clients to have concurrent connections with the server. We've designed it in a way where it's a little more CPU bound than memory bound, which means that configuring auto-scale on it is fairly easy. Most of the cloud vendors, they make it really easy for you to have a container and to go to that container and say, hey, if you hit 60% CPU, auto-scale out. And if you drop below 40% CPU, auto-scale down. And this doesn't work well with memory, but this works well with a CPU-bound process. So that kind of configuration, that being the entry point to the GraphQL server, kind of has been a good middle ground. So it's kind of like a monolith that autoscales really well. And essentially, being able to autoscale is this whole idea of being able to exploit the elasticity that the cloud vendors give you. If it's not autoscaling, it's not cloud native. 
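A toy illustration of why a CPU-bound process is easy to autoscale: the policy boils down to a pair of thresholds. The 60%/40% numbers mirror the conversation; the scaler itself is whatever your cloud vendor provides, and this function is only a sketch of the decision it makes.

```typescript
// Threshold-based scaling decision, one-to-N (never zero-to-N, because at least
// one instance has to stay up to hold subscription connections).
interface ScalePolicy {
  scaleOutAbove: number; // CPU utilization fraction that triggers scale-out
  scaleInBelow: number;  // CPU utilization fraction that triggers scale-in
  min: number;           // floor on replica count
}

function desiredReplicas(current: number, cpuUtilization: number, p: ScalePolicy): number {
  if (cpuUtilization > p.scaleOutAbove) return current + 1;
  if (cpuUtilization < p.scaleInBelow) return Math.max(p.min, current - 1);
  return current;
}

const policy: ScalePolicy = { scaleOutAbove: 0.6, scaleInBelow: 0.4, min: 1 };
// desiredReplicas(3, 0.75, policy) => 4; desiredReplicas(3, 0.2, policy) => 2
```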

AJ_O’NEAL: You're saying two things that I don't understand. So you're saying monolith, but I don't think you're using it in the way that I would use it. And then you're saying cloud native and you're kind of juxtaposing these as if they're, they're opposite. Can you, can you explain that for a bit? 

TANMAI_GOPAL: Yeah, absolutely. So, no, you're totally right. Like, monolith has nothing to do with how you deploy it, right? That's absolutely true. What I mean here more is, when I use the word cloud-native, what I actually mean is microservices, which is: how do you help people be independent and write code inside independent silos, right? And the second thing that I'm talking about is deployment. So if I back up a little bit, this is one of the architectures that's emerged, right? Whether this is right or wrong, I mean, time will tell. But here's a solution that works. What we end up doing is we have a metadata-based engine. This metadata-based engine has the properties that I talked about, which is it scales really well, one to N. You can chuck it, give it a CPU metric, and it'll work. That's fine. This metadata has a language that allows you to bring in types from multiple upstream microservices and create a coherent GraphQL API on top of them. So let's say, for example, I have a GraphQL API, I have a Swagger spec, I have some other typed API information. I can use the metadata language to kind of bring them together and create semantic kind of relationships across these types. 

AJ_O’NEAL: So is this something where in my other service, I would be exposing an endpoint that gives back this typed content, or would I just be plopping that in as JSON to the metadata tool directly? Or is it either or?

TANMAI_GOPAL: It's either or. So if you have an API that exposes types, for example, let's say your microservice is a GraphQL service or your microservice has maybe a Swagger spec or it's gRPC, in which case we can infer types directly. The metadata tool can connect to those types directly and can infer the types that your different API endpoints have. But in case you don't have type information, you have to provide that annotation as the YAML or JSON annotation that the metadata tool will consume. 

DAN_SHAPPIR: So maybe I missed something. I need to better understand. So your service is kind of a middleman between the microservices and the consumer, kind of putting a GraphQL face on the services provided by the microservices? Or is GraphQL deployed within the microservices themselves? 

TANMAI_GOPAL: No, the former, where we behave more like a middleware, kind of like a gateway component that helps you bring these together. 

DAN_SHAPPIR: So the microservices themselves would still use some sort of, I don't know, let's say a RESTful API, and you'd be converting these RESTful APIs coming in from a multitude of microservices into a single coherent GraphQL interface? 

TANMAI_GOPAL: Correct, correct, correct, correct.

DAN_SHAPPIR: And because of the way that your server is, or service is implemented, you'll be able to scale up and scale out with the cloud-based microservices. 

TANMAI_GOPAL: Exactly, exactly. That's the idea. 

AJ_O’NEAL: So how do you deal, and maybe your software doesn't do anything with this, it's up to implementation, but I've noticed a lot of the GraphQL servers don't handle things that would be really, really, really, really efficient in a normal database. They make them into N-squared queries. So where, you know, you have an object that, um, let's say you have a person and they have books and then books have pages and you want to say, give me all the pages of this person that have the title "soup". And, you know, if you were to do this in SQL, it's super fast, super simple. I mean, you don't need a degree in SQL to be able to put something together like that. You're doing two joins and a where clause. It's easy. But then I actually worked with a GraphQL implementation where you did something like that and you either had to hack around the framework or you just had to deal with the fact that it was going to make 152 requests. 

TANMAI_GOPAL: Yep. Absolutely. Because you're making one request to fetch the book, and if the book has a hundred pages, you're making like a hundred requests to fetch each of the pages for the books, right? Yeah, that's true. So like I said, our metadata engine works with services, but what we work really well with is if that service is actually a database. For example, if you point Hasura to a Postgres database, we'll read all the catalog objects, and the same metadata framework will give you a way to annotate those metadata, those models that you have, so that you can expose them in a slightly more coherent way for the end consumer. Then what Hasura actually does is behave kind of like a compiler, a transpiler, right? You make a GraphQL query, and then Hasura converts that to an internal kind of AST, and then annotates that with, you know, whatever information was there in your metadata, and then renders that into a single SQL query, and then does a bunch of the database stuff that you would do, like, you know, preparing the statements, making sure that the JSON aggregations are pushed down so that you're avoiding a serialize-deserialize, and then runs that query on the database. So that's an approach that we've taken. It's an approach that a bunch of other folks are also planning to take, or already take. Like for example, I know PostGraphile, they exploit lateral joins. Sorry, they exploit, I'm not sure if they do it with lateral joins or with window functions, but I'll have to check. But they exploit a construct that PostgreSQL exposes really well to basically be able to construct a nested graph, to be able to construct the SQL query as if they were doing a depth-first search of the GraphQL AST, and then construct that entire query and then run that, and Postgres can run that as an efficient join, as a SQL query. We take a slightly different approach, but yeah, this kind of AST, like you basically traverse the entire AST, add your own kind of rules in the middle, and then run a single query. That's the only approach that would actually work. And other things that you would do would result in exactly what you said, you know, like these N plus one queries or N squared queries, if you have more levels of nesting and stuff like that. 
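AJ's person/books/pages example, side by side: the kind of GraphQL a client might send, and the kind of single joined SQL a compiler-style server could emit instead of N+1 round trips. The filter syntax, table names, and columns are made up for illustration, not any server's exact output.

```typescript
// The query shape the client writes...
const graphqlQuery = /* GraphQL */ `
  query PagesTitledSoup($personId: Int!) {
    person(id: $personId) {
      books {
        pages(where: { title: { _eq: "soup" } }) {
          id
          title
        }
      }
    }
  }
`;

// ...and one SQL statement it could compile down to: two joins and a where
// clause, executed once, instead of a request per book and per page.
const compiledSql = /* SQL */ `
  SELECT pages.id, pages.title
  FROM person
  JOIN books ON books.person_id = person.id
  JOIN pages ON pages.book_id = books.id
  WHERE person.id = $1
    AND pages.title = 'soup';
`;
```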

AJ_O’NEAL: Yeah. So I think, uh, PostGraphile and Join Monster, I think that's the name of the other one, they do kind of this query planning. To me, it sounds really cool. You know, you're basically taking a JSON object and then kind of doing the same set theory type of stuff on it that you do with SQL, but it's probably a little bit nicer to work with than SQL, and then you export it out to SQL and you do the query, and you get all of the user-friendliness of, you know, an object-style interface, but you get all of the efficiency of a SQL query, and it's kind of the best of both worlds. 

TANMAI_GOPAL: Yep, yep, exactly. The only other thing that I think we've done that is a little bit different is, are you familiar with Postgres's row-level security, RLS? 

AJ_O’NEAL: So I am a little bit. I don't think that I would recommend using it to the average person. If you have like a Postgres guru on hand and Postgres is one of your strong skill sets as a business, I think it'd be a phenomenal tool to use. It can have some unintended unexpected consequences if you're just kind of you know someone that came from MySQL or MSSQL whatever. 

TANMAI_GOPAL: And what we've done, and we did this a while back, we did this just before Postgres released RLS, is we kind of invented similar semantics to RLS, but inside the application layer, inside Hasura itself. 

AJ_O’NEAL: To be honest, I think that logic is probably better served there for most people in the application layer. 

TANMAI_GOPAL: Correct, correct. Especially because now you can also connect those authorization rules to arbitrary IO things, right? Like you could fetch session information, you could make an IO call, like an HTTP call or whatever, to fetch that information. You can layer on arbitrary complexity or arbitrary integrations, and that's much easier to do at the application layer than having to push that down to Postgres. And that's kind of one of the things that our metadata does, which is why this approach of going from GraphQL to SQL actually makes sense, because you can basically say query profile, ID, name, and the SQL that is generated is select ID, name from profile where ID equals cookie.userID. 
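A minimal sketch of that application-layer, RLS-like idea: a declarative permission rule that the engine folds into the generated SQL as a WHERE clause tied to the caller's session. The rule format and session variable name here are invented for illustration; this is not Hasura's actual permission syntax.

```typescript
// Declarative select permission for a table, resolved against session data.
interface SelectPermission {
  columns: string[];
  filter: { column: string; equalsSessionVar: string };
}

const profilePermission: SelectPermission = {
  columns: ["id", "name"],
  filter: { column: "id", equalsSessionVar: "x-user-id" },
};

function compileSelect(
  table: string,
  p: SelectPermission,
  session: Record<string, string>
): string {
  // "query { profile { id name } }" becomes roughly the statement below.
  // A real implementation would use parameterized queries, never string
  // interpolation, to avoid SQL injection.
  return (
    `SELECT ${p.columns.join(", ")} FROM ${table} ` +
    `WHERE ${p.filter.column} = '${session[p.filter.equalsSessionVar]}'`
  );
}

// compileSelect("profile", profilePermission, { "x-user-id": "42" })
//   => "SELECT id, name FROM profile WHERE id = '42'"
```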

DAN_SHAPPIR: Maybe it's obvious to you or gurus, but you slightly lost me at how I go through the process of mapping a collection of serverless, RESTful APIs that, say, I have. I don't know, let's say I have 10 microservices and I want to merge all of these together into a single GraphQL interface. What would be the process that I go through? 

TANMAI_GOPAL: So here's what people do and here's kind of what we do, right? The common, the different approaches. One of the approaches is, let's say your different microservices have a Swagger spec. So now what you do is you build an engine and you consume the different Swagger specifications of these different microservices. You then add your own metadata. Let's say these are rules that say what the relationships between those different microservices are. Let's say you have a profile service that gives you a customer ID, name, email. And then let's say you have an accounting service that gives you the invoices for particular customers. So the API that you are getting from the accounting service was something like: you got an invoice endpoint that's parameterized by customer ID. I can fetch invoices for a particular customer ID. Invoices, question mark, customer ID equal to one will give me the invoices for this particular customer. Now what I can do at my metadata engine level is say that the profile service and the accounting service, the customer object from the profile service and the invoice object from the accounting service, they have a relationship. The customer.id can be mapped to the invoice.customerId. The invoice.customerId is a parameter that can come in from the customer.id's value. You're constructing a join. You're basically constructing a join across these two APIs. You have this metadata language that helps you specify how certain fields map to certain parameters and fields of other types. This specification rule can be used as input for your metadata engine. This can be code, this can be YAML, whatever, to actually figure out that when a GraphQL query comes in that says, I want a customer and I want this customer's invoices, like the GraphQL query, you can actually figure out what you're supposed to do. You'll go make a query to the customer API, you might go make a query to the accounting API, you might make a batch call to the accounting API, then you'll do the application-level join and then return the data. Does that make sense?
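Tanmai's profile/accounting example as a hand-rolled resolver, to make the "application-level join" concrete: fetch the customer, fetch that customer's invoices, and nest the result. The URLs and response shapes are hypothetical; in the metadata-driven approach he describes, this logic is generated from the declared relationship rather than written by hand.

```typescript
interface Customer { id: string; name: string; email: string }
interface Invoice { id: string; customerId: string; amount: number }

async function customerWithInvoices(customerId: string) {
  // Profile service: the "customer" side of the relationship.
  const customer: Customer = await (
    await fetch(`https://profile.internal/customers/${customerId}`)
  ).json();

  // Accounting service, parameterized by customer ID:
  // the relationship declared in metadata is customer.id -> invoice.customerId.
  const invoices: Invoice[] = await (
    await fetch(`https://accounting.internal/invoices?customerId=${customer.id}`)
  ).json();

  // Application-level join: nest the invoices under the customer.
  return { ...customer, invoices };
}
```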

DAN_SHAPPIR: Yes, it does. So I have two questions about that. Let's start with the easy, with the simpler one. Suppose I need to do some sort of silly, simple transformation. I don't know. Let's say that one microservice provides the ID in lowercase, but the other service wants the ID in uppercase, because that's the way it was written. Is that something that could be easily defined within your service to handle that sort of a thing?

TANMAI_GOPAL: In our case, yes, we offer a layer of aliasing, but this is a rabbit hole. There will be like, this is a declarative language, right? So there are some things that we can do, and there'll obviously be some things that we can't. There'll be some things that you just have to do in code, some kinds of transformations that you might want to just do in code, in which case you might want to just write your own engine. 

DAN_SHAPPIR: I understand. And the second, I think, is kind of related to something AJ said before: it seems to me that unless... Well, maybe you can solve this with really sophisticated caching, but unless you're careful, a really simple GraphQL query can accidentally translate into a whole lot of backend queries to the various microservices. 

TANMAI_GOPAL: Yeah, that's very true. That is, and that's one of the big reasons why it's painful to build a GraphQL API. So there are a few reasons why this is not practically as much of a problem. There are usually two kinds of people who are using GraphQL, right? People who are building products and apps, or people who are providing an API as a service. Let's say, for example, I'm actually building a front-end app, like my product at the end of the day is not an API, my product at the end of the day is an app. When the product is an app, this becomes easier. This is not as much of a risk, because during my CI/CD, during one of the kind of staging processes that we'll have, what we'll do is we'll make sure that we extract all of the queries that the app is actually making. And then in the production version of the GraphQL server, we lock it down to allow only those GraphQL queries that we have seen before to actually execute in production. So if you log into Facebook and you go to the console in your browser, you can't actually make arbitrary GraphQL queries, even though their entire API is GraphQL. You can only make those GraphQL queries that they have seen in their web app or mobile app before, when they built it and then they went through the release process. So in that case, you know what the performance impact is going to be on your system beforehand, right? So you're not going to see something extremely surprising in production. This, however, is a huge problem for people who provide a GraphQL API publicly, which, by the way, I don't know if this was actually the intent of what GraphQL was ever supposed to do when it was open-sourced by Facebook, because for them, the API was for a product. But this is, for example, the case with GitHub's API. GitHub's GraphQL API is a public API, which means that they have to do some kind of rate limiting. They have to do some kind of, like, you know, they need to have like a statement timeout on query execution time, right? To say that if a query is taking longer than 15 seconds to execute, time it out. But even the timeout has to be intelligent, right? You can't do a naive timeout. It's not like you can time out the HTTP connection with the consumer, but the backend servers are all still running, trying to like do things and process data, but actually, I mean, you've still ruined your backend. So you have to do a lot more to safeguard a public GraphQL API, but it does become easier for a private GraphQL API, if that makes sense. 
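A sketch of the allow-list idea described above: queries extracted from the app during a staging/CI step become the only queries production will execute. The normalization here is deliberately naive (whitespace only); real tooling would typically compare parsed query ASTs or hashes of persisted queries.

```typescript
// Queries harvested from the app build -- in practice these would come from a
// CI step that scans the client code or records traffic in staging.
const extractedQueries = [
  `query RecentArticles { article(limit: 10) { id title } }`,
];

const normalize = (q: string) => q.replace(/\s+/g, " ").trim();

const allowList = new Set(extractedQueries.map(normalize));

// In production, reject anything the release process has never seen.
function isAllowed(incomingQuery: string): boolean {
  return allowList.has(normalize(incomingQuery));
}
```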

AJ_O’NEAL: So is Hasura more focused on the public use case with limited queries that you pre-specify and say, these are the queries we allow, or is it more focused on the private use case where you just have whatever generic query and it's just going to do its thing? 

TANMAI_GOPAL: So actually both, but I think we support the private use case, where you can lock down the queries, better today. Although we are now building stuff like the rate limiting kind of rules, the caching kind of configuration, the timeouts. So those kinds of things we are now building at a per-model, per-query level, so that can also be dynamic. And this will work for public APIs. And we did this mostly because a lot of our users actually do end up using Hasura as a way to distribute their data API to internal or external consumers. So it does end up becoming a problem that we need to solve. We have a few basic solutions in place, but we're working on making that richer, specifically with rate limiting and caching. But we already support whitelisting or allow-listing for the private use case, where you know that you're only going to make a finite set of queries in production.

AJ_O’NEAL: I think of this PostGraphile that I was using, and the way in it, it has kind of a metadata language as well. And the way that you use it is, as opposed to many of the GraphQL backends that are not taking advantage of the optimizations in a very simple and convenient database like Postgres, it gives extremely predictable connections. So between objects of data, it's very, very predictable what the names of the connections are gonna be, how you're going to traverse it. However, the trade-off is it's a little more verbose. If I were going to write it by hand specifically for a particular front-end customer, then I would make it different than the kind of machine-generated one that I get from that. But it's like, pick your poison, because I kind of prefer the PostGraphile way, because it's predictable and it's guaranteed. So with Hasura, do you find a middle ground there? Do you lean towards customization? Do you lean towards machine-generated? 

TANMAI_GOPAL: So, well, we started off machine generated. And then, you know, exactly like what you said, we had a few cases where people were starting to point Hasura at their legacy databases. And these were like a few enterprise folks' ancient databases, built by different database people over time. So, you know, some people had like patientId with camel case and some people had patient_id. So there was all kinds of crap happening everywhere. And so what Hasura does now is we support two layers. One is an aliasing kind of layer that allows you to rename tables and columns and fields and functions as they're exposed by Postgres, to just kind of rename that into something that's better. And the other is a computed column. We support computed columns. It's similar to what has been added in Postgres 12, but it's a little different. You can exploit some more application-layer stuff, but essentially you can add computed columns so you can extend a type that you get from a data model, and then you can point that to something that you derive. So rather than having to traverse maybe the machine-generated models of like A.B.C, maybe you can collapse that to just A.C. And it makes a bunch of those use cases easier. So with a combination of this, with views, materialized views, functions, and a combination of this computed column in the metadata layer, it ends up being fairly good for the consumer. So it's kind of a middle ground, I guess. But going back to your previous point, one of the things that I've wondered about, and this is a question to you folks, I guess, is I've been struggling with the idea of, why is it a bad idea to expose the data models to the front-end folks directly? It's good for them to have the transparency to see, oh, well, here are the data models. I mean, back-end developers get the right to be able to look at their data models and make queries. Why shouldn't other application developers get that same privilege of seeing what the actual data models are and then having to deal with it? 
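For illustration, here is a sketch of the two customization layers Tanmai mentions, expressed as a metadata object. This is not Hasura's actual configuration schema; the table, column names, and the derivation function are assumptions made up for the example.

```typescript
// Aliasing renames what Postgres exposes; a computed field extends the
// generated type with a derived value.
const tableCustomization = {
  table: "patient_record",
  alias: "patient", // expose patient_record as "patient" in the API
  columnAliases: {
    patientID: "id",     // smooth over camelCase vs snake_case legacy naming
    patient_name: "name",
  },
  computedFields: [
    {
      name: "full_address",
      // Points at a SQL function (or an application-layer derivation) that
      // takes a row and returns the derived value, collapsing A.B.C into A.C.
      derivedFrom: "patient_full_address(patient_record)",
    },
  ],
};

export default tableCustomization;
```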

DAN_SHAPPIR: I think that the reason is that maybe the backend people are worried that the frontend people will see how incoherent the data models are. 

AJ_O’NEAL: I think that always happens anyway as a byproduct of an API. 

STEVE_EDWARDS: Tanmai, I have to agree with you. Coming from, my experience was Drupal, and I was pretty much a backend guy. And now that I'm working with Vue on the frontend, I still want to be able to query my data. I'm one of those people that I want to know the details of how everything's laid out. Partially because once I know how the data model is, maybe I can figure out a more efficient way of doing something or making a query that maybe somebody else hasn't figured out before. Especially with the experience I've got in the back end. Now everybody's obviously not in that boat, but I tend to agree with you that if I'm developing and I'm querying an API, I want to see how it's put together, because that helps me make better queries when I need to make them. 

TANMAI_GOPAL: Yep.

 

This episode is sponsored by Sentry.io. Recently, I came across a great tool for tracking and monitoring problems in my apps. Then I asked them if they wanted to sponsor the show and allow me to share my experience with you. Sentry provides a terrific interface for keeping track of what's going on with my app. It also tracks releases so I can tell if what I deployed makes things better or worse. They give you full stack traces and as much information as possible about the situation when the error occurred to help you track down the errors. Plus, one thing I love, you can customize the context provided by Sentry. So if you're looking for specific information about the request, you can provide it. It automatically scrubs passwords and secure information, and you can customize the scrubbing as well. Finally, it has a user feedback system built in that you can use to get information from your users. Oh, and I also love that they support open source to the point where they actually open source Sentry. If you want to self-host it, use the code devchat at sentry.io to get two months free on Sentry small plan. That's code devchat at sentry.io. 

 

AJ_O’NEAL: So I, I like the idea of, I don't know. I mean, GraphQL kind of does this. Like I'm not a big fan of GraphQL in general, because I think it takes something that was relatively simple and makes it overly complicated. Well, it does, because of what they're kind of, what they're doing. So what I'm noticing, I'm finally starting to get it. I've been really frustrated by all these changes in web platforms and stuff. And I'm finally starting to get it. 90% of the web is now junior developers, because we've had 20% year-over-year growth for over five years now. So most of the web is junior developers. And so what these companies are really trying to do, it seems to me, is they're trying to make it easiest for the lowest common denominator developer to be able to use a tool. And you have something like GraphQL, and your lowest common denominator in terms of like your programming experience, like hardcore programming experience, is gonna be front-end people. It's just like, that's the way that it is. And so they're trying to make it really, really, really dumbed down. So everything's auto-complete and blah, blah, blah. At the expense of, we're going to make the backend really complicated, but it's okay, cause backend people can handle it. They've got more experience. They've done this longer. They're not churning as heavily. And then the whole cloud movement of, like, and not only that, we're going to let the super experienced people just make a hundred X on the backend, right? So we've got kind of this weird slant where we're trying to make tools too simple on the front end, irregardless of the costs on the back end. But with that, if you look at the data model on the back end, like your, your public API shouldn't necessarily, I mean, like you may not be a perfect developer, like for example, I'm not. And so if you're exposing lots of information about your data model to the front end in a kind of unfettered way, then you're also opening your attack vector, where somebody can look at your data model and start to make guesses about what your code looks like and where vulnerabilities might be. And I can see that as a real potential problem, probably not so much in the GraphQL case, but then again, kind of, because you still have complex things like metadata around row-level security like you were talking about before. Like somebody might come in and use that metadata tool and think, oh yeah, I put row-level security on this, like I'm super secure, it's super cool. But they only do it in one direction. Like you get that security when you come at the object through a person object. But if you come at that object the opposite way, through a pages object, you know, if you're coming through person to get to book, it's protected. But if you're going through pages to book, maybe it's not protected. So you can have a lot of stuff like that, where, or where it doesn't easily map to the way that you put it on the front. And I think, I don't think that it's that back-end people want to keep things secret, but rather it's A, complex to expose everything in a coherent way, and B, complex to expose everything in a coherent way that's secure and keeps private information private. 

TANMAI_GOPAL: I understand your sentiment. I think especially because, like, there is also a tooling, like the tooling is not good enough, or the tooling is still evolving around solving those problems, right? The reason why this is not a problem, I mean, at a very, very abstract level, the reason why this is not a problem with REST or whatever is because you had so many tools to kind of help you build systems that were easier. And even if you, I think the sticky point here is the bit about, like, exposing excess complexity to the front-end developer, right? That's the piece that... That's the piece that I'm interested in, because I think you're right that there's a newer generation of developers who've come in, who really need to build front-end applications, and things need to be simple for them. But I also think part of the reason why a lot of people are moving towards some of these modern tools, GraphQL being one of them, is because you want to make people more independent. So it's that same idea of saying, hey, you know, here's your Vue app, here's the API. Don't ever bother me, because the data model is ready, the data is there, figure it out, build something. I don't want to build this middle translation layer just so that you can have like a secure and a stateless API. Why don't you just go ahead and do it yourself? And I want the tool to give me a guarantee that it is going to be stateless and secure. So it's the responsibility of the tool to level up to make sure that it actually gives what it promises. But once it does, you know, go ahead and do whatever you want. And maybe this is similar to the movement that we're seeing with the Jamstack ecosystem, right? Where every time somebody says something about Jamstack, a whole bunch of, like, pro back-end developers just go like, ha ha ha, what is even Jamstack? 

AJ_O’NEAL: I'm super pro back-end, but I'm super pro Jamstack. 

TANMAI_GOPAL: Yeah, exactly. Right. So it's a similar movement, I think, maybe. 

AJ_O’NEAL: It really, Jamstack to me is just about eliminating rendering and using APIs instead. Cause you're using APIs for comments, you're using APIs for whatever. You're just saying, I'm going to build the front end. So it's just a front end and I'm gonna let all the data be handled by APIs rather than inserting a bunch of rendering stuff. That's what it means to me. 

TANMAI_GOPAL: Yeah, at a technical level. And at a team level, it means that the front end people never have to bother the back end people when they want to make changes and ship newer versions of their apps, right? 

AJ_O’NEAL: They just have to when they need different data properties that weren't there before, which is partly what GraphQL solves. I mean, you're saying that on the private side, this is what it solves and you don't recommend it for the public side, but the way people are using it in the wild, people are using GraphQL because they don't care about performance necessarily. I mean, like GraphQL promises all this performance if the backend server is actually coded that way. 

TANMAI_GOPAL: Exactly. 

AJ_O’NEAL: But when people use it, they're not using it for performance. You know, I see people do terribly inefficient things in GraphQL. What they're using it for is convenience because it has an autocomplete tool. 

DAN_SHAPPIR: I think it's actually even beyond convenience, in the sense that it kind of opens up possibilities. Because when you're looking at the complete data model as a whole, it really brings into focus a lot of the stuff that you can do with it, whereas when it's just a collection of disjointed microservices, you might never actually see the entire picture. You might not even be aware that a certain microservice even exists. So being able to see this entire data as a whole certainly opens up a whole lot of possibilities and options with regard to the types of applications and services that you can literally build on top of it. 

TANMAI_GOPAL: It reduces, at least, the amount of effort that you would have to put in to discover those data models, right? 

AJ_O’NEAL: The code is the documentation in the perfect machine-generated case. 

TANMAI_GOPAL: Yep. 

AJ_O’NEAL: The code itself generates what you need to be able to discover. 

DAN_SHAPPIR: And that's also the challenge because at the end of the day, you have this centralized repository of all knowledge and that centralized repository needs to be kept up to date and in sync with all the various microservices which have been kind of independent up to that point. 

AJ_O’NEAL: And if everything is in GraphQL in a perfect world where everything's in GraphQL and you update your microservice in GraphQL like the hypothetical unicorn case is that GraphQL gives you that because every GraphQL instance can communicate with every other GraphQL instance and the one that's the most downstream to the consumer will always see all available information. 

TANMAI_GOPAL: Yep. That's the dream. 

AJ_O’NEAL: But we had that dream with REST too. It just didn't pan out there either. 

TANMAI_GOPAL: I think part of the other reason behind why GraphQL is also really nice is, in a sense, I think it's the same as like SOAP and REST, where it was not really that REST is more powerful or anything, right? Like, if anything, I think maybe SOAP has more power. But it's just that REST was just an easier API to use for human beings. 

AJ_O’NEAL: And it's funny because GraphQL looks exactly like SOAP, except with JSON instead of XML. It's like a single endpoint, all POST, non-debuggable, unless you have a GraphQL explorer.

DAN_SHAPPIR: Yeah, I just wanted to mention that there's also, what's it called? WSDL. So yeah, so if you bring both of these together, these are essentially very similar. Just like you said, it's that once it was XML based, but XML is bad and JSON is good, unless you're doing JSX, in which case XML is good again. But other than that, yeah, it kind of feels like sort of back to the future in a lot of ways.

TANMAI_GOPAL: Yeah, yeah. And I think it's always that. It's always like the cycles of people just figuring out what's convenient today. 

AJ_O’NEAL: No one wants the middle ground. Everybody wants the extremes. Either you got to go all the way on one side or you got to go all the way on the other side. No one can just say, you know what, it's good in the middle. 

TANMAI_GOPAL: We need something to do as human beings and developers. We need to create our own problems. I mean, by this time we should have been... Why are we all not focusing on how we build free energy and how we move to Mars forever? But no, we're busy building applications. 

AJ_O’NEAL: There's, you know, little YouTube videos already on free energy. All you do is hook the one end of the battery back up to the other with a motor. Perpetual energy, it works. 

TANMAI_GOPAL: Yes, I mean, we're creating and solving our own problems in a way. There is a genuine need for making things simpler, right? And I'm not against making things simpler. I think a lot of progress has just been making things simpler. So yeah, you're right that there are lots of complexities and trade-offs, and in a lot of cases, the early adopters are pushing that complexity to people who did not need to ever deal with them. 

AJ_O’NEAL: Again, like with, you know, the junior turnover, the early adopters are the people that are all about what's new and what's cool and discovering these things. And it looks so simple because it's marketed by the cloud people, because they're making big bucks. They're going to make top dollar on this. "Yeah, this is what you should do. You've got to do it this way because you're going to get 10 billion users." And you have to scale no matter the cost, no matter the complexity. And on the front end, it's like, oh, look, pretty pictures, an API, three data models. Woohoo! 

TANMAI: That's true. I mean, the pendulum. The pendulum swings. 

AJ_O’NEAL: It does. What's your biggest, and I hesitate to say the word advantage, but when you look at the other things out there, and this could be business value, it could be technical, what's the biggest reason that I'd want to use Hasura instead of PostGraphile or Join Monster? What is your biggest differentiator, where it's like, well, we specialize in this, and so we just do this differently because this is what we believe? 

TANMAI: That's a good question. I think for us, internally, the system has been designed in a way, and we have already started moving in directions, where you can start integrating arbitrary data sources. And there are two pieces of technology that are critical to this. One is what we call the authorization grammar, which is basically our version of RLS, but one that can extend to any GraphQL system of types and relationships. We can extend our RLS system to that, which gives it a lot of power, and I'll give you an example in a bit. The other is what we call the data wrapper, which is basically the part of the compiler that compiles from the GraphQL metadata AST to the actual SQL AST. And that part is also a generic interface. So anybody can come in and plug in a Haskell, unfortunately Haskell for now, module that has that interface and implement a backend compiler for a native query language, which can be anything. You could even compile, quote unquote, or transpile that out to the Mongo query language or to KSQL with Kafka or whatever. Those are the two critical portions of tech that it's built on. This allows us to start plugging in other sources, including sources like Swagger or gRPC when it comes to stitching microservices together. One of the biggest advantages is that you point it at your database, you point it at the Stripe API, and this is a demo, for example, that already works. Then you can just start joining between Stripe and your database and fetching related data in a way that... 

AJ_O’NEAL: As in the payment processor? 

TANMAI: Yep. 

AJ_O’NEAL: Okay. 

TANMAI: And you can start fetching customer, or customer.savedCards, customer.billingHistory, and all of this is safe to do for the front-end developers. So that kind of power is possible. Philosophically, the difference between Hasura and things like PostGraphile and Join Monster is that we want to be a configuration-driven, kind of like managed, component, whereas PostGraphile intends to be more like a library. Join Monster is already a library, and PostGraphile can also be a library that you use in your own system for the data layer, kind of like a better ORM in a sense. So I think that's the philosophical difference: we want to be the infrastructure component that you just fire up, give some configuration, and forget about, and then it works. Those are the differences. Does that make sense? 
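
To make the "data wrapper" idea above a bit more concrete, here is a minimal Haskell sketch of what a pluggable backend interface could look like: a typeclass whose instances compile a toy GraphQL selection into their own native query language and then execute it. The names here (Selection, DataWrapper, SqlBackend, NativeQuery) are invented for illustration and are not Hasura's actual internal API; this only shows the general shape of "compile a selection to a native query, then run it."

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A hypothetical, simplified "data wrapper" interface: each backend
-- compiles a toy GraphQL selection into its own native query language
-- and knows how to execute that query.

import Data.List (intercalate)

-- Toy GraphQL-ish selection: a root field plus selected sub-fields.
data Selection = Selection
  { rootField :: String
  , subFields :: [String]
  }

-- The pluggable interface: a backend chooses its native query type
-- and provides a compiler and an executor for it.
class DataWrapper backend where
  type NativeQuery backend
  compileQuery :: backend -> Selection -> NativeQuery backend
  executeQuery :: backend -> NativeQuery backend -> IO String

-- A toy SQL backend: the "native query" is just a SELECT statement.
data SqlBackend = SqlBackend { tablePrefix :: String }

instance DataWrapper SqlBackend where
  type NativeQuery SqlBackend = String
  compileQuery be (Selection root cols) =
    "SELECT " ++ intercalate ", " cols
      ++ " FROM " ++ tablePrefix be ++ root
  executeQuery _ sql = pure ("pretend result rows for: " ++ sql)

main :: IO ()
main = do
  let be  = SqlBackend { tablePrefix = "public." }
      sel = Selection "customer" ["id", "name", "billing_history"]
  executeQuery be (compileQuery be sel) >>= putStrLn
```

Under this framing, a Mongo or KSQL backend would simply be another instance of the same class, which is the sense in which the compiler core can stay generic while the targets vary.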

AJ_O’NEAL: If I understood correctly, let me regurgitate what I think I heard here. So you've focused on integrating with something that's convenient and performant out of the box, like Postgres, while making it easy to pull in other sources as well that are more microservice-based, and having a consistent authorization layer that works across the database source as well as the microservice sources. And one of the bigger pieces of business value you bring to the table is having a graphical web interface to do your metadata definition of what maps to what and what connects where. 

TANMAI: Correct. And that entire metadata definition can be done with, like, a YAML file. Like... 

AJ_O’NEAL: Yeah, but something that you have that I don't think the others have is a nifty UI; the other ones, I think, are just pure JSON, pure YAML. And you've put a lot of effort into your UI to make it so you can... And I'm imagining the UI isn't powerful enough to do everything that you can do in the YAML file, but it gets you started. 

TANMAI: Yeah, exactly. Especially when you want to start doing more macro-driven or programmatic stuff, where there's a lot of repetition, you can just move to the YAML file and it's much easier. 
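
As a small illustration of that "programmatic metadata" point, the sketch below generates repetitive table-tracking entries in Haskell with the aeson library and prints them as JSON (the same data could just as well be emitted as YAML). The metadata shape used here, a table name plus a toy select permission, is hypothetical and is not the exact Hasura metadata format.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch: generate repetitive, declarative metadata programmatically
-- instead of writing it by hand. The shape below is hypothetical and
-- only illustrates the "metadata as data" idea; it is not Hasura's
-- real metadata format.

import Data.Aeson (Value, encode, object, (.=))
import qualified Data.ByteString.Lazy.Char8 as BL

-- One entry per tracked table, with a toy row-level select permission.
trackTable :: String -> Value
trackTable name = object
  [ "table" .= name
  , "select_permission" .= object
      [ "role"   .= ("user" :: String)
      , "filter" .= object [ "owner_id" .= ("X-Hasura-User-Id" :: String) ]
      ]
  ]

main :: IO ()
main = do
  let tables   = ["customers", "orders", "invoices"] :: [String]
      metadata = object [ "tables" .= map trackTable tables ]
  BL.putStrLn (encode metadata)
```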

AJ_O’NEAL: And yours is written in Haskell. 

TANMAI: That's true. That's a big... That's the difference of the century. 

AJ_O’NEAL: You said "unfortunately" earlier, so... Is this a benefit or is this an unfortunate... What is that? What does that mean? 

TANMAI: Haskell has been an amazing gift to us. We wrote the first version of this data layer with a JSON API over Mongo, maybe five years ago in 2014, and then we shifted to Postgres in 2015. And it was still our JSON API. 

AJ_O’NEAL: Good choice. 

TANMAI: And it got support for GraphQL just a year ago. That's kind of been the evolution. The biggest benefit of Haskell has been that, amongst the garbage-collected languages, Haskell is best in class for writing two types of systems: compilers, and things that need a good amount of single-machine concurrency. For both of these, Haskell is best in class amongst the garbage-collected languages, in terms of the tools that the programming language gives you out of the box. 

AJ_O’NEAL: Well, I would imagine the kind of stuff you're doing with all this set theory and relationships. I would imagine Haskell being a functional language lends itself to set theory type problems very well. 

TANMAI: Especially the syntax tree transformation type problems, the set theory type problems. It's a very nice tool to have. It guarantees you a lot of safety out of the box. You're not writing a lot of dumb tests; you're writing the more algorithmic tests. That's good. And the other part that is also very different about what we do from other folks is subscriptions. So you can have stuff that changes in your Postgres database, and we will pipe the relevant event to front-end clients. And this surprisingly scales as well. We did a blog post benchmarking that for a million events that happen in Postgres, with a million different applications connected, each to its own order. So a million different orders are getting updated, everybody's getting those updates, and the system just runs. You just horizontally scale us out, you have a single-tenant Postgres, and everything just works. And programming that kind of stuff, which requires a degree of concurrency, has also been very easy with Haskell because of the concurrency constructs that the language has. There's software transactional memory, STM; there are a lot of data structures and programming constructs that make it very easy to do concurrent stuff safely, right? And not deadlock yourself in the face, or not have race conditions. I think Haskell has been really, really good for that. But of course, one of the things with Haskell is that you don't get a lot of the joy of the open source community where people are kind of doing things, you know. 
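
As a generic illustration of the STM point, here is a tiny example in plain Haskell (not Hasura's subscription code; it assumes the stm and async packages are available): many threads update a shared counter through atomic transactions, with no explicit locks and no lost updates.

```haskell
-- A minimal illustration of software transactional memory (STM):
-- many threads bump a shared counter atomically, with no explicit
-- locks and no lost updates. Plain Haskell, not Hasura's code;
-- assumes the stm and async packages.

import Control.Concurrent.Async (mapConcurrently_)
import Control.Concurrent.STM (TVar, atomically, modifyTVar', newTVarIO, readTVarIO)
import Control.Monad (replicateM_)

-- Each "subscriber connection" records the events it has been sent.
deliverEvents :: TVar Int -> Int -> IO ()
deliverEvents counter n =
  replicateM_ n (atomically (modifyTVar' counter (+ 1)))

main :: IO ()
main = do
  counter <- newTVarIO 0
  -- 100 concurrent connections, each receiving 1000 events.
  mapConcurrently_ (\_ -> deliverEvents counter 1000) [1 .. 100 :: Int]
  total <- readTVarIO counter
  putStrLn ("events delivered: " ++ show total) -- always 100000
```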

AJ_O’NEAL: The Haskell compiler is not open source? 

TANMAI: No, it is. Everything is. It is open source. But I'm just saying that there is not too much of a benefit to it being open source, because it's not JavaScript. You know what I mean? It's not as readily consumable by the community and contributable to by the community. Does that make sense? 

AJ_O’NEAL: Yeah, I don't know. I mean, like, I want to say JavaScript is overrated, y'all. It's not the solution to every problem. It's just not. 

DAN_SHAPPIR: Which podcast am I participating on? I'm kind of confused right now. 

AJ_O’NEAL: It's the Haskell podcast today, of course. I love JavaScript, but it's just not. It has gone from being something that fulfilled a few specific purposes and had some real potential to being the hammer for every screw. It's just not right all the time. It's just not. You got to be honest about that.

STEVE_EDWARDS: AJ, you use hammers for screws? I use them for nails myself, but. 

AJ_O’NEAL: That was the JavaScript analogy. 

STEVE_EDWARDS: Oh, okay. Gotcha. 

CHARLES MAX_WOOD: I was going to make a joke about hammers on my kids, but I don't want to go to jail. 

AJ_O’NEAL: So I don't think jokes. 

DAN_SHAPPIR: It's screws on your kids. 

CHARLES MAX_WOOD: Oh yeah. Using the screws on my kids. That's a different thing. All right. I think we're way, way out in the field here. Is there any other aspect of GraphQL as a service, or of microservices, serverless, cloud native, or Hasura, that we haven't talked about? 

STEVE_EDWARDS: So going back to just the basics of Hasura. On your webpage, it says that it's for Postgres, but we're talking about microservices. So am I understanding correctly that it can be used separately for both cases, either with a normal Postgres database or with an array of microservices? 

TANMAI: Well, you could use it for both, but a lot of the microservices stuff is still what we call under preview. So I'm just going to send a link to you for that. You can already start linking across different microservices, but we haven't updated all of our marketing material and copy and stuff around that. But you'll see all of it in the docs and stuff like that. So it's already, like... 

STEVE_EDWARDS: Okay, so it started out with Postgres and you're expanding it to the microservices. 

TANMAI: Exactly, exactly. 

STEVE_EDWARDS: Okay. 

TANMAI: Yeah. And we call that ability to join across different data sources remote joins, inspired by database terminology. I think that covers most of the topics. There are some interesting things that we're doing with eventing, basically a CQRS, event-sourcing workflow that's exposed over GraphQL mutations. That should be out hopefully this week; by the time the podcast is out, it should be out as well. That's also something very exciting that we've been working on. The idea is that you expose a GraphQL mutation, and it actually creates a domain event, delivers that event in an at-least-once fashion, or quote unquote exactly-once fashion, to an upstream microservice, then gets the event response and stitches that event response into the graph as well. So it's kind of like CQRS in a box, without you having to do the plumbing for CQRS, but you can get the benefits of CQRS very easily, and all over GraphQL as well, so that the people who are using these APIs like using them. Yeah, that's something cool that we're working on that we'll be putting out soon. So if you're listening to this podcast, you should try that out as well.
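
To ground the at-least-once idea, here is a generic Haskell sketch of the delivery pattern being described: keep retrying an event until the upstream handler acknowledges it. Because an acknowledgement can be lost after the handler has already run, the handler may see the same event more than once and should be idempotent. This is an illustration of the general pattern, not Hasura's eventing implementation; all names here are hypothetical.

```haskell
-- Generic at-least-once delivery loop: retry the event until the
-- handler acknowledges it. A success response can be lost after the
-- handler has already run, so the handler may see duplicates and
-- should be idempotent. Illustration only, not Hasura's eventing code.

import Control.Concurrent (threadDelay)
import Control.Exception (SomeException, try)
import Data.IORef (newIORef, readIORef, writeIORef)

data DomainEvent = DomainEvent { eventId :: Int, payload :: String }
  deriving Show

-- Keep delivering until the upstream handler returns successfully.
deliverAtLeastOnce :: (DomainEvent -> IO ()) -> DomainEvent -> IO ()
deliverAtLeastOnce handler ev = do
  result <- try (handler ev) :: IO (Either SomeException ())
  case result of
    Right () -> putStrLn ("acked event " ++ show (eventId ev))
    Left err -> do
      putStrLn ("delivery failed, retrying: " ++ show err)
      threadDelay 500000 -- back off for half a second, then retry
      deliverAtLeastOnce handler ev

main :: IO ()
main = do
  attempts <- newIORef (0 :: Int)
  -- A handler that fails on its first attempt and succeeds afterwards,
  -- simulating a temporarily unavailable upstream microservice.
  let flakyHandler ev = do
        n <- readIORef attempts
        writeIORef attempts (n + 1)
        if n == 0
          then ioError (userError "upstream unavailable")
          else putStrLn ("handled " ++ show ev)
  deliverAtLeastOnce flakyHandler (DomainEvent 42 "order_created")
```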

CHARLES MAX_WOOD: Awesome. Before we go and do picks, Tanmai, how do people find you online? Do you have a blog, Twitter, GitHub, other places? 

TANMAI: Yup. I'm on Twitter. I'm tanmaigo on Twitter, T-A-N-M-A-I-G-O. If you just search for tanmaigo or Hasura on Twitter, something will come up. 

CHARLES MAX_WOOD: Awesome. 

 

The thing that I believe most about top-notch developers is that they're constantly learning. Whether you're watching videos, reading blog posts or books, or writing open source software, you're always out there learning how to be a better developer. And my friends at Thinkster and I teamed up and put together a show called the Dev Ed Podcast. You can find it at devedpodcast.com. It's run by Joe Eames, who you might know from JavaScript Jabber, Adventures in Angular, and Views on Vue. And they have terrific conversations about what it means to become a better developer, how to learn development, and the ways that you can learn. So if you're looking for inspiration and ideas about how you can do better and learn better as a developer, then go check out the Dev Ed Podcast. 

 

CHARLES MAX_WOOD: All right. Well, let's go ahead and do some picks. AJ, do you want to start us off with the picks? 

AJ_O’NEAL: Oh, I'll start us off with some picks. So first on my list, this book, along with attending the Utah JS Conference, has swung some of my beliefs. It's called The Economic Singularity. Now, you know, I've gone from someone that was kind of aware of the trends in technology, to someone that's afraid of the trends of technology, to now someone that actually believes that Skynet may be possible. That artificial intelligence may actually be getting to a level where machines are capable of making decisions that humans trust. Not good decisions, not decisions that are based on things that are actually, you know, ultimately important for the types of judgments we want in ruling the world, but types of decisions that people will accept for judgments that are put into place in ruling the world. In The Economic Singularity, one of the phrases that I key off of is the gods and the useless: as the technological divide becomes wider, you're going to have the gods, the people that own the data centers, and the useless, the people that just have thin clients or no access to technology and are not empowered by technology. And the concept of, well, what about these people: the AI that takes over, how is it going to improve life for everyone? What might that look like? Is the joblessness issue different this time? Because in the story of the boy who cried wolf, eventually the wolf did come, and we've been crying wolf about joblessness from technological advances for centuries now. But are we really at peak human, as it says? Anyway, it's got a lot of crazy tinfoil-hat type stuff, or not necessarily tinfoil; it's stuff that makes me put on my tinfoil hat and makes me rethink my life and how scared I am of the future. And anyway, I'll pick it. But then, seeing so many people put a lot of faith in AI, it scares me. It scares me that people really believe in it. I just, I don't know, I think humans are human, but whatever, anyway. I'm going to pick that book because it phrased things in a way that opened my eyes and made me maybe more aware that things are better and worse than I had anticipated. I'm also going to pick Capital Cities as my music pick. I just got their 2LP deluxe edition, titled In a Tidal Wave of Mystery, and I'm in a tidal wave of amazing sound. And let's see, I'm still on the GameCube kick. So the most recent thing that I've done is I actually got my HDMI adapter that's worked well. And then I've got the mClassic on the way, so I can't really pick it yet because I don't know how well it's going to work. But I have used Game Boy Interface to play Game Boy games with the Game Boy Player, and it is freaking awesome. I get to scale the screen to whatever size I want, and I love it. Those will be my picks. 

CHARLES MAX_WOOD: Nice. I forgot about the Game Boy interface. Now I really want to get my GameCube together. 

AJ_O’NEAL: Chuck, we got to talk, man. I'm going to help you make the best GameCube experience of your life. 

CHARLES MAX_WOOD: I know, right? All right. Dan, do you have some picks for us? 

DAN_SHAPPIR: Yes. Yes, I do. So last week I came back from speaking at a conference in Bucharest, Romania called JSCamp. And the videos just came up today, so they'll certainly be there by the time this podcast comes out. It was a really awesome conference, and I want to plug these videos. We had Alex Russell from Google speaking about the challenges of the mobile web. We had Ruth John speaking about CSS Houdini. I also gave my favorite JavaScript riddles talk, so do check it out, and I'll give you a link to put in the show notes. And I actually also wanted to call out Romania in general. The country is lovely. This time around I didn't actually tour it, it was really a quick visit for me, but I have been there before and it's a lovely country, and I highly recommend going there. Bucharest is a lovely city. And there's also Brașov, another beautiful city in the Transylvania region of Romania, ringed by the Carpathian Mountains. And there's also a city called Sibiu, also in Transylvania, which is an absolute gem. So I highly, highly recommend visiting that country. And those will be my picks. 

CHARLES MAX_WOOD: Very cool. Steve, do you have some picks for us? 

STEVE_EDWARDS: Yeah, I've got one. So I'm assuming that most of us have heard of the really famous book called In Cold Blood. It's sort of considered the first true crime novel in America. So what I want to talk about is a documentary that came out on Sundance about a year ago, called Cold Blooded: The Clutter Family Murders. And this one sort of has a personal connection for me. Quick background: there's a family of six people that lived in this tiny town in Kansas called Holcomb. And a former farmhand who had worked for the Clutters had this crazy idea that Herb Clutter, the father, had a whole bunch of money. So he and a cohort of his, after he gets out of jail, go back and look for this money, and they don't find anything. They end up murdering the mother and father and their two youngest children, who lived at home with them. The two oldest girls had already grown up and moved away and were living elsewhere, so they were saved, so to speak. So long story short, Truman Capote, the author, gets wind of this from a Life magazine story and comes to town with his good friend Harper Lee, the author of To Kill a Mockingbird. They go around and do interviews, and he ends up writing the book In Cold Blood. It was hugely famous; I think it was the last book that Capote ever wrote. But as it comes out later, it has a number of inaccuracies, mostly because a lot of the people in the tiny little town wouldn't talk to him, so he got a lot of this information second- and third-hand. Anyway, the reason this is a little personal for me is that the two daughters that had moved away, Evian and Beth, had a number of kids, and one of their children is a good friend of mine. And so I've talked to her at length about this. Anyway, it's a four-hour documentary, and it's really, really great. It goes into a lot of detail. You find out a lot of things about the actual story, and they get points of view from some of the remaining family: their perspective on the story itself, and the book, and their thoughts on its portrayal of them as a family. So a really, really great documentary, very detailed. And if you're into true crime, or even just the story itself, it's a great watch. 

CHARLES MAX_WOOD: Awesome. I love the road runner sounds in the background too. 

STEVE_EDWARDS: Oh, sorry. That's my phone notifications. 

CHARLES MAX_WOOD: I know I figured it was something like that. The notification sound on my phone is Daleks from Dr. Who, when they say exterminate. And so whenever I get a text, it goes exterminate. And then all my kids go exterminate. 

STEVE_EDWARDS: I hate boring notification sounds. So I used to have, for instance, the one from Ace Ventura: Pet Detective, when Jim Carrey comes out of the bathroom and says, "Do not go in there." And at the time I had it, my little two-year-old son would repeat that every time he heard it. It was nice. 

CHARLES MAX_WOOD: Oh, awesome. I'm going to jump in with a few picks. So I have been kind of pushing MaxCoders here. I know that this will probably come out in a week or two, which is when everything will be set up and rolling. You want to get in early, because I'm just going to grandfather people in at the lower rate. I'm going to limit it as well, you know, as we ramp up, but it's going to have less content at the beginning, so I figure the value is going to be lower, so I'm not going to charge as much, right? But as the value goes up, you'll be able to get it for that lower price. So definitely take a look at that. I'm also going to foreshadow that I'm looking at doing some sort of mastermind groups to help people help each other keep current and provide some accountability for that, beginning next year. So keep an eye out for that too. A few other things that I'm just going to shout out about here. One is that I'm going to be traveling a bunch through the end of the year. I'm probably going to cut back on the travel next year a bit. But anyway, when I travel, I've been using the TripIt app. And the thing that's really nice about it: it's free, which is nice, but the other thing is that I can invite my wife to, like, the travel details. And so then she'll know what hotel I'm staying at and which flights I'm flying on, and all of that stuff is all in there, and that makes it really easy for her. She'll just call me on my phone if she needs me, but when she picks me up from the airport and stuff, it makes that easy too, because she can just pull it open on her own phone. And so I'm going to shout out about that. The other thing that I'm going to pick, and I may not feel like picking it next week, but this Saturday I'm going to be running the St. George Marathon in St. George, Utah. My first marathon, so I'm a little bit trepidatious and a little bit excited. 

DAN_SHAPPIR: Way to go. Way to go Charles. 

STEVE_EDWARDS: You've been training for this one for a while, haven't you? 

CHARLES MAX_WOOD: Yeah, I've been training for it since November of last year. Yeah. I'm fairly confident I can make it. That's my goal. People are like, what's your goal? And I'm like, survive. 

STEVE_EDWARDS: So you're doing the full marathon, not the half? 

CHARLES MAX_WOOD: Doing a full marathon. 

STEVE_EDWARDS: Uh, I get to about three miles and I'm dead. So, uh, kudos to you. 

CHARLES MAX_WOOD: Yeah, I run five miles a day now. But yeah, it was like that for the first while; you just have to build it up. But yeah, I'm going to pick that. And one other thing I'm going to shout out about: there's an app called VO2 Max. It's a training app, and I hired a running coach because I had no idea how to actually prepare for a marathon, and I also had no idea, you know, just what it would take. So anyway, she actually assigns me workouts every day, and she's been helping me prep for this marathon, and it's been great. It's nice because she puts the workouts in and then I can just run them. It's been a bunch of work. I've actually lost like 35 pounds since the beginning of July, and I am feeling really, really great. The other thing that I've been doing is the keto diet. I haven't been super strict about it, but I haven't cheated on it either. I think the main thing is that I cut carbs way down and basically just eat anything else. Yeah, I've been losing weight and feeling really, really good, which is good for me because I'm diabetic. Anyway, those are my picks. Tanmai, what are your picks? 

TANMAI: Two picks on the science fiction reading side. I don't know if you folks have come across The Broken Earth trilogy by N.K. Jemisin. 

DAN_SHAPPIR: Oh yeah, it's a great book. I really loved it. Finished that trilogy a while back. 

TANMAI: It's very different. It's very nice. I like it. The other one is Cixin Liu's The Three-Body Problem. It's again a trilogy. It's not the typical kind of character-driven narrative that we're used to; it's a very, very different style of writing. It was originally in Chinese and was translated to English a while back. I find that book fascinating for a bunch of different reasons, especially the way China sees itself in the future. It's kind of a narrative around that, which I really enjoyed reading. It was very nice. And I think on the tech side, we're part of a GraphQL conference next year called GraphQL Asia. So if any of you are going to be around in Asia, or if any of you would like to speak there, please do submit a CFP or please do come by. And also, a bunch of us have been working really hard on putting together a bunch of tutorials for GraphQL and stuff like that, and we have them up on learn.hasura.io. So that's kind of where the last few months of my time have gone. Yeah, those are my picks. 

CHARLES MAX_WOOD: Awesome. All right, well, thank you for coming, Tanmai. This was a lot of fun. And I loved all the back and forth as far as, you know, the what and why and how. 

TANMAI: And yeah, we all know why AJ hates JavaScript. He's been reading The Economic Singularity, and he's just like, people are going to build AI in JavaScript. We're all screwed. 

AJ_O’NEAL: That would be so true. Oh, oh, awesome. My web browser tells me what to do and controls my life. Not that far. 

DAN_SHAPPIR: It already does, doesn't it? 

AJ_O’NEAL: Yeah. 

TANMAI: It does. It's not in the web browser yet, though. You need WebAssembly, and then you chuck it onto the web browser, and then you're fine. Can we get JavaScript into the browser? 

DAN_SHAPPIR: I think the W3C is looking to build machine learning into the browser. So it's on its way. 

AJ_O’NEAL: Just neural net, just like, whoever I connect with on Facebook, all of our browsers just neural net together. I think they do that anyway. It's all bots anyway.

DAN_SHAPPIR: Yeah, it's all bots anyway, isn't it? 

AJ_O’NEAL: Well, you know, I kind of like to have the illusion that the 700 people are my friends, but thanks for pointing it out that I made up the accounts. 

CHARLES MAX_WOOD: All right, folks, we're going to go ahead and wrap this one up. And in the meantime, Max out. 

 

Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit c-a-c-h-e-f-l-y dot com to learn more.

 
