Kubernetes Schema Validation Tools with Eyar Zilberman - DevOps 227
Eyar Zilberman joins the adventure to discuss Kubernetes schema validation tools. The panel jumps in and discusses the power of schema validation and the pros and cons of the different kinds of schema validation tools.
Show Notes
Links
- Why you need to use Kubernetes schema validation tools
- A Deep Dive Into Kubernetes Schema Validation
- Datree.io
- Eyar Zilberman - DEV Community
- LinkedIn: Eyar Zilberman
- Twitter: Eyar Zilberman ( @eyarzilb )
Picks
- Jillian - GitHub | cloudposse/terraform-example-module
- Jonathan - Sid Meier's Memoir!: A Life in Computer Games
- Will - Paperlike
Transcript
Welcome everyone to another episode of Adventures in DevOps. I'm your host for today, Will Button. And we have our new panelists with us. We have Jonathan Hall. Hello.
Hello. And Jillian Rowe. Hello, everybody. And then we've got our guest today. We have Eyar Zilberman.
How are you doing, Eyar? Hi. Nice to meet you all and excited to be here. Well, we're excited to have you. You wanna give us a little introduction about yourself?
Yeah. Sure. So my name is Eyar, and I lead product at a company named Datree. At Datree, what we do is prevent Kubernetes misconfigurations. And fun fact, you actually hosted my cofounder on episode number 76.
So I'll give a reference to that episode, and I won't go into details about exactly how we do that because it's all in there. Besides leading product at Datree, I also lead the local GitHub community in Tel Aviv, which is the biggest one in the world, over 2,500 users. And besides that, I just love development. I actually was a developer before I joined Datree as a product leader. And another fun fact, I actually have a law degree.
So my degree has nothing to do with development. It's all self taught, and I really love code, and this is how I got into this space. So you say you have a law degree? Yeah. This is correct.
Actually, I have a law degree, and I was supposed to be a lawyer. And so the prospect of being a lawyer was so horrible, you decided, no, I'm gonna work in tech instead. Is that how this went? So something like that.
Basically, while I dealt with law, I always loved technology. So I did law and technology stuff. Basically, it was a lot of open source licensing, because law people never really understood what open source is, and open source people never understood what law is. So I was in the middle there, able to talk with both sides. But during this process, I fell in love with the technology and decided that the open source part is much more interesting.
So I got into open source, started by myself, and then went through the process of becoming a developer. So I have a law degree somewhere on the wall, but I'm not using it. So it's not that you thought law was too simple and you wanted a better challenge, and you wanted something more complicated like Kubernetes to work with?
That wasn't the thought process? Yeah. Something like that. Get out of your comfort zone. Are you, like, certified in law?
Can you send out kinda cease and desist letters? Because I think that would really come in handy sometimes. I prefer not to do that because, again, I did it, like, a few years ago, so I'm not up to date with all the new rules and stuff like that. But you're up to date with Kubernetes. Right?
This is correct. Yeah. What's the latest new feature you're excited about? Wow. You can't ask me that. No judging. We promise not to.
Exactly. Exactly. Too many for me to mention.
Cool. But you did write an article that we've got here on why you need to use Kubernetes schema validation tools, and you actually looked at 2 different ways of doing that, kubeval and kubeconform. What was the motivation behind it? I'm assuming that there's, like, a backstory here where something happened and you were like, oh my god, we cannot go through this again.
Yeah. So, actually, there's also a third option, which is doing it with kubectl. And so the backstory is that at Datree, like I said, we are helping companies prevent Kubernetes misconfigurations. We're doing that by scanning the manifest files and giving them an indication of whether they're up to the standards that were defined by the organization, or the policy, as it's also called.
And some feedback that we got is that a lot of people told us a file is passing the policy, but it's still not a valid Kubernetes file. How come? Because, I don't know, someone forgot to configure it correctly, and instead of writing apiVersion with a capital V, it's all lowercase. Something like that.
So it's still passing the policy, because it has, like, a readiness probe, and it has a proper label, and everything's correct. But on the technical side, it's not a valid Kubernetes file. And then the question was, is this something that we need to catch or not? Because, again, it's passing the policy.
It's only a problem on the Kubernetes validation side. So I got into this space and started to investigate. And while doing the research, I found that it's actually a common problem that people have, and there are only 3 ways to solve it. One of them is with kubeval, which is a really good tool.
It's actually the most popular one that most people are using. And this is a way to do the validation offline. The second tool that I found was kubeconform. It's another open source tool, and it's a really good tool.
And by the way, I just want to say, Yann, I really love this tool, so thank you for that. Yann is actually the person who built this tool. Yann took kubeval and improved it. He did a lot of great stuff on top of kubeval, and it's also well maintained, because Yann keeps maintaining this project.
And then there's also the third option, which is actually using kubectl. But the funny part, and I was really surprised about this, is that as opposed to all the other capabilities, which are really well documented, this part of doing schema validation with the native tool, which is kubectl, is not documented at all. I actually went through the code itself, like the actual code on GitHub, to understand what is happening, understand how it's working, which flag I need to use. And I looked everywhere, like, I Googled it. When I Googled it, I got, like, two pages of results.
That's how weird it was. Fell into a dark corner of the Internet there, didn't you? Yeah. Like, you could hide a body right next to the docs about how to do schema validation with kubectl, and no one would find it.
Nobody will find it. Except some random person. No one's looking at it. Exactly. It's gonna be your new title. I was just wondering as you were describing these.
Are any of these integrated with Helm? Or are these only for if you're writing your Kubernetes configuration files manually or through some other tools outside of Helm? That's a really good question. So if you think about it, Helm is basically also Kubernetes manifests. Under the hood, it's also rendering Kubernetes manifests.
So it doesn't matter, all of them will work with Helm. It's only a question of whether we have, like, a native integration that connects to Helm directly. Another way to do it is to render the manifests with Helm and then pass them to one of those tools. Yeah. It's an interesting way of doing it.
Just, yeah, just have Helm render it for you and then throw it off to one of them. Exactly. People forget that you can do that manually. Exactly. People forget that Helm, in the end, is generating a Kubernetes manifest, and this is what's getting pushed to your cluster. Usually you don't see it because it pushes it directly.
But if you do helm template, you will see the file itself that it's pushing. Cool. Now I have an extra step to add to my Makefiles. I'm doing that right now. I think that's a great point though: where do you recommend people do the validation checking?
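To make the helm template step above concrete, here is a minimal sketch of rendering a chart and handing the output to an offline validator. The release name and chart path are hypothetical, and exact flags can vary by tool version, so treat this as an illustration rather than the canonical invocation:

```sh
# Render the chart into plain Kubernetes manifests (nothing touches the cluster).
helm template my-release ./mychart > rendered.yaml

# Validate the rendered output offline with kubeval ...
kubeval rendered.yaml

# ... or with kubeconform, skipping objects whose schemas aren't published (e.g. CRDs).
kubeconform -ignore-missing-schemas rendered.yaml
```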
Yeah. So just for the people that are listening and didn't read the article, let's just say the good news: if you have a schema validation error, it will get caught in the end. Because basically, when you try to deploy to your Kubernetes cluster, Kubernetes will throw an error. It will tell you that it's not a valid Kubernetes file and it will reject it.
That's all good. The problem is that you want to catch those errors as soon as possible. You want to shift them left. You don't want to wait until you try to deploy. You want to catch them when someone is committing that file.
And that's the problem. Because with kubectl, there is something called a dry-run flag: you can say kubectl apply --dry-run, and then it will connect to your cluster. It will check if it's a valid file, but it will not apply it. This is why you have the dry-run flag.
It gives you an indication of whether it will be accepted or not by the cluster itself. So that's really cool. The issue with that is that you actually need to have an up-and-running cluster, and you also need to have a connection to it. Going back one step, we said that you need to validate those manifest files as soon as possible. Usually, local machines or CI machines don't have, and you don't want them to have, a connection to your cluster.
So that becomes an issue. So you need to find a way to do it offline. When I'm saying offline, I mean with no connection to your cluster, but also in a way that you can run as soon as possible, and not only when you want to push it to production or staging, which means to the cluster. So like I said, you have kubeval that you can do that with. You can run it locally.
You can add it as a step in your CI, and you can also do it in the CD before you're trying to apply something. So that's one option. Another option is kubeconform, and you can implement it in the same ways, because, like I said, it's basically almost the same tool. I would say it's like kubeval with superpowers.
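For reference, a rough sketch of what that offline CI step could look like. The manifest paths and the pinned Kubernetes version are assumptions, and flags may differ between tool versions:

```sh
# Validate every manifest against a pinned Kubernetes version, entirely offline.
# -strict rejects unknown fields; -summary prints a compact result for CI logs.
kubeconform -strict -summary -kubernetes-version 1.21.0 manifests/*.yaml

# kubeval offers a similar offline check.
kubeval --strict manifests/*.yaml
```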
And the other way for you to do it is actually with Datree. With Datree, like I said, this was an issue that we had, so we also added those capabilities to our tool. And if you are checking for policies, there are also prerequisites that we will check. So we will check that you have a valid Kubernetes file.
And if it's a valid Kubernetes file, it will also check to make sure that it's passing the policy that you defined for the organization. This is also something that you can do. I will also say that another thing that is interesting, and I wrote about it in the article, is that you have another flag with kubectl. You have 2 modes: you have server mode and you have client mode.
Basically, both of them require you to have a connection to a cluster. Something interesting that I discovered is that there's actually an open bug in the Kubernetes project, and yeah. Yeah. Yeah. Among those 1,000 bugs that are open there.
And this open bug is saying that this is not the expected result: if you're using the dry-run flag in client mode, it should not need a connection to your cluster. But right now, that's not how it works, so it still requires you to have a connection to your cluster. Another interesting thing, and I also explained this in the article, is that there is a discrepancy between the validations that are done on the client side and the validations that are done on the server side if you are using kubectl.
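For reference, a rough sketch of the two kubectl modes being compared; the manifest name is hypothetical, and as discussed, both currently expect a reachable cluster:

```sh
# Client-side dry run: validation happens in kubectl, but (per the open issue mentioned
# above) it still expects a reachable cluster today.
kubectl apply --dry-run=client -f deployment.yaml

# Server-side dry run: the API server runs its full validation and admission chain
# without persisting anything, so it catches the most, but it needs a live cluster.
kubectl apply --dry-run=server -f deployment.yaml
```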
So to answer your question, Will... No. Go ahead. Go ahead, and then I'll argue with you. That's fine.
So just to wrap it up, the best way to do that is as soon as possible. You should run those validations across the entire process from your local environment through CI, CD, and just before you're going to deploy it or any other automation process that you have, staging, production, whatever, do it as soon as possible and do it all the time. I actually wanted to argue with you a little bit on a point about not having access to a cluster while you're doing these validations. I would think you would need access to a cluster because what if I'm doing like node affinities or okay. That's the only case that I can think of actually is when I have node affinities.
So I don't I don't have a real strong case to argue with you. But if I'm doing that right, I would want for it to say, oh, you're setting this node affinity on something that doesn't even exist or doesn't make sense or it's not gonna come up or I don't know, something like that. I would hope it would be smart enough to tell me that you're doing something wrong and it would need to have a connection to your cluster to do that. Right? So think about it like in big organizations that you have a lot of developers.
And so usually we say CI/CD, but we need to remember that CI and CD are 2 different steps. And there are a lot of organizations that I'm familiar with where the CI step takes x amount of time and only then comes the CD step. So during the CI step, when people keep changing their manifests, it's not necessarily going to be deployed right away. So at this step, when you have the CI process, you want to run different checks, but you also don't want it to have a connection to your cluster. Only at the CD step do you want to have a connection.
You have to have a connection to your cluster. Mhmm. So if you separate those steps, which usually happens in bigger organizations, the CI step doesn't have a connection to your cluster. So I'm looking through your article, and you have this nice little table that compares kubeval and kubeconform against client mode and server mode of kubectl, and what things each caught and what it didn't. And I'm clicking on some of these here.
And it looks to me like in some of these cases, it's looking more for syntactic validity than contextual validity. I don't know if that's the right phraseology. But, for example, I look at the label value, and the wrong example has a label of dash dash dash, which is just invalid. It's invalid syntax. It's not that the label doesn't make sense. I guess my question here is, does this check that the label makes sense, or just that it's syntactically valid?
So that's a good question. Basically, there are different steps of validation that you need to pass if you want to have a valid file. So first of all, let's think about it at the general level. You want to make sure that all your Kubernetes files are valid YAML files. That's first of all.
After that, they have to be valid Kubernetes files, which means they need to follow a specific structure. After that, the values inside those files need to be valid, and different steps or different tools will catch the different errors that I just mentioned. So with Datree, we'll catch all of those errors. We'll make sure that it's a valid YAML file. We'll make sure that it's a valid Kubernetes file.
We'll also make sure that the values are valid. And with kubeval, it will only make sure that it's a valid Kubernetes structure. So there are different validations that we make. But by the way, kubectl, once you try to deploy to your cluster, will check all the stuff that I mentioned. So it will also make sure that it's a valid YAML file.
It will also make sure that it's a valid Kubernetes file and has valid values. But, again, the problem is that it's too late in the process, because it's only when you want to deploy. And you just want to shift all this information to the left, to the Yeah. Right. Right?
So Yeah. It's one of those. To the left, to the left, to the left. Do you shift left in Hebrew also, or do you shift right, since you read the other way? We read the opposite.
That's the problem. You know? This is why the confusion. We're reading from right to left. So I'm like, it makes no sense.
Do the Japanese shift up? Yes. I don't know. Cultural adventures in DevOps. I had a great question, and now I completely lost it.
You wanna shift back, by the way? It'll come back. The term is shift back. I mean, I'll remember it. I think. Okay.
Well, I was just thinking, you know, like, this whole idea of, okay, we can say that it's a valid YAML file and a valid Kubernetes file, but does it make sense? And to me, that's always been, like, such an interesting problem, like one of the more interesting problems, especially because my background is high performance computing. So anyways, I think that we should have, like, the crossover event with the machine learning people where we just make them train a really big model on a whole bunch of Kubernetes configurations, whether they make sense or not. That might be the only way to do it: have, like, a massive decision tree that nobody actually understands that says yes or no. I think you just described Kubernetes.
Exactly. That nobody at the company understands. Yeah. A little bit. Okay.
I remember my question. I'm curious. What does your workflow look like? When you're working on Kubernetes manifests, do you run these tools in your editor, for example, on save? Do you use git hooks?
Do you use CI pipelines? What does your setup look like? How do you do this in practice? Well, I'm biased. I'm using our own tool.
But you should! But yeah, I'm telling you, what I usually see people doing, and this is why we created this tool, is that they understand the value and they're trying to shift it left, right? They're trying to shift it left, and they're doing it with pre-commit hooks. That's one.
Then they're implementing it inside the CI. The problem is that you need to implement a lot of tooling in order to get those simple validations that I just described. You need to have a linter for your YAML files. You have to have kubeval or kubeconform for Kubernetes. And then you need to have some way to actually do the policy checks, which can be done with different tools that are able to parse the actual files. jq, for example.
Just throwing out some ideas if someone wants to get crazy and do it themselves. So it actually requires a lot of gluing and a lot of stitching, a lot of different tools that need to work together, which becomes a massive headache if you want to do that. And this is why we built Datree: to do it in one tool, make it simple, make it fun. It's a CLI tool, so you can actually enforce it, or you can put it everywhere you want. You can put it in your local environment.
You can put it in your CI, you can put it in your CD, you can put it everywhere, and it will do all those validations for you out of the box in a really simple and easy way. That's very cool. And is it all open source? Yes. Yes.
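To make the glue-it-yourself approach described above concrete, here is a rough sketch of what that stitching might look like as a pre-commit hook or CI step. The tool choices, paths, and the grep-based policy rule are assumptions for illustration:

```sh
#!/usr/bin/env sh
set -e

# 1. Is it even valid YAML?
yamllint manifests/

# 2. Is it a structurally valid Kubernetes object?
kubeconform -strict manifests/*.yaml

# 3. A home-grown policy check: fail if any manifest still uses a ':latest' image tag.
if grep -R "image:.*:latest" manifests/ ; then
  echo "Policy violation: ':latest' image tags are not allowed" >&2
  exit 1
fi
```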
And again, like Even better. Yeah. So there is no magic sauce in Datree. Like, we don't have a secret API. We are doing something that every developer could do themselves.
And we are totally okay with that. The cool part is that we're just trying to make it much simpler for you, so you don't need to do it by yourself. You don't need to configure this pre-commit tool, and you don't need to configure this Helm integration. By the way, we also have a Helm plugin so you can do it natively.
We just want to make sure that it's simple enough for you to use our tool and not try to build it by yourself. Because we really believe in buy versus build: you should be focused on building the great stuff that is your core business, and for the stuff that is not, you should prefer to buy. So this is how we think about it, and this is why we always want to make sure that everything we're doing gives you value as a user. That's very cool. You said something I didn't quite catch.
There's a plugin for something. Was it Helm, or was it... Yeah. Yeah. Yeah. So you asked about Helm.
Oh, nice. So I mentioned it because you asked about Helm. For example, we have a native Helm plugin. So when you're doing helm install, it will do all those validations. It will make sure that it's a valid YAML file, make sure that it's a valid Kubernetes file, make sure that it's passing the policy, and it's all integrated inside Helm.
So you don't need to do the helm template, pipe it into kubectl, run it with the dry-run flag or with kubeval or whatever, stuff like that. Cool. Does it integrate with, like, any of the code editors too? Like, will it tell me in nice big red letters? Because, like, listen, I really need those nice big red letters telling me that I'm doing something stupid.
If not, is that on the roadmap? It's on the roadmap. It's already on the roadmap, because we truly believe that we need to give this feedback about the validation as soon as possible. And on the roadmap is to also put it inside your IDE. And if it's possible, even when you just think about making a misconfiguration, it will also be integrated there, inside your head.
You get, like, a buzz. Like, what do you call that, a CRD? Not sure. Not sure. We need to think about the name for that.
So I'm really curious about how this works with Helm, because, obviously, Helm isn't purely deterministic, in the sense that depending on what values you provide, you could have an infinite number of possible Kubernetes manifests coming out. How do you handle that? For example, I'm thinking of the chart-testing, or ct, tool. I don't know if you're familiar with that, but it lets you give it a directory full of values YAML files, and it will just test against each one of those. Do you have something similar, or how do you approach that?
So, again, this is a really good question, but we need to remember that at the end of every Helm file, there is a Kubernetes file. So we are not checking the values file separately and the chart file separately. What we are doing is rendering them together, and then we're running the checks on the rendered output. So in the end, it's just a manifest file that is rendered from the values and the chart combined together. So it doesn't really matter how you do the templating on your side.
You can use whichever keys and values you want, because in the end it will be translated into a Kubernetes file. So we're just running it on the end result, which is the Kubernetes file itself. But if my values, suppose I have one values file that says ingress true and one that says ingress false, that could output completely different manifests, you know, with completely different resources defined. And I might wanna validate both versions. Does your Helm plugin automate that for me, or do I just need to have two lines in my CI script that say run it this way and also run it that way?
Does that make sense? So if I understand correctly, you're asking if I can have, like, 2 different policies because I have different permutations of the same Helm file? Yeah. I mean, so suppose I have a Helm chart that just deploys WordPress or whatever.
And in one variation, one of my configurations, say, disables the ingress. So I'm no longer creating the ingress resource in my output. There are several different things that might not be created: I'm not creating an SSL certificate and so on. My output manifest is gonna be significantly smaller, with fewer resources in it, than if I had enabled ingress.
And maybe I wanna validate both versions of that using your tool. What steps do I take to accomplish that? Yeah. So basically, again, it doesn't matter. Like, we will validate both versions.
So there is a logic inside your code that will trigger one of them. Correct? Yeah. So the version that is triggered, this is what will be passed to Datree, and this is what will be validated and will give you the indication of whether it's passing or failing. The same mechanism that's triggering your Helm rendering is the same one that will be passed to Datree.
Yeah. So the validation runs on the, like, helm install or helm upgrade command. Right? Exactly. Exactly.
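If you do want to cover several permutations explicitly, the "two lines in my CI script" idea from the question might look roughly like this; the chart path and values file names are hypothetical:

```sh
# Render and validate each permutation of the chart separately, offline.
helm template wp ./wordpress -f values-ingress-enabled.yaml | kubeconform -strict -
helm template wp ./wordpress -f values-ingress-disabled.yaml | kubeconform -strict -
```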
Not beforehand. Then how are you gonna integrate it with an editor? With the editor? Yeah. Because if it's in an editor, it's before the helm install.
You're right. And this is a challenge. This is something that we need to solve. I don't have all the answers right now. This is something that we're working on.
That's interesting. That's where you need the decision tree. Probably. This is why we're called Datree. Well, I've seen people validating their values file with an additional JSON schema too, and it seems like you could kind of work something like that out, to have these trees that are like, oh, if you have a boolean value, it should check for both the true and the false, and these kinds of things.
But, I don't know. I'm glad you're building it and not me. That's very cool. So you're right. I also saw that.
You can do it with JSON schema for the YAML... sorry, you can do it with JSON schema. The problem is that it's a lot of work to do that. And also, it actually takes a lot of maintaining to make sure that it's always up to date, which is harder than just reading it. This is the best practice, but it's not that common that people are doing it.
Usually, they're just doing the validation itself, and not on the values separately and on the chart separately. They're doing the validation on what comes out from combining both. That's true. I tend to just cross my fingers and pray every time I commit to GitHub. Yeah.
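For reference, the JSON schema approach mentioned above is supported natively by Helm 3: a values.schema.json placed next to values.yaml is checked on install, upgrade, lint, and template. A minimal sketch, where the specific keys and required fields are just example assumptions:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image"],
  "properties": {
    "ingress": {
      "type": "object",
      "properties": {
        "enabled": { "type": "boolean" }
      }
    },
    "image": {
      "type": "object",
      "required": ["repository", "tag"],
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      }
    }
  }
}
```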
So I think one of the things that was cool in your article here, because I know in my experience, a lot of the pushback I've experienced in trying to implement different solutions like this is how much time it takes, or, you know, people don't wanna do it because they have this idea that it's gonna slow them down. But you actually did quite a bit of benchmarking on this, right, to see exactly what the slowdown or impact would be. Yeah. So this is something that was interesting for me to see, because while I checked the different possibilities for how I can actually overcome the problem of schema validation, I noticed that when I'm doing it with kubectl in server mode, it actually takes a lot of time to get the results back.
So I said, what would happen if I did it, like, many other times? You know, like a developer would actually need to do. And then I benchmarked all the tools and how much time it would take them to do the validation. So just to give you the summary: kubeconform does it the best. It actually gives the results really, really fast.
After that, you have kubeval, which also gives the result fast. Again, it's like milliseconds for regular usage, not when you're trying to scan a ton of Kubernetes files. So as a user, you won't actually notice it. You can say that it's almost the same. Then there's running it with kubectl on the server side, in server mode.
So, yes, it takes longer, but it's not like it's going to take you 10 minutes. It's just going to take a little bit longer. So if we think about it, we just said that server mode is the best validation. So we don't really have an excuse not to do it, because it's not going to add too much time to your deployment process or anything like that.
The only issue with doing it that way is that it requires you to have a connection to a cluster. And as we already mentioned, this is something that is not always possible if you want to go as early as possible with the shift-left approach and you want to do the validation in the CI or locally. Right. Would it be possible to run the server-mode test against a test server, like, say, running in kind or minikube or something like that? Or does it really need to be your production server with all your existing CRDs and everything installed?
Yeah. Perfect question. So you can do it with minikube, and then you can do it also in the CI or wherever. But then you need to remember, it has to be the same environment as your production. So if you have a namespace that exists on production but doesn't exist on minikube, it will fail.
Because you'll try to deploy a file and it will tell you, oh, I don't know this namespace called Jonathan or whatever, because you only have it on production. So it's a valid file, but it will fail on the CI. So this is something that you can do. You can actually have a minikube set up like your production.
But, yeah, again, it's like the JSON schema problem we talked about. You need to build it, you need to maintain it. It's a lot of headache.
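A rough sketch of that approach in a CI job, using kind as the throwaway cluster; the cluster name and manifest path are assumptions, and the caveat above still applies (namespaces or CRDs that only exist in production won't exist here):

```sh
# Spin up a disposable cluster, use it for server-side dry runs, then tear it down.
kind create cluster --name ci-validation
kubectl apply --dry-run=server -f manifests/
kind delete cluster --name ci-validation
```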
Yeah. I think we could argue forever about, like, mocking out infrastructure versus actually building it. For me, that's one of those pendulums that's swung back and forth, and now I'm on the other side where I'm like, no, people are gonna have to pay for me to have, like, the same setup in CI as in production, so that I just have something real that I can test against. Because it's just, you know, too many times running up against this kind of thing where the CI infrastructure ends up not being the same, no matter how long you take to make it. Yeah.
Yeah. It's a huge effort by itself just to sync everything. This is something that is going to get lost somewhere, and someone is going to forget about it. And then it's going to really, really annoy a developer, because he doesn't know why he's getting this validation error, like, I don't know what to do with that. And it turns out there was a DevOps guy that forgot to actually sync the minikube.
You know, it's going to fall between the cracks somewhere for sure. I'm sold. I'm gonna start using this tool. Me too. Do you have a GitHub Action for it?
Can I just hook that up right now? So actually, I have an example. In our docs, we have an example of how to implement it inside a GitHub Actions workflow. We still don't have a GitHub Action per se. It's something that we will build soon.
It's just that the amount of integrations that we need to build is enormous. I know. Those are gross. Yeah. Yeah.
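Pending an official action, a workflow along the lines of the docs example he mentions might look something like this. The install one-liner URL and the `datree test` invocation are assumptions based on the project's documented CLI, so double-check the current docs before copying:

```yaml
# .github/workflows/validate.yaml (hypothetical)
name: validate-manifests
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install the datree CLI   # the one-liner install mentioned in the episode
        run: curl https://get.datree.io | /bin/bash
      - name: Validate manifests       # runs the YAML, schema, and policy checks in one go
        run: datree test manifests/*.yaml
```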
So we need to have, like, the CI integrations, and you need to have the IDE integration, and you need to have the Helm plugin. So it's something that we keep working on. And by the way, we also have an open issue on that in our GitHub repository. So if someone wants to suggest another integration, feel free, because this is something that we keep updating. For example, someone said, hey, you need to have a Homebrew formula.
You need to be installable with Homebrew, not with a one-liner. So we are listening to the community. And the cool part is that there's actually a company behind this open source, so there are people working on it full time. So every issue that is opened is also an issue that we address.
And every bug that someone opens is a bug that someone is trying to fix or resolve. It's not like with Kubernetes, where you have 1,000 bugs and no one has actually tried to figure out whether they're valid bugs or not. What's the business model? Yeah. It is cool.
What's the business model this company is employing? Is there a commercial version of this software available, or do they sell other commercial products? How does this fit into that ecosystem? Yeah. So like I mentioned, I started as a developer.
And when we thought about this solution, we had one agenda, and it's to make sure that developers will enjoy using this tool and that it will be useful without paying for it. Because, like I said, you can always build it by yourself. So our goal is not to convert a single developer or a small team or a team of 10 developers. Our goal is to convert, or to monetize, big organizations that appreciate what we're doing and get the value. So we have, like, enterprise-grade features that are more relevant for those kinds of requirements, you know, like SSO, custom support, stuff like that.
But for regular usage of the tool, you won't mind that, and we don't have feature gating in the usual sense: you're getting all the features that we have, and you don't need to pay for them. So the business model is basically based on the fact that some features that are not relevant to most people are gated, which are, like I mentioned, SSO and stuff like that, custom support. But we also have a limit on the policy checks that you can run, which today is 1,000 every month. And it's almost impossible to hit it.
You shouldn't say things like that on this kind of show. No. No. I'm just saying. Right? No.
No. I'm sorry about that. I'm sorry. Okay. So let me give you the context. Okay.
So we set the number at 1,000 because we know that people should not hit it, not because you can't hit it, right? But on a regular basis, if you want to use the tool and get the value, there's no reason for you to do so many validations if you are not a huge enterprise organization, basically. Yeah. I was going to say that I hit the Docker pull limit a couple weeks ago.
I couldn't figure out what was happening. I was like, who's gonna hit this? And no. No.
Okay. I didn't even know they had a pull limit. Yeah. They just introduced it, like, November or something. Yeah.
It's pretty easy to hit. Funny story about that. So they're also doing some checks to make sure that you're not doing a DDoS attack on them. So let me give you a story. You know what, I'll say the name of the company, because they actually talk about it publicly themselves. So there's a company called Datadog.
I don't know if you're familiar with them. Yeah. And Datadog, yeah, so they have a Kubernetes configuration. And part of the configuration was that you always need to pull a new image when the application is going up, when it's deployed. Right?
ImagePullPolicy, which means that you need to always pull it. And they have, like, only 3 outbound addresses. So it's 3 IP addresses. And they have all the images hosted somewhere. And someone made a mistake, like developers make mistakes.
And it was actually buggy code that got deployed with Kubernetes. So what happened is that it got deployed, so it's trying to pull the image. The code is not running correctly. So Kubernetes notices that something is not correct.
It kills it. But then it actually raises a new one, because this is what Kubernetes does. But it does that, like, 1,000 times, 10,000 times, 100,000 times. This is what Kubernetes is doing. And it's doing it from 3 IP addresses to the same place.
And their vendor thought that they were getting a DDoS attack, so they blocked it. And this is actually Very similar. It happened to me last week. Yeah. Yeah.
Probably only in the thousands of times, but it was enough. Yeah. So I think it's a really good example of a misconfiguration that is actually passing validation, because it will pass schema validation, but it's actually a policy that you want: to make sure that you're not always pulling the latest image, because then you can do something by accident. So this is something that should be checked, but it's Kubernetes-valid without being policy-valid. Yeah.
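A minimal sketch of the kind of spec in that story: perfectly schema-valid, but something a policy check would want to flag. All names here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
          imagePullPolicy: Always   # valid Kubernetes, but every crash-loop restart re-pulls the image
```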
I think I need to have an alert in that validator now: instead of having a pull policy of Always, just have IfNotPresent or something. Yeah. I really need that. Thank you. Yeah.
Exactly. Ran into that. No. Never mind. I remember reading a few weeks ago about a Kubernetes manifest linter that would look for things like that.
It would look for pull policies. It would look for, do you have resource requests that are insane? Are you asking for 6,000 CPUs or something like that? This tool doesn't do any of that, I don't think. Right?
But do you use one, and can you recommend one that does similar stuff? So this is exactly what Datree is doing. It's also doing those validations. Yeah.
And you have heuristics for some cases, right? Yes. So you can also create, like, custom rules. You can set, for example, that I want to make sure that there's a liveness probe, and that its value is like, I want to make sure that the endpoint is always /healthz, for example, something like that.
Or you can make sure there's a CPU limit, and it's always set to a specific value, something like that. Actually, you can do a lot of cool stuff. You can say, like, for staging, I want to make sure that the CPU limit is 3, but for production, the CPU limit can be 6. So you can also mix them up, and you can say, I want to run this specific policy for this environment.
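For context, this is the kind of values such an environment-specific policy would be inspecting; a hypothetical container resources block:

```yaml
# A staging policy might require the cpu limit to be at most 3,
# while a production policy for the same chart could allow up to 6.
resources:
  requests:
    cpu: "500m"
    memory: 256Mi
  limits:
    cpu: "2"
    memory: 1Gi
```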
Again, it's not something new. There are other tools that are doing that. I don't think we created something that is unique. I think what is unique about our approach is that we're doing it simply. We're doing it in a nice way.
We're doing it in a more integrated way inside your workflow, so you don't need to do the heavy lifting yourself. With the example that you gave, you would still need to have something that is also doing Kubernetes schema validation, so you would need to integrate another tool like kubeval or kubeconform. And you also need to do YAML linting.
So that's another tool, a YAML linter. And you also need to configure it to connect to your Helm as a plugin or whatever, so you also need to build that. So you have this and this and this and this and this. It all needs to be glued together, and you have a big headache again.
So this is the approach we are trying to take: we're trying to take it all off of your hands. You don't need to build all those integrations, you don't need to glue them. Again, we're not doing something new. You can also do it. You can always do it with jq.
You can also glue it together by yourself. But we will do it in an easy way for you, so you'd prefer to use Datree over building it by yourself. Again, if you have, like, free time over the weekend and you want to build it, go and build it. It's fun.
Yeah. Fun. Exactly. Install Kubernetes. It'll be fun, they said.
Exactly. I'm interested in asking a question that's completely unrelated to this. In your introduction, you said that you're a leader or founder or something of a GitHub users group, the largest in the world. Tell me a little bit about that. What do you do?
I mean, I'm part of the Go users group or we don't call ourselves a users group, we call ourselves a meetup group. That's the new version of a users group, right? Here in Amsterdam, and we just get together and talk about Go stuff.
Tell me what you do at a GitHub users group. Yeah. So basically, like this article, it came from my own need. And the need was that I wanted to discuss some features that GitHub had. And I asked around among my friends, like, where do we have GitHub meetups where I can ask this question?
And the answer was nowhere. So I said, okay, that's cool. Like, I love GitHub. I'm using GitHub. And I'm sure that a lot of developers love GitHub and use it.
So let's do a meetup about GitHub. So this is how it got started. And it's actually a user group because it's led by the community. I'm not working at GitHub, I'm not working at Microsoft. They're not paying me in any way.
I'm just doing it in my own free time. So this is why it's called a user group. And it actually grew surprisingly by itself, because the first meetup was, like, 20 people that registered, and at the last meetup that we did, 800 people registered. So those are the numbers that registered but were all those people actually there? Yeah.
So the 800 number, that's all online. It's usually online. We don't have a big place to host so many people. And also there's, like, a benchmark, you know, that if you have 800 people that register, not all will come. It's usually only 30%.
Yeah. So that's fine. But, again, that's a lot of beer and a lot of pizza to bring to a meetup. Yeah. Just kidding.
No kidding. Well, that's great. Congratulations on that. I mean, it's always fun to be part of a community like that and to get so much enthusiasm from when you decided to start it. I know that it has to feel good, or maybe overwhelming, or both.
Another fun fact: my cofounder, the one who was hosted on episode 76. Again, sorry about the cross reference here. He's actually leading the local AWS community, which is also the biggest one in the world. So it's a little bit of a fight, because GitHub got acquired by Microsoft, so I'm, like, on this side.
Oh. And he is leading the AWS side, and we are working in the same company, but we're still good friends and we love each other. Is there a Kubernetes group? Because they're kinda Google related. That would be a nice little trifecta.
So they started from Google, but right now they're just standing by themselves. Yeah. It's part of the CNCF organization now. So it's like Google started it, but I think it was really nice that they said, okay, we realize that this is something bigger than Google, and we want the community to enjoy it. So, hey, CNCF, take this wonderful child and please raise it for us.
Yeah. Well done, Google. A kindness for the world. Is there AWS in Israel? Do you guys have AWS?
Like, locally? You mean, like, the servers? Like physical servers? Yeah. Do they have, like, an office?
Do they have, like, a physical presence there? Oh, okay. So we have R&D in Israel for AWS. And right now they're actually building, like, 3 data centers in Israel. So we're also going to have the computers themselves, like the machines, on Israeli land.
We don't have AWS. It will be a holy service, I guess. Mhmm. I missed something you said, Jillian. You said you don't have AWS.
What? We don't have AWS, like, locally in the Middle East. So in the GCC, although they might be in Bahrain now, I'm not sure. But within the UAE and Doha, we only have Azure, which is a problem for me in getting local clients, because I don't wanna have to learn a whole lot of new things.
Like, I'm kinda lazy, and AWS is enough, alright? It has a lot of things that I've had to put up with. And that could be another story for another time. But, like, yeah, for real, I wanna move on to another hosting provider, cloud provider.
So, that's been my public service announcement for the day, I guess. Yeah. There are a lot of R&D centers in Israel. We also have Apple, Intel. Like, there are a lot of tech companies that are here.
And because we have a lot of developers, a lot of qualified people to do that, the only thing that we still don't have is, like, the cloud providers themselves, the local machines. But like I said, that's going to change. I know that Google is going not Google, but Azure is going to open, and AWS is going to open in Israel. We are using West Virginia at AWS, by the way. Mhmm.
Everybody's using West Virginia. Cool. Anything else you wanna talk about? No. I think you've got it all covered.
So just to summarize it all: you should all validate your Kubernetes files. You should do it as soon as possible. If it's possible to do it locally, do it there. If it's not possible, at least do it in your CI. And I gave some tips about how to do it.
You can do it with the different tools that we mentioned. You can do it with Datree, but you can also do it with the other open source tools. You can do it with the native tooling, with kubectl, but then you need to have a connection to a cluster. And if someone has any questions about that, or any feedback about this article, please contact me.
I think you will also put my information wherever you're going to host this. So you'll have all my information, and feel free. Like, I'm super reachable. My email address is open, and you can find me on our GitHub project if you want to ping me, whatever you choose. That's it.
Right on. Yep. We will put your contact info in the show notes. And then the last thing for us to do here are our picks for the show. Jonathan, you're excited.
Do you wanna go first? Sure. Of course. Bring it on. I'm reading, or actually listening to, an audiobook that I think is amazing.
I usually read boring stuff like O'Reilly books about Kubernetes and Helm charts and stuff like that, but I decided to branch out a little bit. And I'm reading this Sid Meier's memoir, which is still nerdy because he's a nerd, but it's so fun. And he talks about game design and how he invented these games that he made. For those who aren't familiar everybody's familiar, right?
But if you're not, he's the creator of games like Civilization and Pirates and a bunch of other really popular games, early flight simulators. It's a great book. I don't know. And he reads the audiobook himself. So I feel like I'm having a fireside conversation with Sid Meier when I read it.
Oh, that's super cool. Yeah. I played Civilization from way back in the day. Like, the MS-DOS days. The first version was on MS-DOS.
I think it was either version 1 or it might have been 2. I wanna say it was version 1. I think I started with 2. And I played, like, the 16 different expansions for version 2, and then I think I played every version since.
Such a great game. Civilization is good. It saved I mean, probably more my husband's sanity when I was on bed rest with my oldest, because I had something to, like, obsess over besides just kind of bossing him around. So that's my Civilization story.
Jillian, you've got a pick for us? I do. So I've been on a quest to go and clean up a lot of my Terraform recipes and release them publicly out into the wild. And I found a really good template for doing that from this group called Cloud Posse. And I think it's spelled pretty much like it sounds.
They have a really nice Terraform GitHub template. You know, how you can actually create templates straight from GitHub repositories now, like you press the button and it creates a new repo with the file structure and all that kind of thing. And I really like it. They also have this really nice Makefile that just does, like, everything. There's so much stuff in that Makefile.
It's amazing. So, yeah, I've been cleaning up a lot of my Terraform recipes and using that template as the base. And I think it's just a really nice Terraform template. Go check it out. Right on.
That's awesome. Yeah. Makefiles. Makefiles and READMEs, I think, might be 2 of the hardest problems in software engineering. I still haven't given up.
It's really becoming, like, a cultural age gap kind of problem for me. When I talk to new developers, I'm like, it's all in the Makefile. It's there. Right? And they're like, what's the Makefile?
Especially if they've been using, like, Node and they're used to the package.json. And they're like, what's a Makefile? Sit down. We need to talk about this. Sit down in that chair.
We're gonna talk. Yes. That's right. That is it. Eyar, have you got a pick for us? I didn't know that I needed to choose one. Sorry. I didn't know. I didn't do my homework.
That's quite alright. I've got one. And it's funny because I've heard about this for quite a while, and I was like, yeah. Yeah. Yeah.
Whatever. It's fine. And it's a screen protector for my iPad, but it's from Paperlike, and as you might have guessed, it's very paper-like, because one of the things with using my iPad and the Apple Pencil is it felt really slippery. Plus, I'm left handed, you know, so I have this thing where I wrap my arm around 360 degrees in order to be able to write anything and then curl up in the fetal position. But it was really hard to write on my iPad, but I wanted to do it.
And, so I finally broke down and bought this screen protector called Paperlike, and I put it on and felt it with my fingers. And I was like, yeah, whatever. But then I actually started using it with the Apple Pencil. I was like, holy cow. This is really like writing on a piece of paper.
So that's my pick for today: if you have an iPad and the Apple Pencil, but you are struggling to use it because it feels like it just slides all over the place, the Paperlike screen protector has solved that problem for me. Is it iPad specific, or will it work on any tablet that you use with a stylus? That's a great question. I don't know. I only looked for the iPad version.
I would imagine that they've got it for pretty much any tablet. Yeah. Because it's just, I mean, it looks just like a screen protector, you know, that you buy for your phone or any tablet. There's nothing significant about it, but the texture of it feels like paper. So props to their marketing team for naming the product as well.
All right. I think that's it. We've got a wrap. Thank you everyone for listening. Eyar, thank you for joining us.
This was a great chat. And, Jonathan, Jillian, welcome. Happy to have you guys here, and we'll see you all next time.