Hey, what's going on everybody? Welcome to another episode of Adventures in DevOps. Joining me in the studio today, my lovely co-host, Jillian Rowe. And we have a special guest with us today. I'm Will Button, your host, and joining us is Yann Léger. Welcome, Yann.
Hello, Will. Hello, Jillian. Thank you for having me today.
Thank you for being here. I'm looking forward to this. And how about kick us off? Give us a little bit about your background.
Thank you for coming on.
Yeah, so today I'm the co-founder and CEO of a company called Koyeb. I've actually been building stuff in the cloud space for about 10 years, and I've built several cloud service providers. Today, it's again a cloud service provider that we're building at Koyeb.
We basically simplify application deployment, so we let developers and businesses easily deploy applications in a way where they don't have to deal with infrastructure, so we abstract this from them. And I've been doing basically kind of this job for 10, 15 years now since we've built two other cloud service providers before.
One which is called Scaleway and another one which was called Outscale — it's now 3DS Outscale — based in Europe. So that's the short story.
Right on. So I guess my first question is, why build another cloud service provider? I'm assuming that there is a specific need that you've identified that's not addressed by the other ones.
Yeah, so over time we've moved up the stack. We have a fundamental belief, and we've seen it play out, that infrastructure technologies evolve every year, but there are cycles: every five years or so, something new and major comes up. If you look at the last...
20 to 25 years — 20 years ago, people were more or less racking servers in data centers. Then it moved to virtualization with VMware, which kind of won that battle at first. You saw the emergence of IaaS after this, with AWS, which came up with EC2.
And then we saw Kubernetes winning a portion of the market and becoming the new standard to deploy apps. And we believe the next step is a higher level of abstraction, which feels kind of serverless, I'd say, in the sense that you don't need to think about clusters or orchestration — that's handled by the provider,
basically. And so the experience we're trying to provide today is one where you bring a repository of code, we build it into a container, and we run it on our infrastructure on your behalf, so that you don't have to think about orchestration or anything advanced, including networking and load balancing, which are provided by us.
And you can actually deploy to multiple locations in the world; we take care of deploying your application and handling the networking in between. So our backstory is basically trying to bring this new level of abstraction, where you think even less about infrastructure than before. I'd say that's the reason why we jumped on this new venture.
Even if initially the idea was slightly different, we ended up going this route on this last venture.
Gotcha, right on. That makes sense, because it seems like, from my perspective anyways, that a lot of applications look very similar from an infrastructure perspective. They may be doing completely different things, but like you mentioned, from an infrastructure perspective, it almost always comes down to multiple containers sitting behind a load balancer, and then some different disk storage or database options within that.
Yeah, and the thing is, we try to provide a higher level of abstraction where you don't have to think about the number of servers or virtual machines running for you. But at the end of the day, our job and our business is still about running containers and virtual machines for end users. So we package apps inside micro VMs and we provide a load balancing layer so that you don't have to deal with it. And we do it from the ground up: we went in a direction where we don't rely on the large mainstream providers — AWS, GCP, or Azure. We run on top of our own machines so that we have more control on the infrastructure layer, so we can give you better performance, and we're not limited by the implementations of the large cloud vendors, because they have their own limits.
How long have you been working on this?
On this new product, it's been two and a half years.
Okay, right on. Nice. Are you focusing on any particular like size of customer or specific application stack or market segment? Is there anyone who's like better suited to use your service versus another?
So we're not running after large enterprises at this stage. We're focusing more on, I'd say, smaller businesses: startups — seed-stage and Series A — agencies, or anybody who is able and willing to move fast. And
in terms of stack, we can support a large variety of applications, in the sense that we're providing a platform. You can come with a repository of code — I mean, we support six or seven languages when we build directly. So if you have a Python application, a Go application,
PHP application, Ruby application, Java application, we will build it automatically for you. And if our build engine doesn't properly build your application, you can fall back to putting a Dockerfile in your repository. We will build this for you too. And if you want to control the CI process, you can also directly deploy.
pre-built Docker containers that you built on your own. So you have quite a lot of options going from like something completely managed where you don't think about build at all. If you want to get more control, we let you do that too.
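As a concrete illustration of the "no Dockerfile needed" path, a minimal repository might contain little more than a small web app that listens on the port the platform injects. Note that the `PORT` environment variable is an assumption here, not a confirmed Koyeb contract — check the provider's docs for the actual convention. A stdlib-only sketch:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 to any path; a trivial health endpoint like this is
        # usually enough for a platform to consider the service live.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, *args):
        pass  # keep the example quiet

def serve():
    # Platforms typically tell the app where to listen via an environment
    # variable; PORT is a common convention (assumed, not verified).
    port = int(os.environ.get("PORT", "8000"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

A build engine that detects a Python project would package something like this into a container automatically; the same app works unchanged if you later add your own Dockerfile.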
Right on. That makes a lot of sense, especially in the early-stage environment. I think that's one area where a service like this can be really helpful because, you know, I've spent a lot of my career working with startups, and in those early stages you have people who are experts in writing code, but maybe they don't have strong infrastructure skills, which can lead to some unique configurations that potentially don't scale well. So
going this route definitely would be advantageous for them.
Yeah, I mean, as long as you don't have a really large infrastructure, we don't see why you as a business would want to dedicate resources to building infrastructure. So we're trying to make it so that you don't have to hire anybody specialized in infrastructure for a while. If you're a larger organization,
you might want someone to help bring good practices to your teams or even set up some automation. If you have several engineering teams and you're building on top of Koyeb, you might also need some skills around DevOps or architecture. But we're trying to make it so that you don't need that in the initial phases, where
you're struggling to hire anyway. Hiring DevOps engineers is a job on its own — finding people with the right skills is a job on its own. So we're trying to bring something where, even if you have the skills, it would probably take you a few months to build it yourself.
Building something which is distributed across multiple locations, for instance, is not that easy, nor is building the complete CI pipeline. So yeah, that's the spirit.
Right on, nice. If someone starts using the service, what level of input do they need to provide as far as you're handling the scaling and the fault tolerance and stuff, do they need to provide memory and CPU requirements or minimum and maximum thresholds and things like that?
Yeah, so at this stage, what you need to provide is basically how much CPU and memory your service needs for each instance, and you need to define the scaling yourself for now. We'll provide auto scaling too; it should be live in Q4.
And you need to decide in which countries in the world your container is going to run. We made it this way on purpose: we could have a strategy where we deploy everywhere in the world, but it might actually be counterproductive for some applications, depending on where the database is, for instance. And because we are focusing on
full stack apps and APIs — it's not like front-end apps, where you just need to be at the edge and cached — we let you run complete full stack apps and decide where they run. We just announced, one week ago, four new locations in early access; we have six core locations today. So you can deploy across Europe, the US, and Asia, and you can decide if you want the app to run in one location, in six, or anything in between. And so, once you decide
how much memory you want to dedicate to your app for each instance and how many instances will run per location, we take care of provisioning this inside micro VMs in all of these locations. We provide a load balancing layer with private networking built in and a global load balancer
in front of it, so you don't have to add anything yourself, and you also get edge caching provisioned by default. Then you're good to go.
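The knobs described here — per-instance CPU and memory, an instance count per region, and a region list — can be pictured as a small declarative service definition. The field names below are hypothetical, not Koyeb's actual API schema; the sketch just shows how few inputs the model needs and how the total footprint falls out of them:

```python
# Hypothetical service definition; field names are illustrative only,
# not the provider's real schema.
service = {
    "name": "api",
    "instance": {"cpu": 1, "memory_mb": 1024},  # size of each instance
    "regions": ["fra", "was", "sin"],           # where to run
    "instances_per_region": 2,                  # fixed scale (no autoscaling yet)
}

def total_footprint(svc):
    """Total instances and memory implied by the definition."""
    n = len(svc["regions"]) * svc["instances_per_region"]
    return {"instances": n, "memory_mb": n * svc["instance"]["memory_mb"]}
```

With three regions and two instances each, the platform would provision six micro VMs and put its global load balancer in front of all of them.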
That's a lot for you to build and support. What was that process like for you as you were just starting out the company? I mean, how did you tie all of that stuff together? What was your approach there?
Yeah, that's really interesting. Just thinking all that goes into, well, I'm going to start up a bunch of data centers around the world and then go support a lot of users, even just the motivation to do that. Just that sentence makes me feel tired. So I'm really wondering what were the challenges and some of the motivation and everything that was happening behind the scenes for you to even say, all right, let's go do this.
Yeah, so I mean, we were always building infrastructure, so I think it kind of biases everything when you look at it, because at Scaleway we were even lower in the stack. At some point we were building ARM servers — really the PCB and so on, the design of the servers themselves.
We were producing them, getting them racked into data centers — data centers we controlled. So it was even lower in the stack, and now we are higher. The way we built it is we don't rack the servers ourselves. We found it's too tiring at this stage. Maybe we'll do it again when we need to, but if somebody can do it for us, it's better, so we don't have to. That's already a good point.
And so, yeah, we're just renting bare-metal servers all around the world. And the way we started — we are three co-founders originally, and we're all three software engineers by training. So we were able to build a bunch of it on our own. We started with one single location.
And we added features progressively over the last two years. We added the build engine — initially, only Docker containers were deployed, then we added the build from GitHub. And we quickly added the networking pieces, because it's something we felt in the past was a
pain, basically. From previous experiences: if you want to spread your infrastructure across multiple locations and countries, there is a question of how you transport traffic between all of these locations. So from the ground up, we built a service mesh inside the product, so that you don't have to
think about encrypting connections between your locations. You can deploy APIs and microservices across continents — if, for instance, you have a user microservice which needs to run only in the US, or a billing microservice, while the rest of your microservices are spread out over the world —
and you don't have to think about connectivity. We built it this way because we had this problem in our previous life, and it slowed us down in terms of deploying. And I think we just added features over the last two years, following the feedback we collected from the initial users to prioritize. I think we were pretty clear on where we wanted to go
from the ground up. But the key challenge we have is actually making sure we build fast enough for the users, because we are not alone on this journey of trying to simplify the application deployment experience.
For sure. Yeah, I'm one of those people — I would definitely pay large sums of money to avoid ever going into a data center again.
Me too, right there with you. Like I heard you talking about ARM and I was just immediately like, no, not doing this. Just fighting with GPU containers too right before my vacation, so, I don't know.
Yeah, it just feels like there are so many businesses that dedicate time to this stuff. I mean, I hear a lot of early-stage startups speaking about Kubernetes when they don't have the resources or the time to waste on it, because it's not their core business. I mean, we're —
we waste a lot of time on infrastructure orchestration, but it's our business. We just see so many companies thinking about Kubernetes and that kind of stuff when it's not part of their product. It's just like supporting yet another technology.
I'm really wondering how you're competing with these big cloud platforms, because one of the first things I thought is: but AWS gave me startup credits. Is it more that you're actually talking to the people, so you have the people-to-people connection? Or am I just cheap — like, hey, startup credits! How does that all work?
It's true that fighting against free money is challenging. I mean, the fact is, if you have to decide between paying money or getting free money, it looks more compelling at first to take the free money and deploy there. Then I'd say the reality is some people realize that even if they're giving you 10 grand or more,
or whatever amount, you're still spending time. So we're basically trying to provide value to the people who realize that their time is more valuable than the free credits. I'd say that's the play. There are a lot of companies, luckily for us, that realize that it's
a waste of time, and that they need to spend this time building the product instead of building the infrastructure. And then one key thing for us is to get known. So we produce content on getting started: we have several tutorials on how to get started deploying Python Flask, for instance, or Go with Gin, on the platform — trying to help people who want to get started fast. And the key advantage we have competing with these large players is: if you look at AWS and you want to figure out how to deploy an app, you're probably going to have like 10 options.
And I think it's real — there is a page which lists all the ways to deploy an app. So if you're not familiar with AWS, you might give up pretty fast, and people tend to look for a faster way to deploy. So that's one thing. The other side of the story is price-performance, where we are way better —
not because we have a higher scale than them, for sure, but basically because they make insane margins. We don't make insane margins, so we can provide way better performance for cheaper. So I think it's two axes. One is getting people
who realize that their time is more valuable than understanding all the products of AWS. And the other one is just the price-performance one — we are pretty good at it compared to the rest of the landscape.
I can definitely see that. When I first got started with AWS, I remember so desperately just trying to figure out, how do I SSH into the server? It made me so annoyed, I wrote an article about it. I was like, this is how you do it, people. So I can definitely imagine if you're not a DevOps person and like you said, you want to just deploy your web application, and there's a specialized service to do that.
and you don't have to deal with the 80 million AWS options, which is nice when you need that many options, but if you don't need them all, yeah, you might be better off just going with a service that's easier to use.
Yeah, and when we looked at it — to build what we have today, you need at least eight different products on AWS, which is fine if you're familiar with AWS, I guess. One key thing we thought about when we built it: we considered basically going with
providing a primitive which is functions, for instance, instead of letting you deploy a standard repository. But we didn't want people to have to change their application. And we felt that functions are still not standardized today, so you cannot move them around.
You need to learn a way specific to AWS to deploy functions — I mean, that's our feeling today. So the way we approached it is: we want you to be able to use your classical app with any framework you have today, and we'll kind of make it happen for you. And the end game in the long run, ideally, is that if we need to split it on your behalf
into functions, we will do it for you, so that you don't have to think about this packaging — because functions to us are a packaging format, not a paradigm, since you can do the same with containers. And actually, you can now run containers as functions on AWS: you can run Docker containers on Lambda.
So, just to mess things up and make it easier for people.
Yeah, I think that's a really good point to highlight in all of that. You mentioned that typically in AWS, you need eight different products to launch your stack. And one key thing there is you can actually misconfigure each of those eight products in multiple ways and your application still runs, but that's going to expose you to, or possibly lead to, different scenarios that
you don't identify and definitely don't want. So yeah, I think giving people this standardized path with a limited number of options is definitely a winning scenario for people who are focused on delivering product features and getting their application deployed, instead of building and learning how to configure infrastructure.
But the thing is, you can then add flexibility. Once you have this basic set of things, you can add flexibility on top. So the key challenge is to consistently fight for the developer experience, so that every time you add a feature, it doesn't add complexity, and the defaults are sane enough. I mean,
historically, I think it was kind of the philosophy of Ruby on Rails: we'll give you a default configuration which is going to work, and you can change it. That's what we're trying to do. We don't want to end up with the same issues Heroku had, which was
basically capping the features. That's not a winning strategy, because then you lose people to AWS when they outgrow the platform. So the end game and the challenge for us is to provide enough flexibility in the future so that you can grow with us and won't outgrow the platform. And anyway, in the current space, if you're a cloud service provider, whether it be
like us — mostly focused on compute, though we're also going to add managed databases to the platform — you have a lot of players which are specialized in one area, like people specialized in object storage. So it becomes a game where you need to properly integrate with other players on the market, which is a key thing for us, because
there are also a lot of people who already have something set up somewhere, and we need to be able to interoperate with those infrastructures. So one of the key subjects for us is how we provide something which is multi-cloud — not in the sense that you need to run on every cloud, but in the sense of hybridization between different players. That's one thing we're also focused on. We have a lot of partnerships with other players — in the database space, for instance, as it's not our core product — so that you can build up an infrastructure which is state of the art, with best-of-breed players, I'd say.
Right on, so you're focusing on your core expertise and then partnering with people to leverage the things that they are experts on, so that a customer using your platform gets to work with the experts for each individual part of their stack, rather than having to negotiate all of these agreements themselves.
Yeah, I mean, we have different strategies — you still need to discuss with some of our partners directly — but we're focusing on helping you build a state-of-the-art infrastructure and getting it done easily, I'd say. And that can be through documentation, or helping you figure out
how to set up an Aiven database, for instance, with Koyeb, so that we take care of the compute side and Aiven takes care of the database. We're focusing on building up these partnerships so that you don't have to struggle building the rest of the stack, I'd say.
Right on. What's the interface look like for using this? Is it a web-based UI that you go in and set settings? Or do you add a config file to your existing repository?
So we provide a web UI; we also provide a CLI if you're not into web UIs; and if you need to do more and start automating, we have our REST API behind it, which is completely open. We also provide actual automation tooling if you want.
If you want to go further and, for instance, automatically deploy dedicated apps for each of your customers, you can use Terraform or Pulumi — that's also something you can do on the platform. And by default, for a standard GitHub repository — if it's a Python project, we will read your setup.py
or your requirements.txt or Poetry file and build it for you. And if you want to deploy multiple microservices on the platform and do more — today we don't have a single centralized configuration file; we'll probably add something like a koyeb.yaml at some point if you want to do this.
But yeah, you can use Terraform or Pulumi for more complex scenarios where you need to deploy thousands of microservices.
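The "dedicated app per customer" pattern mentioned here is essentially templating one service definition per tenant and feeding the result to an IaC tool like Terraform or Pulumi. Without assuming any particular provider's resource types (the field names and image URL below are purely illustrative), the templating half can be sketched in plain Python:

```python
def render_tenant_service(customer: str, image: str) -> dict:
    """Produce one per-customer service definition (hypothetical shape)
    that an IaC tool such as Pulumi or Terraform could then apply."""
    return {
        "name": f"app-{customer}",          # one isolated service per tenant
        "image": image,                     # same container image for all
        "env": {"TENANT": customer},        # tenant selected via environment
    }

# Looping over the customer list yields the whole fleet of definitions.
customers = ["acme", "globex"]
stack = [render_tenant_service(c, "registry.example.com/app:1.0")
         for c in customers]
```

In a real Pulumi program, each dict would become a provider resource inside the same loop; the point is that the per-tenant fan-out is ordinary code, not copy-pasted config.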
Right on. Just a personal question here, which one do you like better, Terraform or Pulumi?
And this is a topical subject with the change of licensing from HashiCorp. So it's a dangerous slope. But I'd say Pulumi is probably more future-proof, in the sense that...
I'm putting you on the spot.
Right? No kidding.
We've been huge believers that you need to do some software engineering around infrastructure — that the right way to do infrastructure is to think about it like a software engineer. And Pulumi allows you to do more in this direction, I'd say, from
my feeling. Then, actually, Pulumi should thank HashiCorp a lot, because most of their connectors are built on top of Terraform. So yeah.
Yeah, no, I would agree with that. I think whenever we talk about moving and abstracting these up into higher levels so that software engineers get more control over their infrastructure, the fact that Pulumi is very code native, I think it just ties into their skill sets and makes the learning curve a lot easier for them to approach and learn the platform.
It also probably makes it easier if you really want to build, let's say, a SaaS platform where each of your customers is going to have a dedicated instance. If you need to go through Terraform for that, it's going to be less pleasant. So.
I still like Terraform though, because you can't get too precious about it. You know, what you see is what you get, whereas as soon as you throw code at people, you start getting multiple layers of abstraction and nobody ever thinks of them the same way. And I don't know, Terraform's just a fancy Makefile. So I have mixed feelings about the whole thing — I know I probably should go learn Pulumi or the AWS CDK or something like that, and I'm like, I just don't want to.
Yeah, I say this, but we're actually using Terraform for the automation — we're using Terraform and Ansible internally.
Me too, I'm still using Terraform and Ansible. I'd probably still be using Puppet too if somebody hadn't made me switch to Ansible.
It's starting to be less hype, I'd say.
Just a little.
And I think — we were using Salt in a previous life.
Yeah, it seems like in the last few years that Puppet and Salt have gotten a little bit quieter.
Yeah, I remember clearly, we were using Salt internally for everything in a previous life. And at some point a new generation came up and said: why are you using Salt, guys? You need to use Ansible now. It's over. So yeah...
No more pillars.
Yeah, but internally we're pretty much — I mean, we're dealing with a bunch of stuff the new generation forgets, like iPXE, that kind of stuff. Even DHCP — people forget about it. People using us don't need to care about this stuff, but internally we are still very much at this level of the infrastructure.
We still have the same old problems — firmware problems, that kind of stuff. That didn't change behind the scenes. I remember one of our investors at some point asked me: but if it's serverless, are there still servers? He was getting confused.
But yeah, the truth is it's the same old thing as before. Kind of different, because we replaced, I'd say, KVM with Firecracker — so the networking stack used to be completely hardware, but now we've managed to get it completely virtualized. But yeah, very much the same.
I had the same conversation like HPC versus Kubernetes with somebody and I was like, listen, it's all just a bunch of computers on a network with some shared storage and it's fine, it's fine. A cluster is a cluster no matter what.
Yeah, and if you look at storage technology, for instance, it's really funny, because 10 years ago you had Fibre Channel over Ethernet and now you have NVMe over Ethernet. I mean, okay, it's not the same media, it's faster, it's a parallelized infrastructure — you have several channels in parallel and so forth — so it's completely different, but the principle is the same. That's my way of coping with the changes.
Yeah. And I think that's a really important thing to highlight about the value you're providing: by using this service, you don't have to become an expert in the difference between Fibre Channel and NVMe. You can just write to storage and go, okay, we're done.
Yeah, and so, and then we can enjoy internally having fun with it.
Yeah, I love the people who care about it, deal with it, and let everybody else not, it's the way. You know, it's the way that I like to think about it. Like, I do not care about networking, like, in the slightest. So somebody else doing that for me is great, but I, like, deeply care about, you know, the data management of your cancer data sets. So, like, I want to be dealing with that all the time, and that kind of stuff. But yeah, no networking. I don't like networking. I don't wanna deal with it.
Ha ha ha.
Either kind — computer or social.
Yeah, it is a bit of both. Yeah, I never really got all my peopling skills back after COVID. I'm still like, ah. Like today, I had to go to an ATM. And the ATM in the little plaza across the street was broken. And so they told me, like, oh, go to the one in the mall. And I was like, go to the mall. But there's people there. I can't do that. I don't want to do that. So yeah, any manner of networking. Don't want to do it.
Right? Ha ha ha.
Yeah, but it's so fun.
So, you mentioned a little bit about a marketplace-type feature — what other new things are coming in the pipeline for you?
So we have two big subjects. I mean, we have a lot of different subjects, but block storage — low-level storage — and advanced networking capabilities are two key ones. Today we limit you to inbound HTTP, so if you want to host something more advanced, like a
VoIP server or even a database on the platform, you cannot expose a TCP port publicly. You can do it internally through the mesh — if you want a private database, you could technically do it on the networking side — but then we don't provide you block storage, so you have a slight problem. So these are the two key subjects coming up
which we need to tackle. Really infrastructure-oriented and low-level, I'd say. And we still need to pick the right technology, because storage has evolved and, at the same time, not so much — it's still a mess.
I mean, you still have the same trade-offs. And we also have features at a higher level of the stack, like preview environments, so that you can get new environments in one click when you create a pull request on a new branch as a software engineer.
Today you can do it, but you'd have to create multiple services, each on a different branch. So we want to provide that simplicity on this side too. And auto scaling as well. I think those are the four large jobs we still need to tackle in the short term. And before that, Managed Postgres is actually coming in September.
I mean, if you have managed Postgres, do you really need anything? Because Postgres just kind of does it all.
Postgres has vector database capabilities now. That's like the most exciting thing that's happened in quite some time, at least for me.
Yeah, that's actually why we went with Postgres.
Really? Because the vector database? Or just like all the capabilities in general? Because now I want to know what you're doing.
Yeah, I mean, vector databases are a key subject, and in this direction you have GPUs — we're pretty good for inference because we have high-end CPUs, so you'd get good performance on the inference side. We don't provide GPUs for training yet, so that's also something in the back of our heads. But yeah, Postgres with vector DB is one thing.
We'll probably need Redis also at some point. Those are the two you need, I'd say. I don't know.
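The vector capability referred to here is the pgvector extension, which adds a vector column type and nearest-neighbor operators to Postgres — `<->` computes Euclidean (L2) distance. As a minimal stdlib-only sketch of what such a query computes (the table and column names are illustrative, not from the conversation):

```python
import math

def l2(a, b):
    # Euclidean distance, the same metric pgvector's <-> operator uses.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, rows):
    # Equivalent in spirit to:
    #   SELECT id FROM items ORDER BY embedding <-> :query LIMIT 1;
    return min(rows, key=lambda r: l2(query, r["embedding"]))

# Toy "table" of embeddings; in Postgres these would be vector columns.
docs = [
    {"id": 1, "embedding": [0.0, 0.0, 1.0]},
    {"id": 2, "embedding": [0.9, 0.1, 0.0]},
]
```

In the database, an index (e.g. IVFFlat or HNSW in recent pgvector versions) makes this lookup approximate but fast; the linear scan above is the exact, brute-force version of the same idea.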
Yeah, you know, just throw a Redis on there at any given time. That's really neat, though. So you're talking about training — do you have people taking the data that I presume is generated by the application and then going and training machine learning models on it, using your platform for that as well?
Yeah, I mean, we see all kinds of use cases. We have people trying to run training, because basically we let you deploy APIs, full stack apps, and also workers. So if you have to run any asynchronous workloads, you can do that too. And so this is also a demand we get. It's more a question of
making it happen in terms of priority. We're a small team, so yeah.
How long does it take to onboard a new customer, from the time I create an account on your service until I can deploy my application?
Oh, I'd say one minute. I think we can do it now. It depends. I mean, if it's a super sophisticated app with tens of microservices, maybe more than one minute. If it's a basic API or full stack app, you can really get your app live in six locations in, let's say, five minutes, not to say one.
Yeah, I expected
There's the time it takes to build and deploy a container, because we still need to pull your image all around the world, so it takes a few minutes. And the main thing is, if you have a non-standard configuration, anything like that may slow you down a bit. But if it's standard,
or if you have a Dockerfile, it's really like a question of minutes.
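To illustrate the "standard configuration" case Jan describes, a Dockerfile deploy can be as small as this hypothetical sketch. The Python base image, port, and uvicorn entrypoint are assumptions for the example, not Koyeb requirements:

```dockerfile
# Hypothetical minimal Dockerfile for a small Python API -- the kind of
# standard setup that deploys in minutes on a platform like Koyeb.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The platform routes external traffic to the port the app listens on.
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

With nothing exotic in the build, the platform only has to build the image, push it to its regions, and start the container, which is why the wall-clock time stays in the single-digit minutes.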
Yeah, I think that's something that you definitely can't say for some of the other major cloud providers. It takes a bit more than five minutes.
Yeah, I mean, when we announced the four additional regions last week, the fact that you can now deploy in six regions across the world with everything automatically set up was pretty exciting to us. Our tagline was "the fastest way to deploy applications globally," and the "globally" was meant literally: deploy across the world.
So we had two regions before, now we have six, so we're really excited that you can do it, that it's a reality today. So it's no longer an issue if you want to reach people in different markets all around the world; you can now do it. You still have some work to do, your app needs to work in this context, but we're making the infrastructure side of it that simple. So, yeah. And you can actually combine, I mean, you still have tough challenges on the database side, but you can combine with other technologies. We're going to provide you managed Postgres, and to distribute the cache, you can combine with partners like PolyScale, for instance, who do a great job of caching data at the edge.
And yeah, we're really excited about the number of edge technologies coming up to basically improve the experience for end users. Yeah.
Is there anything else that we should ask you about or anything else you'd like to cover?
Hmm, yeah. I'm thinking... I mean, there's plenty of things we could talk about, but it's more about you, what you're... and actually, yeah.
No, you're the guest. It's about you.
and the listeners, you know the audience.
Yeah, we can cue the Beauty and the Beast song, be our guest. Ha ha ha.
Cool. I don't know. I think we should move on to picks then. Does that sound good? Yeah, all right. Jillian, have you got a pick for us?
I do. I've been using this library pretty heavily called awslogs. It's a Python package, and what it does is you can give it the prefix of your CloudWatch log groups and it will pretty much tail all the logs in real time, which is really nice if, say, you're doing something on AWS that creates a whole bunch of tasks, and each one of those tasks has its own logs, and you don't want to spend all day clicking around in the CloudWatch console. AWS Omics, I'm looking at you.
Instead you can use awslogs, give it the prefix of the logs that you want, and it tails them in real time, and it's nicely colored and all that. The other one is a platform that I haven't dug too deeply into, but I think it looks really interesting, and it's called Cube, at Cube.dev. It's like a semantic, what do they call themselves? "The semantic layer for every data application." And it's...
It's like a really interesting way, I think, of thinking about data within an application. The previous way to do it, when you have a web application, is that you would have a database, and then you would have a model for each one of your tables, and that was as far as the model went; anything past that, you had to freewheel. Whereas with Cube, your models are more the queries themselves, so you're really getting a finer level of control and detail with any of your data. So I just thought that was really interesting.
It connects to just about everything, as far as I can tell. I'm not sure if there are any major data providers that it doesn't connect to, but all the ones that I checked were there. And then I guess my last pick is going to be the Kancamagus Highway through the White Mountains in New Hampshire. I went and did that drive. I hadn't done it for a long time, so I went and did it with my kids, and they were suitably impressed, and they're kind of hard to impress these days because my oldest is 12 and she's just too cool for everything. And then, you know,
and then the other one kind of like copies her, but they thought it was cool. We were like driving into the clouds and it was pretty. There was waterfalls. It was like a good time was had by all.
Right on. So it's late August. Were you getting any fall colors up there yet? Yeah.
Just a little bit. There were starting to be a couple of places where we were like, look at the autumn. I'm going to take my kids on an autumn trip, because I feel like it's terrible that we're New Englanders and my kids have never actually seen autumn, because we've been living in the desert for most of their lives. But that is being fixed this October. Yeah.
Yeah, not a lot of autumn activity in the desert.
No, no, not, not so much, not so much. We'll have to, uh, I mean, so that's complicated. I, you know, so I'm American and I'm from New Hampshire, but my husband has a job in Doha, Qatar, so we live here for like most of the year, and then we're in the U.S. for summers and Christmas and, um, I don't know, sometimes other times, you know, just as they like come up.
Where are you based?
So my kids, but my kids, you know, they go to school here, they were born here, they were raised here.
Right. What about you, Jan? You have a pick for us?
I was actually looking at the new books I just got. And the funny thing is, I love these books, but I never manage to read them in full. I start them and then I stop, not because I don't like them, but... So I just received two new books. One was gifted by my partner yesterday, and it's called Atomic Habits.
That's a good one.
Yeah, I just started the intro, but I'm really curious about it. And I actually have a few other ones which are interesting in terms of leadership and, I'd say, more personal development. So for this one, I've already read one chapter of Simon Sinek's Leaders Eat Last. It's an interesting thing. I love things about leadership in general, trying to understand different ways of doing things, and mostly how to build organizations which are oriented toward their teams,
and trying to understand how to give more power to people in organizations. So that's a key subject I love. And actually this one, which I also started, is called The Great CEO Within. I suppose it's written for CEOs, but I don't think that matters, because it's also about how to organize, basically. And I think it's a pretty interesting topic.
So those are my three picks of the day. And South Africa is also a pick, because I visited a few weeks ago and it's an amazing country.
Right on. Nice.
Where did you visit? Where, where? Hmm? Oh, okay.
So, Adjini in Cape Town.
Yeah, my favorite part about reading books like that is reading them, not implementing anything that they suggest, and then wondering why my life hasn't changed for the better.
Yeah, right. I feel that way about just buying books sometimes. Like, if I just buy the book on Kindle, there's a lot of commitment that's happened there.
Right? Ha ha ha.
Yeah, if you look at my library, I have a bunch of them.
Yeah, and along those lines, it looks like I'm about halfway through Developer Hegemony by Erik Dietrich. This one's actually a pretty entertaining read, because it talks about the future of labor, specific to software engineering, and how you fit into a company. And it addresses the fact that there are three different types of employees, the idealist, the pragmatist, and the opportunist, and how, you know, for most of us, our career progression has been: stay at a company for a while, then jump to another company for a bigger raise that you wouldn't get by staying at the same company. And so it directly tackles that. And it's been pretty entertaining so far. It might actually be relevant to you, Jan, as you're building your company and adding employees and trying to figure out ways to help make them successful in your company.
Yeah, definitely. It's really, I mean, as a counter-pick about building a culture where people thrive, there is another one which I got further into. Yeah, I think it's this one. This one was really good: Enchantment by Guy Kawasaki.
That's a good one. I read that book too.
It's actually a really nice book. It's basically about building a culture of positivity, which is rewarding and self-rewarding somehow. So, yeah.
Right on. Cool. Well, the last thing, the thing that I always forget, I actually remembered this time: how can people get in touch with you if they want to learn more about your platform, or just get in touch with you and pitch you on something?
You can reach out to me on Twitter. I mean, should we call it Xitter or X? That's another topic. But on Twitter, you can reach out to me, I'm yan.euu. And you can always reach out to us through our community Slack channel; I'm also reading and available there.
Right on, awesome. Well, thank you so much for joining us. This has been a cool conversation.
Yeah, thanks, this has been fun.
Thank you for having me. It was great talking with you.
Right on, anytime. All right, thanks for listening, everyone, and we will see y'all next week.