Show Notes
02:08 - Noah Gibbs Introduction
03:06 - Sinatra
03:47 - Rack
07:32 - Deploying Apps
12:22 - Support, Operations, and Monitoring
- DevOps
- Database Administrator (DBA)
- [Confreaks] Paul Hinze: Smoke & Mirrors: The Primitives of High Availability
- Reliability
- Enterprise Tools
- Learning Curve and Lack of Documentation (“Wild West”)
20:36 - Social Differences Between Communities: Ruby vs Python
- Ruby Rogues Episode #198: Expanding the Ruby Community Values to Other Languages with Scott Feinberg and Mark Bates
- COBOL, Java, C
- The SaltStack
27:18 - Deployment Tools Targeting Polyglot Architectures
28:39 - Ease of Deployment
32:26 - The Success of a Language = The Deployment Story
33:51 - Feedback Cycle
34:57 - Reproducibility
35:44 - Docker and Configuration Management Tools
44:06 - Deployment Problems
46:45 - Ruby Mad Science
- madscience_gem
- Community Feedback
- The Learning Curve
- Roadmap
- Multiple VM Setups
Picks
TuneMyGC (Coraline)
Bear Metal: Rails Garbage Collection: Tuning Approaches (Coraline)
Rbkit (Coraline)
Get out and jump in a mud puddle! (Jessica)
Release It!: Design and Deploy Production-Ready Software by Michael T. Nygard (Noah)
Ruby DSL Handbook by Jim Gay (Noah)
Special Guest: Noah Gibbs.
Transcript
NOAH:
Beware the Jubjub bird, and shun the frumious Bandersnatch!
[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on Ruby developers, providing them with salary and equity upfront. The average Ruby developer gets an average of 5 to 15 introductory offers and an average salary offer of $130,000 a year. Users can either accept an offer and go right into interviewing with the company or deny them without any continuing obligations. It’s totally free for users. And when you’re hired, they also give you a $2,000 signing bonus as a thank you for using them. But if you use the Ruby Rogues link, you’ll get a $4,000 bonus instead. Finally, if you’re not looking for a job and know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept a job. Go sign up at Hired.com/RubyRogues.]
[This episode is sponsored by Codeship.com. Don’t you wish you could simply deploy your code every time your tests pass? Wouldn’t it be nice if it were tied into a nice continuous integration system? That’s Codeship. They run your code. If all your tests pass, they deploy your code automatically. For fuss-free continuous delivery, check them out at Codeship.com, continuous delivery made simple.]
[This episode is sponsored by Rackspace. Are you looking for a place to host your latest creation? Want terrific support, high performance all backed by the largest open source cloud? What if you could try it for free? Try out Rackspace at RubyRogues.com/Rackspace and get a $300 credit over six months. That’s $50 per month at RubyRogues.com/Rackspace.]
[Snap is a hosted CI and continuous delivery that is simple and intuitive. Snap’s deployment pipelines deliver fast feedback and can push healthy builds to multiple environments automatically or on demand. Snap integrates deeply with GitHub and has great support for different languages, data stores, and testing frameworks. Snap deploys your application to cloud services like Heroku, Digital Ocean, AWS, and many more. Try Snap for free. Sign up at SnapCI.com/RubyRogues.]
CORALINE:
Welcome to Ruby Rogues episode 199. We’re going to be talking today with Noah Gibbs about deployments. Our panel today is Jessica Kerr.
JESSICA:
Good morning.
CORALINE:
And myself, good morning. So Noah, why don’t you introduce yourself?
NOAH:
Absolutely. I wrote a book called ‘Rebuilding Rails’ a while back, basically about understanding Rails by building a similarly structured framework. But you start from Rack, and basically the same thing as Rails starts with, and build it yourself. At this point, I’m building a deployment class. And looking at the existing stack it takes to get a server out there around your Ruby app, I think it’s frightening and kind of dismaying. So, I’m hoping to fix some of that.
JESSICA:
‘Rebuilding Rails’, is that like the Lego instructions for how to construct your own Rails instead of buying it pre-built?
NOAH:
A lot like that, yeah. You start by building your own controllers. You build your own views. You build your own ORM. I actually gave a talk at the Golden Gate Ruby Conference from the ORM chapter of that, basically doing an ORM small enough that all of the code fit on the slides. Like really, 80 lines of code really fit on the slides.
CORALINE:
So, that Lego approach sounds a lot like Sinatra to me.
NOAH:
It’s got some things in common. Sinatra is very interwoven. I’ve built on top of Sinatra, replacing systems of it. And in concept, it is a lot like that. And in practice, the Sinatra code is really not built for that.
CORALINE:
I was almost half joking, because there’s that old adage that any sufficiently advanced Sinatra app duplicates Rails.
[Laughter]
JESSICA:
How old is that adage? [Laughs]
CORALINE:
I didn’t make it up. I can’t take credit.
NOAH:
Well, it’s like all of great western culture. I assume it’s Chinese, sometime before [inaudible].
JESSICA:
[Laughs]
CORALINE:
[Chuckles]
JESSICA:
Or in this case, maybe Japanese. [Chuckles]
NOAH:
But yeah, so in concept it’s a lot like that. And you can also build, I don’t know if you’ve used Rack middleware much but it’s amazing how much of Rails you can build out of Rack middleware layered on top of anything, which is how Rails is built these days.
JESSICA:
What is Rack? I’m not a Ruby developer.
NOAH:
Sure, sure. So, Rack is the protocol, sort of like CGI, that the Ruby web servers speak to the Ruby web frameworks. And so, the reason that you can use say Unicorn or Thin interchangeably is that they all speak the same protocol. But it turns out that built into that protocol in Ruby, not in CGI, is how to do a series of layers. So basically, your request goes through Rack middleware on the way in and the middleware can change the request. And then it goes through Rack middleware on the way out and it can change the response.
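[For readers unfamiliar with Rack, here is a minimal sketch of what Noah describes: the server hands one env hash per request to anything that responds to call, and a middleware simply wraps that call so it can touch the request on the way in and the response on the way out. The class and header names here are illustrative, not from the episode.]

```ruby
# config.ru -- run with `rackup`
# A middleware is just an object that wraps another Rack app: it can inspect or
# modify the env on the way in and the response on the way out.
class TimingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    status, headers, body = @app.call(env)            # pass the request inward
    headers["X-Runtime"] = (Time.now - started).to_s  # annotate the response on the way out
    [status, headers, body]
  end
end

# The innermost "app" is anything that responds to #call(env)
# and returns [status, headers, body].
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello from #{env['PATH_INFO']}"]]
end

use TimingMiddleware
run app
```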
JESSICA:
Oh, okay. We have something like that in Clojure. It’s called Ring.
NOAH:
Yes, like that. It turns out that every Ruby web framework is built on roughly that model. It’s just that some of them have one big lump in the middle, yeah, for the entire ring.
And some of them like Rails have many tiny little layers that you can put in and take out.
JESSICA:
Has Rails always been like that?
NOAH:
Rails 3 is really where they fixed that. Rails 3 had a whole bunch of major refactorings and refinements. And that was one of the big ones. These days, Rails is extremely like that. But Rails 3 is where they did things like every controller action is made of Rack middleware, every controller is allowed to have its own stack of Rack. The routing is basically one more middleware layer and you can field requests before it even gets to routing. Things like that are all from Rails 3.
CORALINE:
I remember writing Rack apps for things like analytics and performance monitoring because you could do that outside of the Rails app and get really good performance information and good performance from the tools as well without getting in the way of anything.
NOAH:
Yeah. There’s some wonderful profiling middleware at this point. I’m forgetting the name of it. But it’s Sam Stephenson’s and it’s called like Rack Profiler or something.
CORALINE:
Creative.
NOAH:
Yeah. Well, it works well. There are several really good performance measuring things. Rackamole is another of my favorites. When I built a metric system built into the web stack a while back, I started with Rackamole and tore out the Mongo DB guts and replaced it with the Cassandra system we were using.
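[The profiling middleware Noah is reaching for is likely rack-mini-profiler; whichever profiling gem you pick, mounting it is just one more use line in the stack. A rough sketch, assuming the rack-mini-profiler gem is installed:]

```ruby
# config.ru -- sketch, assuming the rack-mini-profiler gem is available
require 'rack-mini-profiler'

use Rack::MiniProfiler   # collects per-request timings without touching the app itself

run lambda { |env|
  [200, { "Content-Type" => "text/html" }, ["<html><body>profiled response</body></html>"]]
}
```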
JESSICA:
Nice.
CORALINE:
Cool.
JESSICA:
I like that, the middleware idea. Back in the enterprise Java days, middleware got a really bad name because it was a pain. But this is different. Something that gets the requests before it passes them to you and gets the response before it goes out the web server, that’s actually a very functional programming way to look at it, because it’s like a pipeline.
NOAH:
Mmhmm.
JESSICA:
Here’s the response coming in. It goes through all these different steps, including the performance monitoring one. Then it comes to you in the data format convenient to you. You do the business logic that you need to do, send it back out, and it goes through a bunch more steps in the pipeline before it finally hits the rest of the world.
NOAH:
Yeah. Well, if you’d like it to get a lot more functional, they are designing Rack 2 as a successor to it right now. And right now they just pass one big hash table through and you modify it. So, if you want to suggest that it get a lot more like that, this is the time to speak up.
JESSICA:
That is exactly like Clojure.
NOAH:
[Laughs]
JESSICA:
One big hash map to rule them all.
CORALINE:
Flannery O'Connor said everything that rises must converge. So, it’s neat to see all these ideas coming together and cross-pollination of ideas between functional and dynamic languages.
NOAH:
Yeah. Well, at the risk of being a heretic, I think that modern languages are usually a lot easier to learn than old ones. And I think you see a lot of cross-pollination because you just see a lot more people that know many languages well and are in all of the communities. So, it’s a lot easier for somebody to speak up and say, “Hey, the Rack guys did this. You should do that, too.”
JESSICA:
True. Like Erlang was designed by three people. And it was designed very carefully but they still only had the input of three people. Language design is different these days.
NOAH:
Indeed, indeed.
JESSICA:
Alright, so great. Rails is built out of Legos. And now you’ve moved onto, you have a Rails app and what do you do with that besides run it on your own machine?
NOAH:
Well, I hope you put it up where maybe the whole public, maybe just people in your company can use it. I hope you put it where somebody other than you can use it. I like the Lean Startup guy’s idea that you have a lot of fun ideas when you start but all your really good ideas come from testing things with customers.
JESSICA:
So Noah, what‘s your recommendation for deploying apps?
NOAH:
It depends a lot how much time you have. I will say Heroku is a really good choice. If Heroku works for you then do that first. They have a limited number of free hours a month. So, your app can’t be actually fielding user requests for literally every hour of the month, not for free anyway. But Heroku is really nice because you can just do a git push and literally it’s running. If you haven’t tried it before, please go out and try it just because it’s magic. Even if you don’t use it, go out and try it. It’s that good.
If Heroku doesn’t work for you, for instance they’re very expensive once you actually have to pay them for hosting. And they have limited configuration. You get the Ruby version they choose. You get the built-in libraries they choose. If you want some specific libxml version or you want some specific Ruby version, that’s a little bit harder. And there are external tools. But again, they all cost money. You’re welcome to use Redis but you use Redis through an external vendor with one of the specific versions and packages they choose.
So, once you sort of outgrow Heroku or decide that it’s not going to work for your current project, that’s when things usually get difficult. There are tools that will provision a server for you. Or you can get one of those online tutorials where you just cut and paste a lot of commands in. And the hosting of the server itself is not real expensive. But most of the tools take a long time to learn, Chef, Puppet, Ansible. There are a lot of them, and they all require a fair bit of learning. And that’s again for the server provisioning.
I think the most common approach is do the cut and paste tutorial at one server, hope that you never need to seriously modify it or replace it, and then just push your app onto the server by copying basically with a cp. I think that’s how most of us start. There are definitely some problems with that approach. But it works well enough. If you want to get it online, that will get it online. It’s just that as soon as you want to modify it, as soon as you want another server, it starts to get painful.
CORALINE:
So, that’s where the suite of tools comes in. DevOps has been a big deal for a few years. And increasingly it seems like developers want to take control of that provisioning and deployment process. It’s not like the old days where you can just do a Capistrano push. There’s a lot more complexity to it now, especially if you’re deploying to a cloud platform. So, where do you think that urge to take control of the process came from?
NOAH:
Part of it is that applications are getting more varied. In the end, if you were making shrink-wrap software, if you were making software that you’d build, you’d run it on your laptop. Other people would run it on the laptop. The window interface itself, you didn’t get a lot of choice over it. Apple chose for you or Microsoft chose for you, or the GNOME guys chose for you, whatever you’re shipping. The browser actually is really versatile. And so, there’s a lot of things you can do in it, not just as far as the HTML you push to it, but if you want to take control on the backend of things like WebSockets, that gives you a lot of interesting possibilities you didn’t have before.
And in the old division between, we have one backend server that we run and it speaks binary protocol and then we have the app that runs on people’s machines, transferring to this brave new world with WebSockets and with server push and with interesting stuff you can do on the serverside, developers are restricted by these choices. If you don’t have something that runs WebSockets, you can’t do WebSockets. And so, they’re interested in more of these backend capabilities in a way they didn’t use to be, which is great. I love the fact that we’ve got more power. I love the fact that we’ve got more to choose from.
I love the fact that as a developer I can say, “Here’s this not fully done protocol that there’s these tools for. I want to use all these tools. Even though a traditional Ops department wouldn’t necessarily want to touch them, I would love to use these.” But the flipside of that is as a developer, if I’m going to use these things there’s a lot of work for me there. There’s a lot of interesting things for me to do. And so, tools like Chef and Puppet and even the new versions of tools like Capistrano have done a lot of changing to suit that, to match that. [Inaudible] the fact that much more often, I as a developer say, “Oh there’s this really cool thing I want to do. Oh, I have to use a protocol that’s one year old on a tool that’s three months old. To do it, I need the power and the freedom to use that.”
And when you ask, “How does that work?” the answer is not that your Ops department has anticipated your need and made that a supported platform for your big company. The answer is that you’re going to do a lot of the work yourself.
CORALINE:
So, you’re sort of taking on the burden of support as well and bypassing the entire Ops organization in that case, right?
NOAH:
Often different companies work different ways. The most sensible DevOps people that I talk to out there, DevOps of course is very young and there are a lot of people with a lot of different ideas about what it means, but the really sensible DevOps people that I talk to, the ones that are actually doing that, good DevOps is more like, it involves breaking down those traditional silos. It means you don’t have a separate Ops department as much as you have Ops people in with the team or assigned to the team, or you have people on the team that are picking up a lot of that work. It’s more like how a DBA works. You may have a database administrator for your company and you bring the database requests and schema to them to optimize. Or you may have some database expertise in your team. You may have a database guy on your team who’s also a developer. Or maybe you do some of each of those. But the trick is to distribute that expertise out and to break down those silos.
Operations has been the long pole for a long time. It’s very easy when you’re building stuff for a web app. You as the developer could push it out in five minutes, except here’s this operations process. Here’s all the tools and figuring it out and how are we going to maintain it, which if you have a separate department and a separate group can take a really long time. By moving that expertise into the team, you can turn that around a lot faster.
So, when I talk to really sensible DevOps people, that’s what they’re talking about is breaking down the silos. And again, it becomes more like how database expertise works. Maybe your developers pick up a lot of that.
JESSICA:
That works with my experience, too. I’ve seen developers pick up some of the operational work. And I’ve seen, where I work now at Outpace the operations person joins our pair and we do stuff together. So that even though we don’t know all the details of how AWS, deployments, and provisioning goes, we pick that up. And we have permission. So, the stuff we do know how to do, we do by ourselves and we don’t have to bother our operations people.
NOAH:
Nice.
JESSICA:
The beauty of that is that it closes a feedback loop in that now developers are directly affected by operational and maintenance problems. And it’s not just, throw it over the wall and let somebody else deal with it. We get to think about, how is this going to work in production? And is it going to be smooth? And are we going to be able to tell how well it’s working?
NOAH:
Do you go all the way and carry pagers?
JESSICA:
I take my turn, yes.
NOAH:
That’s how we did it at Ooyala. It’s a very good way to get developers to think about operations. It’s sort of the gold standard.
CORALINE:
I’m at MountainWest RubyConf this week. And Paul Hinze yesterday gave a talk about primitives in the realm of high availability. And one of the things that he, one of the mantras that he repeated is: always ask yourself, what happens if this component fails? And it seems like if you were a developer going more of the DevOps route, or pairing with an Ops person, that’s an opportunity for you to bring your problem-solving skills to bear and look for the single points of failure that maybe your code can address, or maybe part of your architecture can address, in a way that you wouldn’t get without that collaboration.
NOAH:
I agree. This will sound bad. I think developers are often at their dumbest when they forget that you can apply development principles to a particular problem.
CORALINE:
Can you expand on that a little bit?
NOAH:
It’s easy to not think of reliability as something that you can throw software at. It’s easy to not think of reliability as part of your responsibility. The reason I say that carrying a pager is kind of a gold standard is when you say, “Ask yourself what will happen if this fails?” If you can feel the pager on your hand, it’s like it’s always asking you, “What will happen if this fails?” It reminds you that, “Oh hey, I have software skills. I really don’t want this to fail in the middle of the night.” It makes you ask that question all the time. I think it’s easy for developers to say, “This isn’t my job. I don’t have to worry about this.” And so, when I say developers are at their dumbest when they don’t think of a problem as being one you could throw software at, we’re not usually dumb once we understand that a problem is one we can address. But we’re often bad about just not even thinking, “Oh hey, I could fix this.”
JESSICA:
Right. We can fix our own problems and automate our own process. For instance, whenever that pager or my cellphone goes off, my first reaction is either how could I make this not a problem, or if it’s not already not a problem, how can I make the alerts more specific so that I don’t get paged about things that are not urgent?
NOAH:
Mmhmm.
JESSICA:
And all of that is done through programming. The monitoring software is software and there’s plugins. And I always try to solve the problem from the outside in. Like, first if I get a pager and I’m like, “What does this even mean?” Right. Let’s change the error message to give me more information.
NOAH:
Yup.
JESSICA:
And so on down, so that each level of the problem becomes easy to solve. Because all of that monitoring fiddly stuff, it doesn’t just fix the problem that occurred that day. It makes our lives easier in a lot of ways and with all of the problems that hit that monitoring software.
NOAH:
Absolutely. And that’s part of the reason that I’m glad a lot of these new DevOps tools, when I say that I immediately think Chef, I think Vagrant, even Capistrano has gotten a lot better. I’m happy a lot of these DevOps tools are getting better about declaring everything as code. At this point, your deploy code should be checked into a source control system. That’s been true for a long time, but people often didn’t do it. And at this point, a lot of your deploy code can live together as a little codebase. Sure it’s using several tools. So does your Rails app.
I have a piece of open source software I’ve been putting together called Mad Science. And it’s basically a Vagrantfile and a JSON file as top-level entries to control Chef, to control Capistrano. Because usually, deploy code is the opposite of Don’t Repeat Yourself. Usually deployment code has a lot of, well, this app needs these things and I’ll put that in the Chef file. And it needs this other piece of software and it needs this file to be put in one place. I’ll put that in three separate cookbooks. Oh, and I’ve got to touch this other thing when I’m deploying in Capistrano. I’ll put that in the Cap file or in deploy/whatever. And so, you have all of these separate setups that have to mostly stay in sync. And they do. They mostly stay in sync. We’re starting to see bigger enterprise tools, like Atlas from HashiCorp, change that. You had Mitchell on not too many weeks ago.
JESSICA:
Right.
NOAH:
It was awesome. Thank you. HashiCorp is building a lot of great tools for this. They basically really understand that you can just declare your infrastructure as a set of code, of config files almost, and just declare a whole giant infrastructure and then have a tool say, “Make it so.” But of course, you’re seeing that for the enterprise first. You’re seeing that for giant setups first. It’s surprisingly hard to do that for a little server. If you’re one random developer and you say, “Well, you know I’ve got my three little Rails apps that I’ve written and I’d love to have them online but how do I do that?” The answer had better be Heroku, because you don’t have many other answers that don’t take weeks of effort or more to learn and to use.
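[Going back to the “a Vagrantfile and a JSON file as the top-level entries to control Chef” idea: here is a rough sketch of what that can look like in plain Vagrant terms. The box name, cookbook names, and node.json file are hypothetical, not Mad Science’s actual layout.]

```ruby
# Vagrantfile -- hypothetical sketch of driving Chef from Vagrant
require 'json'

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                 # hypothetical base box
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "nginx"                         # hypothetical cookbooks
    chef.add_recipe "myapp::deploy"
    chef.json = JSON.parse(File.read("node.json"))  # app-specific attributes kept in one JSON file
  end
end
```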
CORALINE:
I found the learning curve on a lot of those tools to be pretty steep. And it seems that there’s a lack of standardization. For example with Chef, when do you use a cookbook, when do you use a recipe? Where do the files actually live? There doesn’t seem to be a lot of documentation of standard practices. It feels a little Wild West.
NOAH:
It’s very Wild West, yes, absolutely.
JESSICA:
We use Ansible, same thing.
NOAH:
Yeah. Well, Ansible’s interesting because a lot of these tools aren’t technically very different from each other but they have very different points of view about how to use them. Ansible is very much, I’m a developer. I hold my nose about infrastructure but if I do infrastructure stuff I want to do it in Bash, which is a fine point of view. But it’s very different from Chef where it’s, I want a big tool. I want it to feel reliable. It’s okay if it runs slow. I may be starting from me but this needs to scale up to big company stuff. Very different points of view.
And Chef and Puppet, which are almost like long-lost twins in terms of their technical capabilities, you can see the difference in their points of view by looking at the operations they build in. Because the external third-party stuff is very similar. But Puppet, half of the built-in operations start with Nagios or with other very traditional Ops stuff. There is no operation to just deploy a Git repo. Whereas Chef’s the opposite. Chef basically has something sort of like Capistrano built in, because it’s clearly designed from the developer perspective, but the big enterprise developer perspective. It’s not that things are short and easy and DRY. It’s that yes, we understand that the reason you want this is to deploy your app and it’s probably stored in Git.
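[For a concrete sense of the “deploy a Git repo” operation Noah says Chef has built in, here is a minimal recipe sketch using Chef’s git resource; the path and repository URL are placeholders.]

```ruby
# cookbooks/myapp/recipes/default.rb -- minimal sketch using Chef's built-in git resource
package "git"                                         # make sure git itself is installed

git "/var/www/myapp" do
  repository "https://github.com/example/myapp.git"   # placeholder URL
  revision   "master"
  action     :sync                                    # clone on first run, fast-forward afterwards
end
```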
JESSICA:
That’s very useful, thank you.
NOAH:
No problem. There are great breakdowns of how these tools are different from each other, especially I was… we’re moving off Puppet. We considered SaltStack. We considered Ansible. Here’s what the differences were. There’s a wonderful article on that. But the big differences that they emphasize, which is the right thing, are the social differences. Here’s what the community is like for the two. It’s like the difference between Ruby and Python. In terms of technical capability they’re extremely similar. And yet in terms of the community they are very different, hugely different. I like to think of Ruby and Python as, again their parse trees could be long-lost twins in some ways. But their communities are utterly at odds, utterly opposite.
JESSICA:
Just last week we were talking with Scott Feinberg about the differences between Ruby and Python. It sounds like you have a little more experience in the two communities. Can you talk more about that?
NOAH:
Well, sure. I have a lot more experience in the Ruby than the Python community. But the big difference that I point to is that Python has Guido. Python has the Pythonic way to do it. Python has a lot of debate about the right way to do it. Whereas Ruby, we inherited the whole TIMTOWTDI thing from Perl. There’s more than one way to do it. We’re very big on, if you’ve got this weird, crazy idea and it probably won’t work out, man, go do it. Rake was one of those and we use it. We love it. Bundler was one of those. Now we all use it. Try the crazy stuff. 95% of it sinks and the other 5% is amazing. Go do it. Python’s not like that. Python thinks they know the way to do it. And so, the language feels very different. You don’t change toolsets every three years. We all had to go adapt to Bundler. And some of us did it cheerfully and some of us less cheerfully. But the flipside of that is we keep getting better.
JESSICA:
[Laughs]
NOAH:
It’s just that you’re always in transition.
JESSICA:
In Python they can’t even get people to move to Python 3. [Chuckles]
NOAH:
Right. Whereas in Ruby if you say, “Well, they’re going to make giant breaking changes in this major version of the language,” they go, “Wow, they’re waiting that long?”
JESSICA:
[Laughs] It’s so much more fun this way.
NOAH:
Oh, I love it. I wouldn’t be anywhere else. And yet, when a big company guy says, “So, should we be doing stuff in Ruby?” My default answer is no. When I say, “Should I be doing stuff in Ruby?” Absolutely, no question. But here, I’ll pick an easy target and a caricature at the same time. Look at Java for a minute, which is in some ways as opposite of Ruby as any language community could be. Java is COBOL 2.
JESSICA:
Agree.
NOAH:
But the big way it’s COBOL 2, if you look at, I don’t know. Have you ever looked through a COBOL magazine?
JESSICA:
No.
NOAH:
I’d understand if you haven’t. Yeah, they exist. This is a real thing. You look through a magazine…
JESSICA:
Like, currently? Or…
NOAH:
Yeah. Oh, yeah. COBOL’s not dead.
JESSICA:
Wow.
NOAH:
COBOL’s still out there. The big place it’s still out there is banks. And the reason it’s still in banks is they want to write the code and 50 years later they want it to still be running the same way. 50 years, literally, really, truly.
JESSICA:
So, they really think they know the right way to do it.
NOAH:
Well, it’s not that they think they know the right way to do it. It’s that they’re worried that the programmers will go away. And mostly, they’re right. How easy is it to go out and hire a COBOL programmer now? But you don’t want the bank software to keep changing behavior. You want to do the same thing forever. When I say Java is COBOL 2, Java still has ugly, horrible language bugs to maintain, both source compatibility and binary compatibility, like in the bytecode, with the very first version of Java they released when I was in college. And I’m past college now. It’s literally, we can’t break source compatibility and we can’t break bytecode compatibility, therefore we still have type erasure even though type erasure is a stupid bug to still have in a language.
JESSICA:
Well, it’s not technically a bug. But it’s a terrible feature, where feature [inaudible] bugs.
NOAH:
Well, it’s a quirk. It’s the kind of thing…
JESSICA:
Quirk, that’s a good word.
NOAH:
Where not having it is clearly better.
JESSICA:
Agreed.
NOAH:
There’s no question that if you designed Java two years after they designed Java, you wouldn’t have had that. But they have to be compatible forever. So, when you read those COBOL magazines designed for bankers and people like them, and they have source translators to turn all of your COBOL code into another language, that other language is always Java. Always, always Java. C is older than Java but it’s never C. You know why not?
JESSICA:
Why?
NOAH:
It’s because C changes. It’s because C has things like the whole 32-bit/64-bit thing. C genuinely changes a little bit on each architecture so that it can have better performance.
JESSICA:
Ah.
NOAH:
And as a result, it’s not what bankers want. It’s what systems programmers want. It’s what games programmers want. That’s perfect if you’re a game programmer. But if you’re a bank what you want is something that acts just like COBOL. I love you, never change. Otherwise we have to hire a maintenance programmer. Never change.
CORALINE:
That’s a significant cultural difference. And it’s pretty interesting how the underlying technologies inform the cultural values around those different communities. Are the deployment tools in Python as mature or as quickly changing? You sort of implied that they’re not, because there’s one way to do it in Python. Is there any iteration and innovation in that space in the Python world?
NOAH:
So, there’s absolutely innovation in that space in the Python world. The big one is Salt. I don’t know if you’ve heard of the SaltStack. But Salt is the Python equivalent of most of these Ruby tools. And it’s good. It’s a solidly good stack. A lot of people use it. I mentioned that blog article called ‘Moving away from Puppet: SaltStack or Ansible?’ Salt won over Ansible mainly because of those cultural issues. They have good response to bugs. They’re very on it. Salt also runs a lot faster than the various Ruby stuff because it was designed to run fast on large projects from the very beginning. Whereas to a large extent, the Ruby folks didn’t necessarily know what they were going to eventually be building with it. So, Python absolutely has good tools.
But the primary answer to this is that for deployment tools, it’s surprising how often you don’t need the tool written in your same language. It is very easy to use Capistrano to deploy your Python app or your C app or your Java app. Especially with Capistrano 3: they took all of the Rails-specific stuff from Capistrano 2, the stuff you used to need a separate module to patch out, and none of that is default in Capistrano 3. They rewrote it the right way, where it’s sort of a blank thing that knows how to run ssh on other machines, and Rails is an extra plugin on top. And if you don’t include that plugin, it’s not all about Rails the way it used to be. And so Fabric, which I believe is Python, and Salt, which I know is Python, those are very good, very solid tools.
And so again, it’s a difference of approach. The Ruby guys were very big on, we’ve got this problem. We’re going to jump right in. Some of our decisions are not going to be the very best, and that’s okay. We can refactor later. There will eventually be another tool. And honestly, sometimes the tools feel like that. There have been a lot of sort of half-assed Ruby tools. And that’s okay. A lot of the Ruby tools are very seat of the pants. And it’s okay because there have been so many more of them. We have thrown away a lot of bad Ruby tools. But we have kept some very good Ruby tools. And a lot of the very good Ruby tools have informed all of the later tools. SaltStack couldn’t have happened without Puppet or Chef. And Puppet and Chef are both very Ruby tools.
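[The Capistrano 3 design Noah describes above, a framework-agnostic core with Rails behavior as opt-in plugins, is visible directly in a project’s Capfile. A sketch, assuming the capistrano, capistrano-rails, and capistrano-bundler gems are in the Gemfile:]

```ruby
# Capfile -- Capistrano 3 loads a framework-agnostic core...
require 'capistrano/setup'
require 'capistrano/deploy'

# ...and Rails-specific behavior only if you opt in to the plugins
# (these come from the capistrano-bundler and capistrano-rails gems):
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'

# Custom tasks live alongside, in lib/capistrano/tasks.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
```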
JESSICA:
We do a lot of experiments [inaudible].
CORALINE:
With the advent of service-oriented architectures, a lot of dev teams that I’ve been exposed to are using that as a way for a particular service, they’re using that as a way to experiment with different languages like Go or Erlang or Clojure or some of these other languages. Are there deployment tools that are specifically targeting polyglot architectures?
NOAH:
I would argue that most newer deployment tools are at this point, it’s unusual for a deployment tool not to do that. Because well, because they’re often pushed by companies. And even if they’re not pushed by companies, they’re pushed by programmers to other programmers. And if you’re an elite programmer targeting other elite programmers, you know they screw around with a bunch of different languages. You know they write things in a lot of different languages. And so, they can’t use your tool if it’s a one-language deploy tool.
The question is really, what do you write the tool configuration itself in? But even for say Salt which is written in Python, they make it very easy for you to write one Salt, I don’t remember what they call the equivalent of a recipe, but the one Salt module, in Ruby or in other languages. Puppet and Chef are not quite as friendly as that. Again, I think it’s because they were earlier and in some ways their approach still feels earlier. But it’s absolutely possible to write in other languages as well. Ruby is just very much the default for those languages.
JESSICA:
So, it turns out that when we’re looking into deployment we can decouple that from our language choice for our applications.
NOAH:
Absolutely. You’re also seeing languages where one of their big selling points is being easy to deploy. Go is the language I always point to as my favorite example of that. Do you know about Go and deployment and what they do there?
JESSICA:
A little bit. But go, please, talk about it.
NOAH:
Yeah. So, their big thing is they don’t have a standard library in the same way. There’s libc, which you’ve seen, the C standard library. And so, almost everything on your system one way or another is dependent on libc. Go has no equivalent of that. Instead for all their standard library stuff, they fully compile it and build it into every Go binary. It calls all the way down to your operating system’s system calls directly without any intermediary, without any dynamic library in between. Which means it’s bigger. You know, the nice thing about libc is it’s factored out of all of these things on your system that use it. And so, they’re all smaller by that much.
But the flipside of that is that the problem you’re used to if you’ve deployed C programs, where you have to separately build on whatever the oldest operating system version you want to support is, no matter what your developers are running, that problem goes away. Because it’s a fully statically linked Go binary that calls all the way down to the system call level everywhere. And so, any build anywhere is going to be basically the same thing, which is really powerful. In the same way Java just compiles down to bytecode, and anything handled from that level is handled on the individual machine. So, Java has a different approach but it gets the same kind of, compile it once, deploy it wherever you want. Because if you’re going to have an old machine, that just means you have an old JVM and that’s fine.
JESSICA:
I’m confused. You said that it compiles all the way down to the system calls. And then you said you could compile it once and deploy it wherever you want?
NOAH:
Yes, for Go. That’s basically because the libc interface changes a fair bit. Libc is big. Libc has a lot of stuff in it. The system calls actually don’t change very often. The difference between one kernel version and the next is very small. They occasionally add new system calls but it’s very, very unusual to ever change how a system call behaves. And so, if Go compiles all the way down to the system calls, you can’t deploy it on every operating system.
JESSICA:
Oh, okay, okay.
NOAH:
[Inaudible] the Linux one is Linux-specific. But you can deploy it on any Linux operating system from the same build. You don’t have a separate Red Hat build from an Ubuntu build. You don’t have a separate Ubuntu 12 versus 14 versus 14.10. Most of the things that change relatively rapidly, that completely avoids. It’s the equivalent of compiling your C binary with static libc. You’re building that current one directly in. And so, the way you solve that for a C binary is you compile it with static libc. But you’d probably also build it again on the oldest one you can find, because some things depend on that interface. The system call interface is a lot smaller and it changes very rarely.
JESSICA:
So, Go is somewhere between Java and C in that Java you write it once and deploy it anywhere with a JVM on any operating system.
NOAH:
Yes, yes. Yeah, for Go you generally compile once per operating system but not for any smaller division than that. With C, it’s not uncommon to have a separate Windows NT version from Windows 2000 from more recent Windows. On the Mac it’s not uncommon, you remember fat binaries, when you had the two different operating system versions. And you can in fact compile for multiple versions in the same way on Mac. With Linux it’s very common to have multiple binaries for multiple distributions. And Go gets around that beautifully.
JESSICA:
So, Go has the compromise of build it a few times for great performance on many, many different machines.
NOAH:
Yes, combined with building it on recent machines compiles to the same thing that it would on older machines. And so, you don’t have to keep the ancient rickety Ubuntu 12 machine sitting in the closet just to have a build server.
JESSICA:
You brought up the point that part of the success of a language these days is the deployment story.
NOAH:
Mm, absolutely.
JESSICA:
Are tools like SaltStack making that less of a deciding factor?
NOAH:
I think that when tools like SaltStack get better, they will make that less of a deciding factor. Right now the configuration management tools, the tools that will actually provision a whole server for you, are very hard to use. They require a lot of expertise. They require a lot of time. It also doesn’t help that the debug and deploy cycle is very slow. For a midsized project 10 minutes is a pretty standard amount of time for it to take to do a deploy with a tool like that. For a larger deployment 30 minutes is absolutely not out of the question. There are Chef runs that take a lot more than 30 minutes out there, especially for many different nodes and many different, a lot of servers at once.
JESSICA:
Agree. Our Ansible deploys take at least 20 minutes. And that’s not a big deal if what you want to do is deploy to production. But when what you want to do is debug the deploy script, oh it’s so painful.
NOAH:
Yeah. And so, while we have some good tools like Vagrant that allow deploying locally, which is faster and doesn’t cost you money for hosting, that’s nice, they’re also slow. Vagrant is not fast. And it’s not fast partly because of say Chef which it’s usually running. But it’s also not fast on its own. If you had a really fast deployment tool, Vagrant would slow it down a lot. It’s just that when you’re running Chef Vagrant’s overhead doesn’t matter.
JESSICA:
[Chuckles]
CORALINE:
So, if we’re writing code to deploy code, it sounds like the feedback cycle might be too slow to actually drive that with tests.
NOAH:
It’s old-style testing. If you do it, it’s the way you used to test. It’s daily tests and weekly tests. It’s not constant tests. That’s absolutely true.
JESSICA:
But it is automated tests?
NOAH:
Yeah. Actually, automated tests get a lot easier with these as long as you have a small number of physical machines to do them on. So, I mentioned the Mad Science deploy tool stack that I have to unify this. And the best thing about driving it all through Vagrant is that I have a single command line that will create the new virtual machine, deploy everything to the new virtual machine, run the applications, wait until the applications are ready, run tests on the applications. If you want to do automated testing, new deployment tools are much, much better than older deployment tools. They’re not perfect, but they’re much better than they used to be. Basically, if you can run a virtual machine on your test node, the same few command lines that you would run to get a production environment are the same few command lines that you run to test your production environment.
And that’s a huge improvement. That’s very nice.
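[A hedged sketch of the kind of single entry point Noah describes: one task that brings up the VM, deploys to it, and tests against it. The task name, Capistrano stage, port, and smoke-test script are hypothetical; Mad Science’s actual interface may differ.]

```ruby
# Rakefile -- hypothetical wrapper around the deploy toolchain for local testing
desc "Bring up a local VM, deploy the app to it, and run smoke tests against it"
task :test_deploy do
  sh "vagrant up --provision"                    # create/provision the VM with Chef
  sh "bundle exec cap vagrant deploy"            # push the app with Capistrano (hypothetical stage name)
  sh "ruby smoke_test.rb http://localhost:8080"  # hypothetical smoke-test script against the forwarded port
end
```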
JESSICA:
It’s also reproducibility.
NOAH:
Yeah, absolutely. Although reproducibility is somewhat mixed. Because in the same way that your C code compiles down to Assembly and anything in browser compiles down to JavaScript, all of these things compile down to Bash.
JESSICA:
[Laughs]
NOAH:
And so, reproducibility… Yeah, well no really, truly.
JESSICA:
[Laughs] Alright. So, Bash is the bytecode of deployment?
NOAH:
Yeah, absolutely. You’ll see a few things that try to get around that by using direct calls. But all those direct calls are modeled on Bash. All of the things they do with them. Because the way you think about these things, I want to do something to my server, is in a shell script. Chef, there’s no reason it has to be based on Bash but it is.
JESSICA:
In Windows, is that PowerShell?
NOAH:
Yeah, often. Yeah, all of these things support PowerShell as well, if they have a Windows client, which increasingly they do. Now, the one big exception to this, the one big hope that we’ve got on the horizon, is Docker.
JESSICA:
Mm.
NOAH:
Have you heard of Docker at all?
JESSICA:
Oh, yeah.
NOAH:
Okay. It’s hard not to. Docker’s going to be a big deal. It’s really not arrived yet. But it’s going to be a big deal. So, with Docker you can easily have a lightweight VM that you instantiate within your VM. It’s basically a little Linux running inside Linux, and you get to decide exactly how closed off it is, whether it’s got its own CPU quota, whether it’s got its own disk quota. You almost always want it to be chrooted.
JESSICA:
Chrooted?
NOAH:
Sorry, sorry, chroot is… I’m an old C guy so I often throw around old system programming stuff. So, chroot standing for change root in the same way that chmod means change modification bits and chown means change ownership. Chroot means change where the root file system is. So, what you can do is you can tell a particular subprocess this little subdirectory down a number of layers, as far as you’re concerned that’s the root of the file system. You can’t see anything above that, not ever again. It’s a great security technique. It means that there is nothing that you can do inside that process or any of its children forevermore that can write outside of that directory tree.
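[Ruby exposes the primitive Noah describes as Dir.chroot, a thin wrapper over the chroot system call; it has to run as root, and the jail path below is just a placeholder.]

```ruby
# chroot_demo.rb -- must be run as root; confines this process to /var/jail (placeholder path)
Dir.chroot("/var/jail")   # from here on, "/" means /var/jail for this process and its children
Dir.chdir("/")            # step inside the new root

# Any file the process writes now lands inside the jail:
File.write("/hello.txt", "written inside the chroot")
```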
JESSICA:
It’s also beautiful from the perspective of a functional programmer, because the side-effects are limited.
NOAH:
Exactly, exactly. You’re limiting the scope of side-effects. And pretty much all of the various, when they talk about containerization, that’s what they’re talking about: different ways to limit the scope of side effects. You have process namespaces where you’re basically saying, we can see which of these processes are yours. And if we need to kill everything that’s yours, we know where to find that. The CPU limited quota and disk limited quota, and yeah, chrooting, changing what portion of the file system you can see, these are all the same general idea. And as you say, it’s a lot like functional programming. It’s, here is the space in which you may act. Anything outside of that, you can’t act on it, you can’t see it, it’s dead to you.
JESSICA:
A lot of reasoning about code is being able to say what it won’t do.
NOAH:
Mmhmm. The big thing about Docker is that it makes it easy to impose certain limits. And it does it really fast compared to previous solutions. The problem is that we haven’t figured out all the tools that will make Docker useful. It’s clearly an incredibly powerful primitive. And we’ve got a few really interesting tools around it. But what you want to be able to do is to wrap your code up in a Docker container before you ever deploy it, which is unfortunately at this point built out of raw bash commands (you don’t use a configuration management tool to build those), and then ship it out. And then you have exactly the version that you had in development for all of the libraries and for the Ruby version that you ran and all of these things. You build all of that into the container.
It’s like if you shipped your own whole VM and then sent it out and ran that VM on another computer, inside it, and just hooked up ports between it and the other VMs. And it’s even faster than that. If that was your workflow, if you used to ship VMs around, Docker is like that but really fast. And you’ll love it and it’s amazing. But if you didn’t use to do it that way, you’re waiting for the tools to make Docker usable, because you want to be able to ship several VMs to the same computer, run them partially wired up to each other, and deploy things in various hot swappable ways. Plus oh, how do you get the state out of that container?
JESSICA:
It’s such a beautiful vision.
CORALINE:
Though you said that the tools around Docker aren’t mature enough. Where do you see those gaps being right now?
NOAH:
So, there are some early tools. There’s not a standard, it’s not well-established, for wiring them up. That is part of it. The other thing is, what image do you run inside a Docker container? Because it’s not exactly a full-on first class VM. In Docker, you’re allowed to designate any process you want as the init process. And one of the things you discover, especially if you’re an old-school Unix wonk, is that not every process is equally good as an init process. Because most processes don’t ever expect to be the init process. They expect the init process to be the init process for them.
And so, it’s easy to wind up with huge lists of zombie processes sitting around, because this process doesn’t know it has to harvest them. Or to wind up with problems where, how do you debug it? Say you roll a Docker container out. Do you run an sshd? And if so, does that mean you’re running a separate sshd with every piece of software you put on the machine? There are some ways that you can spawn an additional process inside a Docker container. But again, that’s not what these things are used to. If you use a standard Linux distribution, you’re installing far too much, and it doesn’t expect to usually run that way.
All of these are interesting problems and they’re solvable problems. We have some idea what the solution to this is going to look like. Now, what we do is we have a fight about what the right workflow is, because this is an incredibly powerful primitive operation. Now we come up with five or six different workflows for it and all but one or two of them are going to lose and go away. We just don’t know which one or two.
JESSICA:
Once we have this, will we have that midrange solution for Heroku’s not enough but I’m not the enterprise?
NOAH:
I hope so. Docker looks like the big hope for that right now. Docker certainly looks like it could make a lot of our existing workflow faster there. But it depends who builds the solution. One of the problems with configuration management tools right now is that they’ve been built out entirely by enterprises. There are a few small people using them, but not many. For most individual developers, it just looks like too much work. And so, individual developers aren’t in there contributing. And so, the solutions never grow in those directions. They never get good for us because we’re not there as part of that process.
JESSICA:
From a developer perspective, it’s really painful to switch hats and say, “Okay, I’m going to stop writing Clojure or Ruby or Scala for a while. And I’m going to learn Chef or SaltStack or whatever it is.” It’s a completely new world to wrap your head around. And it does, it takes a ton of time. You kind of need a specialist on each team to do that. But as soon as you say team, you’re getting into the enterprise. We have a special team that does that stuff.
NOAH:
Yeah.
JESSICA:
And tries to make Docker approachable for the rest of us.
NOAH:
And we know what those problems look like. We already have those problems, the problems that you get with a special siloed team that uses their own specialist tools for this. That’s the problem we’re trying to fix right now. And so, if we switch right back into that, we get those same problems again.
JESSICA:
On the other hand, you have things like the developer tools team at Netflix whose job is to use programming to solve the problems that programmers have.
NOAH:
Mmhmm.
JESSICA:
Or Coraline’s developer happiness team.
CORALINE:
Yeah, we do a lot of work around tooling as well. Not particularly in deployment just yet. It’s not one of the high-value pain points that we’re seeing at present because we do have a mix of DevOps and an Ops team. But yeah, there’s definitely a place for that sort of glue code or glue process to be put in place, and to work out the issues there, work out the pain points there, before making that available to the broader team.
NOAH:
Yeah.
JESSICA:
And then with a good abstraction in front of it, because that’s really what we’re looking for, right, in that Docker is a good abstraction.
NOAH:
Yeah. And to a large extent, if you want to see what one good abstraction looks like, look at Heroku. Heroku does a beautiful job here. Though I’ll say they only do a beautiful job because it’s specifically for Rails. Because it’s specifically for the HTTP stack. The more you know about what someone is going to do, the better an abstraction you can provide them.
CORALINE:
So, are we still at the point where we’re developing the metaphors that are going to lead to the abstractions that we need to make this a painless process?
NOAH:
I think we’re not going to get to a point for deployment where there is a sort of academic-derived set of metaphors. I don’t think we’re going to get a lot of the kind of metaphors like binary trees and state machines that make a lot of programming so easy to talk about now. And the reason I believe that is because most of the problems with deployment are incidental complexity by their nature.
JESSICA:
Our team at Outpace that worked on Docker was like, “Get Docker running. It’s super-fast. Fix corner cases. Fix corner cases. Fix corner cases. Fix corner cases. Fix corner… et cetera, et cetera, et cetera.”
NOAH:
Yeah. The thing to remember with Docker is that it’s a weird specialized VM. And the first thing that means is that a lot of our existing hacks don’t work. Deployment is full of a lot of our existing hacks don’t work, because it is all incidental complexity in a lot of ways. It’s wonderful when it’s not all incidental complexity. But then the first thing somebody does in that case is to wrap it up in a good tool. App servers in Ruby are a great example of this. We have several competing app servers. They’re very good. But that means the app server part of deployment is now wrapped up in an app server. And so, in other cases where there’s a nice abstraction, you stop thinking of it as a deployment problem.
JESSICA:
Oh, that’s interesting. So, have we sort of defined deployment problems as all the other stuff that’s not easy?
NOAH:
Mostly, deployment is something programmers don’t want to do. And I think that may stay true forever. As programmers…
JESSICA:
[Laughs]
NOAH:
Yeah. Well, as programmers it’s very easy to draw the line at “works on my machine”. There’s a reason that that’s a joke among programmers, because…
JESSICA:
Oh.
NOAH:
No look, as far as I’m concerned it works. I have designed the necessary mental structure to make it all happen and I have proved that it works in the real world. Not my problem from here on out.
JESSICA:
So, the goal then is to use Vagrant and automated deployment to make my machine very much the same as production.
NOAH:
Exactly, something along those lines. And it’s not clear if Vagrant is going to be the right tool. It’s not clear if Docker’s going to be the right tool. But yeah, our current approach is to make something like Vagrant or something like Docker, a little VM or a thing like a VM that you can deploy to just to make people say, “works on my VM,” which is much better than, “works on my machine,” since VMs are more portable and more analyzable.
JESSICA:
Agreed. My machine is a special snowflake that I personally installed whatever Ruby versions I felt like installing. And oh my gosh, the pain.
NOAH:
Yeah, absolutely. So, I think the small-scale problems with deployment, the deployment in the case where you have two servers, three servers, or even just one server, which is where I mostly am, if they are fixed they will be fixed in the same way: when people find a reason to throw time or money at it. Because it’s a problem that’s going to take a lot of time and money. The people I look at as my customers on my stuff for deployment are small business owners with very few servers. And it’s because if I put a lot of time into fixing their problem, I can then charge a bunch of money for fixing the problem. But in that same way, hobbyist developers tend not to want to put a lot of time into the kind of things they don’t think of developers as doing.
So, the hope is that in the same way that the enterprise work that has gone into, “Oh hey I’ll fix that for the enterprise because that’s worth a lot of money,” okay now for hobbyists we get Chef, which wasn’t for us but it works for us. That in the same way as you’re seeing the rise of very small technical business people, I immediately think of folks like Patrick McKenzie, but the people who have a business and that business needs a couple of servers, that you will see more people say, “Hey, this is worth my time to fix because I can get them to pay me for it.” I would like to say that you’ll also get work for them that becomes open source. And sometimes you will. But it’s hard to get generalizable deployment products in general because deployment is made almost entirely of corner cases. And the extra time to make it generalizable takes time that no individual person needs to use.
So, it may be that it’s like enterprise when there’s money to do it. You’ll see people who say, “Oh hey, I can sell into this market,” which is what I’m doing. So, I hope it turns out to be big and I hope I get a lot of company.
CORALINE:
So, you mentioned your Mad Science tool. Can you go into a little more detail about how that works and what the problem is that you’re trying to solve with that?
NOAH:
Sure, absolutely. So, when you’re deploying code, when you’re trying to set up a server around your app, there are a lot of tools that you would usually use for that. Vagrant makes a local virtual machine. Chef is used to configure the various software that you need, because say you need libxml installed because you’re using Nokogiri, or say you’re using Redis so you need Redis installed, things like that. The toolchain is generally Chef configures the server, Capistrano pushes your app to it, and Vagrant is for local simulation. There are also a lot of other tools to do things like download third-party Chef cookbooks. And if you’re going to deploy to say DigitalOcean or AWS or Linode you’re going to need something to spin up new nodes there.
And so, what I do is basically take all of these tools and shuffle them together. Do a top-level integration where if what you want is just a basic Rails app with no dependencies, wonderful. Tell me the Git URL to clone your app from, which is often just give me the GitHub URL. And then press the one button and it will cheerfully set up a Vagrant VM around it with all the basic stuff a Rails app needs. Or if you want to deploy to DigitalOcean, edit the DigitalOcean file to select your instance type and give your API keys and then do the same thing, giving me the Git URL to clone it. I will make a new DigitalOcean instance and put all the appropriate stuff on there. But then when you want to deploy the app again, Capistrano runs a lot faster than Chef. And so, push to the server with Capistrano and handle all of the basic Rails stuff like prebuilding assets and migrating your database, and those things that you do when you deploy the app.
So, Mad Science is basically: you have all of these disparate tools. Give me a single top-level file that integrates with all of them, but give me a migration path upward so that when I say, “Okay, your tool was fun, but I’m outgrowing it. I need to do a lot of my own Chef cookbooks and I need to configure them in weird ways. I need to stop using Capistrano. I want to migrate to a different tool,” you can easily just strip that top level away and get down to the tools underneath. Which are intimidatingly complicated if you’re just starting, but awesome when you scale up. And so, it’s providing a step between where you would just use Heroku and where you would actually learn all of these tools in great detail, probably spending months doing it.
CORALINE:
It sounds like one configuration to rule them all. Is that the ultimate goal?
NOAH:
The ultimate goal is to have a one-configuration-to-rule-them-all that can get out of your way afterward. But yes, it’s very much a single configuration for all of the tools. DRY things out. Use the same thing in a number of places without stopping you from migrating upwards. The trick there is just avoiding lock-in. And I work very hard to keep from locking you into my tool. I don’t try to avoid lock-in on tools like Chef and Capistrano, because those are serious industrial-quality tools that are used by large companies at scale. If you’re looking for a beginner, how-do-I-start-out tool, me locking you into those tools isn’t really getting in your way. It’s just giving you something more powerful than you currently want to handle, which you can then learn how to handle when you’re ready.
CORALINE:
What sort of feedback have you gotten from the community on the tool?
NOAH:
Less than you’d hope. [Laughs] It’s hard to launch open source. It’s hard to get people excited about it. I’ve gotten some GitHub stars. But comments are harder. I’ve had a lot of people look over the various pages about it and say, “Wow, I think this will be really big. Wow, you’ve really captured the pain,” which is great. But it’s not nearly as good as, “I want to use that.” The biggest compliment of that kind that I’ve gotten is Patrick McKenzie saying, “This looks really good. Let’s deploy my next server using it.” So, I have high hopes on that. He’s been a busy guy lately. [Chuckles] What I really want is a lot of stories of, “I went out and deployed my server and it worked great.” And while I have those from me, it’s not the same when it’s from me, you know? [Chuckles]
But it’s new. It’s early. I think it will be very good. And more to the point, this is a very real pain. And it’s hard to do this well. So, I think that there will be a lot more takers. And I hope that this interview will help a few more people hear about it. I hope a lot more people will try it out and say, “Wow, this beats spending months learning Chef and learning Capistrano and figuring out how to coordinate them.”
JESSICA:
I’m in favor. So, how do you feel the learning curve is for Mad Science?
NOAH:
I feel the learning curve is really good. The big thing I’ve done is there’s a two-line install that will download all of the local install tools that you need, clone the repository where you configure it for your app, run the Vagrant stuff to get a local machine there. Or if you want, you can actually set an environment variable and the same two-line install would put you on real hosting, on DigitalOcean or on AWS.
JESSICA:
Nice.
NOAH:
Yeah.
JESSICA:
Is it only for Rails apps? Can I use it for Clojure yet?
NOAH:
You can’t use it for Clojure easily. It’s for Rails apps and it’s for Rack apps. You can do anything that’s got a Chef cookbook, but basically the way it starts out is with everything Rails needs. Or you can cut it down from there. I actually use it to deploy a WordPress site as well at the same time. But it doesn’t work easily for every programming language. It is not a replacement for Heroku.
A lot of people want a local Heroku. And the problem with a local Heroku is somebody’s got to maintain the code to handle that many different cases. I’m very happy to handle Ruby on Rails. I’m very happy to maintain things like: yes, it’ll work with Postgres and Redis and MySQL, and it makes the same database account, the Redis and the Memcached stuff, things like that I’m very happy to maintain. But the problem with any of these tools is that as you add things you support, you get a combinatorial explosion of complexity. You get, this cookbook doesn’t work with this other cookbook. You get, here’s this combination of tools, but I want to use rbenv instead of RVM. And that’s fine. Rbenv is a great tool. RVM is also a great tool. But maintaining all of these different combinations is really hard right now.
JESSICA:
Right.
NOAH:
For the people selling as well as for the… there’s a reason that you wind up seeing things like Heroku which are sort of the Apple App Store for infrastructure where you have a small number of curated things and you don’t get to choose from the whole array. You get to choose from these specific things. And a lot of that is the maintenance problem.
The other thing I do with Mad Science that very few people do, and I wish they all would, is lock down all the versions of everything. Almost all the tutorials say, “Oh, get Chef. Oh, get Vagrant. Oh, get…” No. No, no, no. You should be getting Chef 12.0.3. And in fact, you should not be the one getting Chef 12.0.3. You should be saying gem install madscience, run my setup, and I will get Chef 12.0.3 for you. I will get the specific plugin versions for you. I will get the specific cookbook versions locked down with the equivalent of a Gemfile.lock for you. So that when you are using this, you are using the specific versions I’ve tested together, because all of these things break all the time. It’s like the problem with gems but far more so. And you have far more components and they’re much finickier with each other. They’re not usually tested together.
And so, I wish everyone would lock down the versions completely. And now that I’ve done the months of work to make that happen, I see why they don’t.
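In practice, that kind of lockdown looks a lot like ordinary gem pinning. A sketch of the idea; the Chef version is the one mentioned above, while the Berkshelf and Capistrano constraints are only examples:

```ruby
# Gemfile for the deploy repo -- pin the toolchain the way app gems are pinned
gem 'chef',       '12.0.3'  # the specific version tested with the rest of the stack
gem 'berkshelf',  '~> 3.2'  # illustrative constraint
gem 'capistrano', '~> 3.3'  # illustrative constraint

# Berkshelf's Berksfile.lock (generated by `berks install`) then plays the
# role of Gemfile.lock for the cookbook versions themselves.
```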
JESSICA:
[Laughs]
NOAH:
But I wish [inaudible].
JESSICA:
Yeah, Mitchell pointed out that Vagrant has to be a community effort. It has to be open source to support all those different combinations.
NOAH:
Yes. And so, Mad Science is a much narrower scope but it’s the same general idea. It’s, I’m going to lock it down to we support MySQL and Postgres. I’m going to lock it down to, we support this set of versions for this stack version. There’s going to be a Mad Science 2.0 and 3.0 and 4.0 stack. And the big difference is going to be, okay, let’s support a later Chef version. Let’s support a later Vagrant version, later plugin versions.
But it’s very much the same idea as Gemfile.lock. Look, I’ve tested this combination of things. And now you can try it with any Redis version you want, any Memcached version you want, anything that’s got a reasonably good Chef cookbook. Here’s how you expand on it. And again, at some point you outgrow it. One of the problems with curated is that eventually you outgrow it. Not only does any Sinatra app eventually grow into reproducing Rails, but every Rails app at big companies eventually grows into Java Enterprise Edition in the end.
CORALINE:
[Chuckles]
NOAH:
Yeah, well anything curated, anything that is sufficiently cut down, people are going to find ways that they want to bend and break those constraints. And that’s good. That’s healthy. That’s perfect. But if you give people a little base to grow from, if you give people a heavily-curated “this is what Rails is” and let them start from there, you’ll find that’s a lot better than giving them the full sanity-destroying complexity of: here are all the possible tools you could use, here are all the combinations.
Go work for months to learn all of it and then try it.
JESSICA:
Agreed. So, this is somewhere between Heroku and roll your own using these tools.
NOAH:
Yes.
JESSICA:
And a friend of mine says, don’t make your software perfect. Make it easy to perfect.
NOAH:
That’s a very good way to put it.
JESSICA:
Yeah. So, if you start with Mad Science, it’s not like Heroku, where you just aren’t changing the versions of things. It’s just that if you change a version of something, you need to put some work into it. And then you need to test it.
NOAH:
Yeah. And so, if you buy the class that goes with it, which you could think of as a commercial support package, the big thing you get is a giant troubleshooting guide full of, “Okay, if you change this and it doesn’t work here’s what it looks like when it breaks. If you change this and it…” The troubleshooting guide is not so much for tutorials. If you’re good at Chef you don’t need most of my troubleshooting guide. If you’re good at Vagrant you don’t need most of my troubleshooting guide. If you’re sufficiently good at all these tools then mostly what you want is a quick start. And great, download the free version. The free version is a wonderful quick start.
The difficulty is in growing into that. And the difficulty in growing into that is that everything breaks in so many directions. And so, a little bit of handholding can help you learn Chef even though Chef is in the middle of this giant, complex system. Or learn Capistrano, even though it’s nested in this giant, complex system.
JESSICA:
Right. And the question is always, what are the relevant bits that I really need to learn in order to get my task done?
NOAH:
Exactly. And the nice thing about starting with something that works there is that you can easily say, “Here’s where I’m starting. Run it. It works.” Make a small modification. Run it. It works. That’s what is really difficult for the current…
JESSICA:
How long is that cycle?
NOAH:
10 to 30 minutes, depending on what you’re doing.
JESSICA:
Mm.
NOAH:
Yeah, not fast.
JESSICA:
I like to do that kind of task when there’s really some other little toy program I want to write. And then I can use the cycle as an excuse to work on what I really want to do.
NOAH:
Yeah.
CORALINE:
I’m not slacking, it’s compiling, basically?
JESSICA:
Exactly.
NOAH:
Yeah. Well, the other thing is that you can do it the way so many people tell you to with your programs: debug the program but have this going in the background. If this is a 30-minute cycle, there’s no reason you can’t be debugging your app and writing it while you get the deploy right. I mean, you don’t have to. But it’s a useful way to do it, because that’s really what I’m trying to do: instead of having a horrible constellation of different deploy stuff, you actually have one repo that contains all of your deploy stuff, just like your app repo. And you do it in source control and you modify it and you can do it test-driven. And you can absolutely run it constantly in the same way. You add Redis to your app. Well, great. Kick off another deploy cycle. Discover it doesn’t work. Say, “Oh, okay. I’ve got to add a Redis cookbook.” Add that, kick off another deploy cycle. It’s annoying that it’s not fast enough to go in the same unit test suite with the app. But it’s still much better than how this is traditionally done.
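That “add a Redis cookbook” step is usually a one-line change in the deploy repo plus a run-list entry. A sketch; the cookbook name and version constraint are illustrative, use whichever Redis cookbook your stack actually supports:

```ruby
# Berksfile -- declare the extra cookbook; `berks install` then locks its version
source 'https://supermarket.chef.io'

cookbook 'redisio', '~> 2.0'  # illustrative; any well-maintained Redis cookbook works

# Then add something like 'recipe[redisio]' to the node's run list and kick off
# another deploy cycle, exactly as described above.
```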
JESSICA:
Right. It’s one of those once a day or a couple of times a day tests.
NOAH:
Yeah. And this is why we need the tool suite for Docker. Someday there will be, and when I say someday I mean probably within the next two years, there will be a way to do this same thing using Docker that will be much faster. It’s just that right now putting all that together is horrible, and it requires you to give up a lot of OS independence and other sorts of independence, because you can’t use a real config management tool with it. What I really hope we see is config management tools that output Dockerfiles as their output format. If we had Chef, but instead of running operations on your VM it output a Dockerfile, that would be wonderful.
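Nothing like that existed off the shelf at the time, but the idea is easy to sketch in a few lines of Ruby: take a Chef-style list of package resources and compile it into a Dockerfile instead of converging a VM. Purely a toy illustration, not a real Chef feature:

```ruby
# Toy sketch of "config management that emits a Dockerfile" -- hypothetical
packages = %w[libxml2-dev libxslt1-dev redis-server]

dockerfile = [
  'FROM ubuntu:14.04',
  "RUN apt-get update && apt-get install -y #{packages.join(' ')}"
].join("\n")

# Write the generated Dockerfile instead of running the installs on a VM
File.write('Dockerfile', dockerfile)
```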
JESSICA:
Another layer of abstraction above Bash?
NOAH:
Yeah, exactly. Your Dockerfile right now is a glorified Bash script. And it’s not even very glorified.
JESSICA:
[Laughs]
CORALINE:
So, what’s in your roadmap for Mad Science? What’s the part you’re going to do next? Are you going to add more curated components for applications? Are you looking outside of Rails? Or exactly what do you have in mind for future versions?
NOAH:
So, the big thing I’m working on is multiple VM setups. If you want to have a separate database server from your multiple app servers and you want to deploy a load balancer, Vagrant will do that. All of these tools will do that. And so, I’m making it easy to surface that, to use Mad Science with it. And so, that’s my big next thing, because if you’re a hobbyist developer, then even for high performance there are probably only three or four different app architectures you actually want to use. You probably want to get master/slave on the database. You probably want to get a separate database server from your app server. You probably want multiple app servers with a load balancer in front of them. But that’s not very many different examples. Five or six good solid examples could give you a way to scale your app through these different architectures very painlessly. And so, that’s the big next thing I’m doing. And that’ll take me a while. That’ll be a big one.
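Vagrant’s multi-machine support is the piece that makes those topologies easy to simulate locally. A minimal sketch of a database server, an app server, and a load balancer; the box name and IP addresses are placeholders:

```ruby
# Vagrantfile -- minimal multi-machine sketch; box and IPs are illustrative
Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/trusty64'

  config.vm.define 'db' do |db|
    db.vm.network 'private_network', ip: '192.168.50.10'
  end

  config.vm.define 'app1' do |app|
    app.vm.network 'private_network', ip: '192.168.50.11'
  end

  config.vm.define 'lb' do |lb|
    lb.vm.network 'private_network', ip: '192.168.50.20'
  end
end
```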
CORALINE:
Sounds great. It sounds like you’re thinking ahead and trying to anticipate some of the problems that people are going to have as they scale their apps up.
NOAH:
Absolutely.
JESSICA:
The class that you talked about Noah, is that an online screencast kind of course?
NOAH:
Yes, it absolutely is. It’s PDFs, videos, a certain amount of back and forth via email. But yes, it’s very much an online interactive class. And of course, talking to me and to the other students some. But yeah, it’s basically support for the open source software. The software already has documentation and there will be a lot more. But right now it’s much easier for an expert to use. It’s easiest to use if you’ve been trying to do this and bashing your head against the wall of these hard to use tools. Suddenly, the open source looks pretty good. But it’s going to get a lot easier to approach. And part of that is the commercial support for the class. Part of that is the documentation that I write as a side-effect of the class. But yes, it’s an online class.
JESSICA:
Okay. But it’s not a watch at your own pace?
NOAH:
It is.
JESSICA:
Oh, it is. Oh, it is. You mentioned interaction with you.
NOAH:
Ah, so think forum. Not so much everybody starts at the same time and all has to be moving together in lockstep, because for deployment that’s not even possible.
JESSICA:
Okay. [Chuckles]
NOAH:
The thing that the software does really well is it makes it very easy to get your app up right now before you have to do almost any of the learning. I think that’s a place where most deployment solutions do very badly that Heroku does very well.
JESSICA:
Early success goes a long way toward motivation.
NOAH:
Exactly. Well, and as I teach you the upper layers of each of these tools, here’s a little bit of how to use Capistrano. Here’s a little bit of how to use Chef. All of that is much more motivating if you already have a thing to use with it.
JESSICA:
Agreed. Like if you happen to be waiting while you’re debugging a problem in your deployment and you have half an hour at a time.
NOAH:
Yeah. Or if you’ve got a production-only bug and you need to figure out how to run the console properly via Capistrano. Yeah.
JESSICA:
Alright. Should we do picks?
NOAH:
Sure.
JESSICA:
Coraline?
CORALINE:
Sure, I’ll go first. My first pick is actually an article that pointed me in the direction of a tool called TuneMyGC. Garbage collection has been at the center of performance and memory tuning for Ruby for a long time, but it’s not something that’s easily approachable by most Ruby developers. So, this group called Bear Metal has come up with a gem called TuneMyGC. You basically add it to your Gemfile, you register your Rails application, and you boot the app, and it outputs an optimal garbage collection configuration when the application ends. So, you can use that to get significant speed boosts in your app and also in your test suite.
One of the anecdotes that I saw talked about reducing a test suite runtime by about 25% just through garbage collection tuning, which is a pretty impressive metric. And as Rubyists we don’t tend to think of the number of messages we’re sending or the number of objects we’re instantiating necessarily. We’re more focused on the problem at hand that we’re trying to solve. And this sort of tool can really give you some insight into how those sorts of things are affecting the speed of your program and how you can tune them for optimization and efficiency.
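For reference, TuneMyGC’s setup is roughly the following, per its README at the time; the exact commands and variable names may have changed:

```ruby
# Gemfile
gem 'tunemygc'

# Then, roughly:
#   bundle exec tunemygc your@email.com        # register and get a token
#   RUBY_GC_TOKEN=<token> RUBY_GC_TUNE=1 bundle exec rails server
# After enough requests, it reports a set of suggested RUBY_GC_* environment
# variables to boot the app with.
```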
NOAH:
Cool.
CORALINE:
The other tool that I wanted to talk about is something called Rbkit which does profiling of Ruby apps. It does live object count, heap sizes, GC stats. It’s something that you integrate into your system. You install a gem basically. You add a couple of lines to your application to start the profiling. And it gathers the data and sends it to a desktop application for parsing and processing and displaying that performance data. So, another tool that you can use to understand what your app is doing under the hood, and hopefully use that data to go and drive some performance gains.
JESSICA:
Sweet. I have one pick today. And this is one that I got from my kids the other day who convinced me to take off my shoes and take off my socks and follow them to the mud puddle. And it was wonderful. It was cold. There was still a little bit of snow on the ground. Some of the mud was a little chilly. But it’s totally worth it. So, it’s spring and I recommend that you all take off your shoes and take off your socks and get your toes into some mud. Now on the other hand, one of the six-year-olds decided that if feet in the mud were good, then she was going to roll in the mud.
NOAH:
[Chuckles]
JESSICA:
Which was hilarious because she looked like a total mud monster. Fortunately for me, it wasn’t my child.
CORALINE:
So, your pick is for outdoor showers?
JESSICA:
[Laughs] She did get hosed off, totally. So, my pick is feet in the mud. Maybe not everything until it’s much warmer outside.
NOAH:
Excellent.
JESSICA:
Noah, what do you have?
NOAH:
I have two picks. My first is a paper book and a classic in the deployment field, ‘Release It!’ by Michael Nygard. It’s a wonderful breakdown of a lot of the reliability practices that we should use and usually don’t, except at big companies. Things like: wrap all of your external service calls, all of your REST calls or things like them, so that if a service starts returning errors, you can basically cut it off to avoid slow requests slowing everything down, and be able to count that, and be able to fall back to other services. A lot of the techniques that wound up in the wonderful Netflix open source Hystrix, their piece of software to do reliability on a lot of your API calls, come straight from ‘Release It!’ Just a really, really good book.
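The core “cut it off” idea from the book is the circuit breaker pattern. A bare-bones sketch in Ruby, purely illustrative and not Hystrix’s or any particular gem’s actual API:

```ruby
# Minimal circuit breaker sketch: after enough consecutive failures, fail fast
# for a cooldown period instead of letting slow calls pile up.
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 5, cooldown: 30)
    @threshold = threshold  # consecutive failures before the circuit opens
    @cooldown  = cooldown   # seconds to wait before allowing another attempt
    @failures  = 0
    @opened_at = nil
  end

  def call
    raise OpenError, 'circuit open' if open?

    begin
      result = yield
      @failures  = 0
      @opened_at = nil
      result
    rescue StandardError
      @failures += 1
      @opened_at = Time.now if @failures >= @threshold
      raise
    end
  end

  private

  def open?
    @opened_at && (Time.now - @opened_at) < @cooldown
  end
end

# breaker = CircuitBreaker.new
# breaker.call { external_service.fetch }  # raises OpenError once the circuit is open
```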
And my second pick is Jim Gay’s ‘Ruby DSL Handbook’. I’m a big metaprogramming fan. I’m a huge metaprogramming fan. And I think we need far more discussion of sustainable, understandable ways to use it properly, ways to take metaprogramming and get useful debuggable results, when to use it in the real world. I throw metaprogramming at way too many problems. Most people I think don’t throw metaprogramming at enough problems. I love discussion about when it’s a good idea, and approaches to actually do that. So, Jim Gay’s ‘Ruby DSL Handbook’ is my other pick.
CORALINE:
Great. Well, it’s been wonderful having you on the show, Noah. Thank you for sharing your insight and your toolset and your vision of the future of deployments. It’s been really great talking with you.
NOAH:
Thank you. It’s been wonderful being on the show. Thank you for a lot of lovely, perceptive questions.
JESSICA:
Thank you very much.
[This episode is sponsored by WatchMeCode. Ruby and JavaScript go together like peanut butter and jelly. Have you been looking for regular high-quality video screencasts on building JavaScript done by someone who really understands JavaScript? Derick Bailey’s videos cover many of the topics we talk about on JavaScript Jabber and Ruby Rogues and are up on the latest tools and tricks you’ll need to write great JavaScript. He covers language fundamentals so there’s plenty for everyone. Looking over the catalogue, I got really excited and can’t wait to watch them all. Go check them out at RubyRogues.com/WatchMeCode.]
[This episode is sponsored by MadGlory. You’ve been building software for a long time and sometimes it’s get a little overwhelming. Work piles up, hiring sucks, and it’s hard to get projects out the door. Check out MadGlory. They’re a small shop with experience shipping big products. They’re smart, dedicated, will augment your team and work as hard as you do. Find them online at MadGlory.com or on Twitter at MadGlory.]
[Hosting and bandwidth provided by the Blue Box Group. Check them out at Blubox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]
[Would you like to join a conversation with the Rogues and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at RubyRogues.com/Parley.]
[End of podcast]