[This episode is sponsored by Hired.com. Every week on Hired they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on Ruby developers, providing them with salary and equity upfront. The average Ruby developer gets 5 to 15 introductory offers and an average salary offer of $130,000 a year. Users can either accept an offer and go right into interviewing with a company or deny them without any continuing obligations. It's totally free for users. And when you're hired, they give you a $1,000 signing bonus as a thank you for using them. But if you use the Ruby Rogues link, you'll get $2,000 instead. Finally, if you're not looking for a job but know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept the job. Go sign up at Hired.com/RubyRogues.]
[Snap is a hosted CI and continuous delivery service that is simple and intuitive. Snap's deployment pipelines deliver fast feedback and can push healthy builds to multiple environments automatically or on demand. Snap integrates deeply with GitHub and has great support for different languages, data stores, and testing frameworks. Snap deploys your application to cloud services like Heroku, DigitalOcean, AWS, and many more. Try Snap for free. Sign up at SnapCI.com/RubyRogues.]
[This episode is sponsored by DigitalOcean. DigitalOcean is the provider I use to host all of my creations. All the shows are hosted there along with any other projects I come up with. Their user interface is simple and easy to use. Their support is excellent. And their VPSs are backed by solid-state drives and are fast and responsive. Check them out at DigitalOcean.com. If you use the code RubyRogues, you'll get a $10 credit.]
JESSICA:
Welcome to Ruby Rogues number 253 about Phoenix with Chris McCord. I'm Jessica Kerr and we have Avdi Grimm.
AVDI:
Hello from Tennessee.
JESSICA:
And Chris, where are you?
CHRIS:
Hello from Ohio.
JESSICA:
Cool. You want to tell us a bit about yourself?
CHRIS:
Yeah. I'm the creator of Phoenix and I work at DockYard. So, that's mostly it. I've written a couple of books about Elixir and I'm happy to be on the show.
JESSICA:
Cool. So, what is Phoenix?
CHRIS:
So, Phoenix is an Elixir web framework.
JESSICA:
And Elixir is a programming language.
CHRIS:
Yes. So, Elixir is a programming language that runs on the Erlang virtual machine. So, Erlang is also a programming language that's been around for a long time. And like how Scala is to the JVM, that's kind of how Elixir is to the Erlang virtual machine. It compiles down and is bytecode compatible, and it has brought some new features, modernized the Erlang ecosystem with some things that were missing, and added some of its own ideas on top.
AVDI:
And just to make the connection, there's some Ruby influence on Elixir, is there not?
CHRIS:
Yeah, there certainly is. I think it's like a veneer. I come from Ruby myself, so when I first saw it I'm like, “Oh my gosh. This looks just like Ruby.” The semantics are entirely different but I think syntactically there are definitely some inspirations there. José Valim, the creator of Elixir, came from Ruby and was on the Rails core team, so I think there are some obvious inspirations that he took from Ruby.
JESSICA:
Including the philosophy that programming should be enjoyable.
CHRIS:
Yes. Yeah, José is one of the nicest people I know, too. So, he brought the whole… much of the ethos from Ruby into Elixir I feel like.
JESSICA:
Agreed. And yet underneath, it can be deceptive that it looks like Ruby because it doesn't work like Ruby.
CHRIS:
Yeah, it's definitely deceptive. I think it's kind of like bait-and-switch for… a lot of people coming over from Ruby I think are drawn in initially because it looks so similar. And then immediately, especially if it's your first functional language like it was for me, I had to rewire my brain. So, it was like I could hardly even do anything when I first got into the language. So, it takes a bit of, I call it the frustration gap of trying to just accomplish anything if you're first getting into functional programming. But then once it clicks it's been super smooth sailing. But it takes you a while to get there.
JESSICA:
That's some good expectation setting right there. But for that, you get a runtime that has a very different focus because Ruby is not known for its concurrency. And the Erlang VM definitely is.
CHRIS:
Yeah, that's kind of what got me into Elixir in the first place. I guess I can talk about that a little bit. So, I don't know if either you or Avdi are familiar with the gem named Sync that did real-time Rails partials. Anyone remember that?
JESSICA:
What's a partial?
CHRIS:
It's like, if you render a template in Rails you can render… a partial template is a thing that is rendered within another template. And I made this gem a few years ago I named Sync where instead of saying render a partial you could say sync partial. And then any time the data in that template changed it would just update in the browser in real-time.
JESSICA:
Spell Sync.
CHRIS:
S-Y-N-C. Pretty terrible name for a gem. And it also turns out that that's the name of a standard library namespace.
JESSICA:
Oops.
CHRIS:
Which I did not know. So, yeah, [Laughs]
AVDI:
[Laughs]
JESSICA:
Should have spelled it S-I-N-K.
AVDI: [Laughs]
CHRIS:
Well, it's like keeping your browser and the server in sync. So, that was kind of… I was trying to do real-time stuff in Rails and do web sockets in Rails. And I made this gem and it could be made to work, but getting there was incredibly difficult. And I ended up having to do pretty much all of what Action Cable did initially, which was I had to run an EventMachine loop, spawn an EventMachine thread, and have that [post out to Faye] which held the web socket connections. So, it was all these layers just to get a connection to the client that I could push some message to. And that's what started giving me doubts about how I could solve these problems well. And that's where I started looking around into: what are other languages doing to handle a lot of real-time connections?
And that's when I heard of WhatsApp. This was before their billions-of-dollars fame. But they were getting a couple of million connections per server. And I was wondering if I could get a couple of hundred [chuckles] or a couple of thousand. So, that's what led me to looking into Erlang.
And then I remembered Elixir from José Valim's proximity. And the rest is history from there.
JESSICA:
So, WhatsApp is that company that Facebook bought, right? For two billion dollars. [Inaudible]
CHRIS:
[More like] 20 billion. [Laughs]
JESSICA:
Oh my goodness. That had 50 engineers to Facebook's…
CHRIS:
Yeah.
JESSICA:
Thousands and thousands and yet were serving a significant fraction as many requests.
CHRIS:
Yeah. It's… they had yeah, 50 engineers supporting four or five hundred million users. And I think only a subset of those were actually Erlang engineers. So, I think those 50 engineers included everyone working on all the Android and iOS client applications.
JESSICA:
Yeah.
CHRIS:
So, I think they only had maybe a couple of dozen Erlang engineers supporting all those users, which is crazy.
JESSICA:
Which goes to… so WhatsApp was built on Erlang. Facebook is built on PHP and Java?
CHRIS:
PHP mainly. I'm not sure. I think they use quite a few different things. But I think initially, most of their “frontend,” at least the part that serves the requests, is PHP. And I'm sure it calls into other things.
JESSICA:
Right. Which goes to show the potential of Erlang. But if you've ever looked at Erlang's syntax, it's an acquired taste.
CHRIS:
Yeah. It's definitely an acquired taste. And I try not to talk about syntax because it's such a contentious issue. So instead, I think I focus on the features that Erlang… oh I'm sorry, the features that Elixir provides on top of Erlang and why that brought me into the Elixir side of things. So, I'm really big on metaprogramming, especially coming from Ruby. And Elixir has an amazing metaprogramming system. So for me, that was an essential feature coming into a new language, and Elixir provides that and also gives you great string and Unicode handling, where Erlang has historically been pretty lacking in that area. And then also, it brings polymorphism to data types. So, not like [inaudible] polymorphism, but it brings a way to extend other developers' code as far as the data types go. So for me, it brings some essential features into the language that Erlang was lacking. But whether or not the syntax jibes with you is, I think, just a personal preference.
JESSICA:
Can you describe the metaprogramming?
CHRIS:
Yeah. So, with Ruby a lot of the metaprogramming you do… not always, because you can programmatically define code with define_method, but a lot of the metaprogramming would be doing evals of strings. And the only experience I had with metaprogramming in a language was what I had done in Ruby. And when I got into Elixir, Elixir has this abstract syntax tree that is represented by Elixir's own data structures. So, in Elixir you can write code that generates code, but instead of just saying, “Here, generate this code with a string of code,” you actually have a data structure of your code. You can introspect this data structure at compile time and write Elixir that can introspect Elixir to generate Elixir. So, it's a much richer system than I've had experience with coming from Ruby.
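[For reference, this is what that looks like in practice: the quote special form turns an ordinary expression into Elixir's AST, a plain {form, metadata, arguments} tuple you can inspect and transform.]

```elixir
# In an IEx session: quoting an expression returns data, not a string of code.
iex> quote do: 1 + 2
{:+, [context: Elixir, import: Kernel], [1, 2]}

iex> quote do: sum(1, 2, 3)
{:sum, [], [1, 2, 3]}
```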
JESSICA:
You said at compile time. Does that mean macros?
CHRIS:
Yeah. So, the metaprogramming system is all macro-based. But it lets you do some neat things. Probably the easiest example is in Ruby we have all these test frameworks, but if you say assert one is greater than two, you just get true or false back as far as the failing test case goes. In Elixir, assert is a macro. So, we can actually see at compile time, “Ah, you're trying to say that two things are equal. You're trying to say the thing on the left is greater than the thing on the right,” so that way if that test case fails, we have a single assert macro that can actually tell you, “Hey, this failed. You were trying to say the thing on the left is equal to the thing on the right.” So, it gives you some neat introspection ability at compile time to generate code based on the expression.
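[A minimal sketch of that idea, not ExUnit's actual implementation: because assert is a macro it receives the quoted expression, so it can pull out the operator and both operands before generating the runtime check.]

```elixir
defmodule MyAssert do
  # Toy assert: pattern-match the quoted expression {operator, meta, [left, right]}
  # at compile time so the failure message can report both sides.
  defmacro assert({operator, _meta, [left, right]} = expr) do
    quote bind_quoted: [operator: operator, left: left, right: right,
                        code: Macro.to_string(expr)] do
      unless apply(Kernel, operator, [left, right]) do
        raise "assertion failed: #{code} (left: #{inspect(left)}, right: #{inspect(right)})"
      end
      true
    end
  end
end

# Usage (in a module that has `require MyAssert`):
#   MyAssert.assert 1 > 2
#   ** (RuntimeError) assertion failed: 1 > 2 (left: 1, right: 2)
```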
JESSICA:
How does that help in your web framework?
CHRIS:
Yeah, so what we're doing in Phoenix, and this is like… so, I wrote 'Metaprogramming Elixir'. And in the book I think we start out… we say that the first rule of macros is: don't write macros.
JESSICA:
[Laughs]
CHRIS:
Because it can be abused. And then the second rule that I made up was use macros gratuitously. Because they're awesome and I think it's a great learning tool. So, I think you'll hear people that say, “Aw, macros are evil,” but I think that if you find the right use case for them they let you do really powerful things. So for Phoenix for example, our router layer is very similar to what you'll see in Rails or similar web frameworks. But since it's macro-based we generate really efficient code from that.
So, in Rails or Phoenix you'll have like [get] to some path should route to some controller. It looks nearly identical between Rails and Phoenix. But what Phoenix does is it says when you say get to this path should route to this controller, we actually compile that as a function call that does some pattern-matching on the route. So, we can generate some code that's incredibly efficient at runtime to do the actual route dispatch. So, we do work at compile time to save work at runtime. We don't have to boot the app and then try to build a path structure to make efficient routing. We just bake it as function calls and it's really fast at runtime.
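[A toy sketch of the technique, not Phoenix's real router: a get macro that runs at compile time and emits a match function clause that pattern-matches on the verb and the split path, so dispatch at runtime is just a function call. Phoenix additionally turns segments like :id into pattern variables.]

```elixir
defmodule MiniRouter do
  defmacro get(path, handler) do
    # Split the path into segments at compile time...
    segments = String.split(path, "/", trim: true)

    # ...and generate a match/2 clause that pattern-matches on them.
    quote do
      def match("GET", unquote(segments)), do: unquote(handler)
    end
  end
end

defmodule MyRoutes do
  import MiniRouter

  get "/pages", :pages_index
  get "/users", :users_index
end

# MyRoutes.match("GET", ["pages"]) #=> :pages_index
```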
JESSICA:
So, Elixir having this extra compile step compared to Ruby interpreters, Ruby is doing its metaprogramming at runtime over and over again. And Elixir is doing it at compile-time once?
CHRIS:
Yup. Yeah, so you'll probably have… you have the compile step which is I guess going to take some time. But it would probably give you faster boot time. You don't have to be like [do this caching] as soon as you start up. And it lets you do some neat things, too. Like Elixir's Unicode support is done by, they checked in a text file of all known Unicode code points, which is like 25,000. So, there's a 25,000-line text file of all Unicode code point mappings. And they just open that at compile time and then generate a bunch of code supporting all the Unicode code sets. So, that's what… like if you have new emojis come out with new Unicode code points, they just update that text file and recompile. And now Elixir has support for the latest Unicode spec.
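[A hypothetical sketch of that compile-time generation pattern, not Elixir's actual String.Unicode module: read a data file while compiling and emit one function clause per mapping. The file name and format here are made up.]

```elixir
defmodule MyUnicode do
  # Assume "upper_to_lower.txt" contains lines like "0041;0061"
  # (an uppercase codepoint and its lowercase mapping, in hex).
  for line <- File.stream!("upper_to_lower.txt") do
    [upper, lower] =
      line
      |> String.trim()
      |> String.split(";")
      |> Enum.map(&String.to_integer(&1, 16))

    # One downcase/1 clause per mapping, generated at compile time.
    def downcase(unquote(upper)), do: unquote(lower)
  end

  # Codepoints with no mapping fall through unchanged.
  def downcase(codepoint), do: codepoint
end
```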
JESSICA:
Supporting the latest emojis is crucial.
CHRIS:
It is. It's incredibly important. [Laughs]
AVDI:
I'm curious. You referred to Phoenix as a framework and you made a couple of references to Rails. How framework-y is Phoenix? Would you say it's similar to Rails in that it really lays down some rules for how to lay out your application and a Phoenix project is always going to look like a Phoenix project the way Rails is? Or what? Is it very framework-y would you say?
CHRIS:
Yeah, I have to be careful how I answer this because [chuckles]… So, I've had to deal with some wrong assumptions, especially since there are a lot of parallels between Rails and Phoenix. And I think a lot of people end up with incorrect assumptions. So, it's definitely framework-y in the sense that it's batteries included in the default application. Like, you can say 'mix phoenix.new' just like 'rails new', and you're going to get a default application that has most of the things that you would expect, the 80% use case, where if you need to connect to a database that's what you get out of the box.
But at the same time I say it's much more modular and much less centric to… like a Phoenix project, we like to say there's no such thing as a Phoenix application. Everything is going to be an Elixir application. Elixir applications are all built in kind of the same way. So, when you generate a Phoenix project it's just an Elixir application that has some default Phoenix dependencies. And then also we have a special directory that we can reload code to give you the refresh-driven development.
But it's not… like I don't see a similar thing that we see in the Ruby and Rails community where you have a Rails application and that's going to potentially be much different than how you would build just some other Ruby project or Ruby application. You don't expect that to happen with Phoenix because ultimately you're just building an Elixir application with the same conventions. Elixir has certain ways to start and stop applications with other dependencies. And Phoenix abides by that contract.
AVDI:
So, it's a bit less framework-y it sounds like than…
CHRIS:
Yeah. It's like I said…
AVDI:
Than Rails.
CHRIS:
I'm very careful because I find people online constantly saying that Phoenix forces opinions on you. But it's definitely a framework. I don't think…
AVDI:
Mmhmm.
CHRIS:
I really don't like the term micro-framework, or people arguing libraries versus frameworks. I think that if we all share certain needs we should have a framework come in and solve these similar problems for us. But it should be extensible for us to do what we need. So, it's definitely a framework in that we provide certain conventions. But they're not the law of the land if you want to override them or if you want to start doing something totally different. Some people don't like the concept of a controller, for example. And in Phoenix that's no problem because you can just route to things that we call plugs that are lower level. So, it's much more extensible I'd say than Rails. But we definitely have a set of conventions out of the box that, if you follow them, most people are going to be familiar with what you're doing.
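[For a sense of how small that lower level is, a plug is just a module with init/1 and call/2 that transforms a Plug.Conn. A minimal sketch, with made-up names; Phoenix's router can forward requests straight to it.]

```elixir
defmodule MyApp.HelloPlug do
  import Plug.Conn

  # init/1 prepares options once; call/2 runs for every request.
  def init(opts), do: opts

  def call(conn, _opts) do
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Hello without a controller")
  end
end

# In the Phoenix router, requests can go straight to the plug:
#   forward "/hello", MyApp.HelloPlug
```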
AVDI:
And you said that extends all the way back to the database, right?
CHRIS:
Yeah, we ship with a library called Ecto out of the box. It's going to give you your typical CRUD layer. But that's an optional dependency. So it's like, our default Phoenix project, similar to Rails, will include a certain set of dependencies. But the core of the framework, if you just want to serve requests, I'd say is much closer to Sinatra than Rails.
AVDI:
Mmhmm.
CHRIS:
Where if you pass a switch to the generator you're just going to get the ability to route requests and then your persistence layer is up to you.
AVDI:
Okay. I haven't actually messed around with Ecto yet. Is it just giving you the ability to issue CRUD requests or is it also doing a certain amount of model mapping?
CHRIS:
Yeah, so Ecto gives you a couple of things. It has its own flavor of the repository pattern. So, you construct queries in Ecto and then you ask some repository to actually execute that query, whether that's going to insert data or fetch data. And the nice thing is it decouples the query from the database engine. So, for example in Rails and Active Record, if you wanted to support writing to one master database but reading from slaves, I've never tried to do that but it gets difficult. But with Ecto, you just construct your queries and then you ask a repo to run them. So, for people that wanted to replicate their data and write to one place and read from others, it just becomes a matter of selecting a repo at random that you can read from. But then it also maps data in the database to structs or maps in Elixir. So, it's going to give you the mapping between your database types and your Elixir types. But I'd say it's more decoupled than what you have in Active Record.
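[A small sketch of that decoupling in Ecto, with illustrative schema and repo names: the query is built as data, and only the repo call executes it, so the same query can run against different repos.]

```elixir
import Ecto.Query

# Build the query as data; nothing touches the database yet.
query = from u in MyApp.User,
          where: u.age > 21,
          select: u.name

# Execution happens only when a repo runs the query, so reads could
# go to a replica repo while writes go to the primary.
MyApp.Repo.all(query)
MyApp.ReplicaRepo.all(query)
```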
AVDI:
Okay, that makes sense. You know, it sounds from some of the things you said, and some of the things you said you wanted to be able to do, like you really wanted to not just have performance but also enable more of an active or reactive type of application, where when things change they get reflected out to the UI on the web.
JESSICA:
Do you still think that's a good idea for the browser to be intimately tied with this backend state of your application?
CHRIS:
I do. [Chuckles]
I think that's the reality, whether we like it or not. It's unavoidable, unless you're just serving up static documents. But for me, yeah, the first feature in Phoenix was something that we call channels. So, before you could even render HTML templates you could do these realtime connections. So for me, that's what the framework was all about initially. And that's kind of been my main focus. So, I think almost any application we use today is going to have some realtime data component to it. And for me, we haven't had really good solutions to tackle those problems so far, at least in the way that I'd like to do them. If I wanted to do realtime applications, I looked into Go quite a bit. But I wanted the productivity that I had in Ruby, writing code that wasn't super dense or convoluted. And I found Elixir to actually accomplish that and give me the scale that I needed.
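[For a flavor of what channels look like on the server, a minimal sketch with illustrative names: a channel module accepts joins on a topic and pushes incoming events back out to every subscriber.]

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel

  # Clients join a topic; returning {:ok, socket} accepts them.
  def join("rooms:lobby", _params, socket) do
    {:ok, socket}
  end

  # Incoming "new_msg" events get rebroadcast to everyone subscribed
  # to the topic, whatever kind of client they are.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end
end
```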
JESSICA:
You used the word earlier, you talked about generating a Phoenix project. A Phoenix project comes out of a template typically?
CHRIS:
Yeah. In Elixir we have this build tool called Mix which is kind of like Bundler and Rake combined. And it also runs your tests. So, to do almost anything you're going to use Mix. To generate a new project in Elixir you just run 'mix new'. So, if you want to generate a Phoenix project you can run 'mix phoenix.new' and that's just going to do what 'mix new' would do but also include the default Phoenix dependencies and the default structure for handling web requests. So, it's going to give you an extra directory called web and the Phoenix dependencies. And otherwise it just generates the standard Elixir application structure.
JESSICA:
One thing I noticed about the Phoenix project in your workshop back before ElixirConf the other day was that it generates a lot of files. And I'm not complaining here. I mean, I think it represents a move toward, “Hey, we're making all these decisions for you. But we're also making them explicit so you're free to change them. And they're right here. They're not underneath where you have to cast a magic spell in order to change them.”
CHRIS:
Yeah. It's a balance. And this is a contentious issue in the community. But I think one thing that Rails got very right was having a fantastic out-of-the-box experience. Because if you have a newcomer that you're trying to get into the ecosystem… for example, let's say Rails didn't exist and Sinatra was Ruby's great web choice. I think we'd have newcomers come in and they would make a hello world application, respond with a hello world, but the moment they need to say, “Okay, now my client wants me to add a shopping cart or users,” now they have to make all these decisions.
And I think it ends up becoming a barrier initially. It's like experts in the language, it's easy for them to make lots of these small decisions because they've made them in the past. But I think the onboarding experience is something that I've really embraced from Rails as far as having great out of the box defaults but also making them easy to override. So, for example the default Rack middleware in Rails is not something… you control it but it's implicitly applied. And in Phoenix you get your default middleware generated in your application. If you don't want it you just delete the line of code that specifically calls it.
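[A trimmed sketch of that explicitness; the exact contents vary by Phoenix version, but the generated endpoint lists its middleware as plain plug calls in your own code.]

```elixir
defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  # Each piece of "middleware" is an explicit line in your project.
  # Don't want one of them? Delete the line.
  plug Plug.Static, at: "/", from: :my_app
  plug Plug.RequestId
  plug Plug.Logger

  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Poison

  plug Plug.MethodOverride
  plug Plug.Head
  plug MyApp.Router
end
```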
JESSICA:
Yes. I love that. I think you have a great point about the cost of decision making. Decisions are expensive. They sap our willpower and they prevent us from having energy to make the important decisions that actually affect our particular business objectives. So, Phoenix will make those decisions for you but yeah, you can see them. I really like that.
AVDI:
I'm curious about your process of getting started with Phoenix. There are many programmers who see a problem or see a space for a new solution like here's this new programming language but it doesn't have a decent web framework. What makes you the one who says, “You know what? I'm going to write that framework.” What was that like?
CHRIS:
Initially that wasn't my plan. I had ventured down this path into getting involved with Elixir, kind of being infatuated with it. And I was working at a Ruby consultancy and I basically built Rails apps professionally for five or six years. And it became clear to me early on, this was well before Elixir 1.0, that Elixir was the language for me. I remember I did PHP for eight years before I got into Ruby. And so Ruby, I dearly, dearly love Ruby and I still, still love Ruby. But when I'm… I remember I told my wife that Elixir was my new favorite language. She was shocked because she knew how much I loved Ruby. She's not a programmer or anything, but she was like, she couldn't believe that I found this language that I liked better.
But for me it's like once it became clear that I wanted to write everything in Elixir and it could accomplish these kinds of applications that I wanted to write, then it became like okay, if I want to write these real-time applications in this language that I know can do these things extremely well, I need a web framework. So, that's when I started writing one and all my coworkers thought I was crazy. I told them that it was going to replace all of the things I built with Rails. They did not think it was possible, just because Rails has had a 10-year head start. And these kinds of things are not easy to build. So, it wasn't my plan initially but once it became clear that I wanted to do this I had to write a web framework to make it happen.
AVDI:
Okay, so…
JESSICA:
So, can we do with Phoenix what we could do with Rails now?
CHRIS:
Yeah. The only caveat is that the off-the-shelf available packages are not going to be nearly as robust as what you have on RubyGems. But obviously that's improving. But yeah, it's production ready and people are building impressive applications with it.
AVDI:
I'm still curious about the process of creating and maintaining this. Once you did kick it off, once you decided that you were going to be the one to write this web framework for Elixir, there are still lots of cars ditched beside that particular road metaphorically speaking. What do you think has enabled you to keep it going, keep up the momentum, get people involved?
CHRIS:
That's a good question, because I think you're right. Initially, when I started, it was this fun thing that was kind of an experiment. And then, I'd say when I was six months in, it hadn't hit critical mass but I had spent all this time and effort, and I was kind of worried: is this all for naught? But things just fell together. So, I started talking about it publicly at conferences. And that generated a lot of excitement. And then José Valim, the creator of Elixir, hopped on as a core contributor. And that was probably the inflection point for me, at least, to say, “Okay, this is a thing now.” At least if Elixir continues to be a thing, Phoenix should do well. So, I think José hopping on board and helping out was the inflection point. And then I continued to talk about it at conferences. José started talking about it at conferences. And I think that has helped it really grow in popularity.
AVDI:
Now, I have this problem whenever I set out to create something big that nobody's making me do where I inevitably wind up just focusing and obsessing over some little corner of it because I can't seem to get that corner just right and I can't get it out of my mind. But obviously you've managed to avoid that or Phoenix would not exist as a successful project. I'm really curious how you decide what to work on.
CHRIS:
Yeah, it's pretty easy in the sense that the first case was: does it accomplish the needs that I have building my own applications? So, that's where the channel layer came in, where I wanted to build real-time applications. And then from there, if someone needs a feature that's not there, they're using Phoenix in production, so it's very much what I call on-demand-driven development, where if someone has an actual business use case and they're using Phoenix and they need this thing, it's easy for me to prioritize that. And that's really helped, I think, cut down on some features that we could add but maybe would be better served as third-party. And I'd say the other side of that is having José's wisdom of running open source projects has really been beneficial to me. So, he's helped me navigate how to run a large open source project successfully.
So, I think that I owe a lot to him for that.
AVDI:
Mm. I'm so interested in this stuff. Sorry for focusing on it. But…
CHRIS:
No.
AVDI:
I'm curious if there's an example of a time when José jumped in and said, “Hey, let's redirect over here,” or “Wouldn't it be better if we thought about this?” or something like that. I'm just curious about the kind of guidance that you picked up from him.
CHRIS:
I'm trying to think of… I mean there are so many small decisions. But probably something that comes up time and time again, whether or not we're deciding to add a feature, is José will say it's easy to add later, but if we add it now it's going to be impossible to remove.
AVDI:
Mmhmm.
CHRIS:
And he says that time and time again and that's like… I've internalized that. So, especially prior to Phoenix 1.0. It's like if someone needs something but we're not sure if it's a very common use case, we tell them that maybe we'll add a small portion of it. But as soon as we add it, we own it forever. And it's going to be very difficult to remove if it ended up being a bad decision. But I'm trying to think of an exact feature. Usually we make those decisions around subtle API changes.
One thing, just one example, is our pub/sub system. When you subscribe to a topic in the pub/sub system, we removed the ability to have you pass your process. You just have the caller get subscribed. And that let us optimize our pub/sub system and also prevent race conditions. But that decision was driven by: we can have this more restrictive API now and we know it's going to work really well for people, and then if someone for some reason needs to pass their process to programmatically make another process subscribe, we can revisit it later.
So, that's probably the most recent one. But it's just a lot of small decisions like that. We try to keep the frame of reference on we're going to have to own this forever. And if it ends up being seldom used now it's just there until the next major release which could be a couple of years away.
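[Roughly what that API change looks like; this is a sketch, and the exact signatures have shifted between Phoenix versions.]

```elixir
# Older API (removed): any process could be subscribed by passing its pid.
#   Phoenix.PubSub.subscribe(MyApp.PubSub, some_pid, "rooms:lobby")

# Current API: the calling process is always the one subscribed, which
# lets the pub/sub layer optimize and avoid race conditions.
Phoenix.PubSub.subscribe(MyApp.PubSub, "rooms:lobby")

# Broadcasts to the topic now land in this process's mailbox.
Phoenix.PubSub.broadcast(MyApp.PubSub, "rooms:lobby", {:new_msg, "hello"})
```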
AVDI:
Mmhmm. That makes sense.
JESSICA:
So, it sounds like José learned from Rails not only software architecture and what makes it a pleasure to code in but also how to run a community.
CHRIS:
Yes. He's got a wealth of wisdom there and I think… one thing that he has told me, which is interesting as well, is that you make all of these tiny decisions when you're building any kind of large project, and you have to make these decisions. And people don't get to see the decision process, which is unfortunate. So, I'll go back and forth for a couple of weeks and then I'll have multiple conversations with José and other core team members on what we should do, even on minute details. But then I make that decision, and when you have another release, all that people see are those decisions.
So then, it's been hard for me initially to have negative feedback or flak from different decisions we made because no one sees those trade-offs. They don't see that internal discussion of pros and cons and why we ended up at that final place. They just see the decision and it's easier for them to be like, “Ah, this is stupid.”
JESSICA:
Do you ever write that up and make that… internally we might make an architectural decision document thinger.
CHRIS:
Yes, we have… most of our development is done in the open. We keep most of the discussion on the issues list and the core mailing list. But it's a balance, because it's easy to spend your whole day bike-shedding on a problem. So, some of our really core long-term decisions I'd say are planned internally first. And then, as we get rolling, we'll open them up for discussion in the community. But you'll never get anywhere I think if you try to design by committee totally in the open. I'd say nothing is done in secret. It's just that initial decisions are definitely pre-planned to try to prevent spending my whole day bike-shedding. But sometimes that still happens. [Chuckles]
JESSICA:
Do you ever change those decisions based on what the community says?
CHRIS:
Yeah. I'd say I haven't had any major backtrack so far. But I consider all my positions temporary. So, if something clearly needs a fix, I'm definitely open to making it better. Because I've been wrong in the past. Certainly before Phoenix 1.0 we had a couple of design decisions that weren't ideal. And fortunately, Eric Meadows-Jönsson for example, he's on the Phoenix core team now, but at the time he wasn't, and he raised a concern about the design of our channel layer and how it could be better. I'd spent so much time thinking about it that initially I was like, “No, our current way is the way it should be.” But then after thinking about it and hearing him out, he ended up being totally correct. So fortunately, we made some really good changes before Phoenix 1.0 that I can't imagine not having made then. So yeah, I definitely take feedback to heart.
AVDI:
Alright. So, you've got this framework where you can build applications in a language that you really like. But it's… one of its strengths clearly is supporting live data, live display on the client side. And then on the client side you get to deal with JavaScript.
CHRIS:
Unfortunately, yes. [Chuckles]
AVDI:
[Laughter] How are you coping with that? Are there particularly good ways that Phoenix interfaces with JavaScript?
CHRIS:
Yeah. So, JavaScript's one of those unfortunate realities. But yeah, we include a Phoenix.js channels client for our real-time layer. If you're familiar with Socket.IO or even Action Cable now, we have a client library that gives you a trivial connection to that server infrastructure. So, a lot of the minute details that you'd have to deal with, like handling reconnects or doing exponential back-off when you have failures, all that's built in. But you do still have to write JavaScript if you actually want to write a browser application.
But for me, we have a good JavaScript story in that regard for our channel layer. We also include… we didn't write our own asset pipeline but we include a build tool called Brunch from the Node ecosystem that's going to handle… you just put your JavaScript and CSS in a directory in your Phoenix project and it just gets compiled. So, you don't have to think about it. But I think that Phoenix…
AVDI:
So, is… sorry, quick question about that.
CHRIS:
No, go ahead.
AVDI:
Is the Phoenix code running Node on the server to compile that? Or at compile time to compile that? How does that work?
CHRIS:
Yeah, just at compile time. We start up a Node task to build your dependencies.
AVDI:
Okay.
CHRIS:
So, there's no runtime dependency on Node. This is another bike-shed issue where someone says a new Phoenix project has a hard Node dependency, which isn't true. You can forgo that. But then you have to figure out your own asset story. But I think the other unfortunate reality is, if you're building any kind of assets for the frontend, you have to have a Node runtime. If you want support for ES6, CoffeeScript, Sass, Less, probably what 95% of us use, those are Node libraries. So, you need a Node runtime to build these things anyway. So, we bit the bullet and said instead of reinventing the wheel or spending a year of my life making yet another build tool, we just include one by default that is simple to configure.
AVDI:
Yeah. That's actually one of the other questions I was thinking of asking you is are there any areas in other web frameworks that you've just looked at and said, “Nope, we're not going to go into that swamp at all”? But it kind of sounds like you just answered that question.
CHRIS:
Yeah. Asset pipeline was the biggest one. The funny thing is the vast majority of new issues on the Phoenix GitHub repo are Node-related [chuckles] which is hilarious.
AVDI: [Laughs]
CHRIS:
Yeah, so it's been frustrating in that regard. It seems like we're constantly supporting Node issues. And all we do is package a build tool by default. But people have issues with repeatable builds or running on Windows, all these things that you would have thought would not happen given Node's popularity and maturity.
JESSICA:
Wow.
CHRIS:
So, that's been kind of frustrating to where it's like, “Man, by the time I have supported all these issues, could I have just written my own build tool?” But [laughs]
AVDI: [Laughs]
CHRIS:
I don't think that seriously. But every morning I wake up and there's a Node or npm issue. It does sit in the back of my head.
JESSICA:
And you think that a little more seriously.
CHRIS:
Yeah. But I don't… yeah, I think there's a lot of great tooling in the JavaScript community. I just think that it happens to be fragile for one reason or another. So, I think that… I'd love if I didn't have to support those issues but at the same time these things aren't easy to create. I don't have the bandwidth to write yet another build tool.
JESSICA:
Okay. So, maybe that's something that if someone wants to contribute to Phoenix…
CHRIS:
Maybe. But then that's the thing though. It's like yet another build tool for JavaScript frontends, I think we'd just be adding more to the fatigue than…
AVDI:
To really help Phoenix, someone can go and fix Node.
JESSICA:
[Laughs] That's interesting.
CHRIS:
Yes.
JESSICA:
Just last night at the Elm user group Richard Feldman spoke. Of course we asked about, “What's the status of Elm on the server?” And he said one of his coworkers at NoRedInk in fact did run Elm on the server. He got it working within Node and that wasn't even technically difficult. But Richard said that the objective of Elm is to be a great frontend experience, not to be a better JavaScript. And when Elm does run on the backend, they want it to be a great backend experience, not a better JavaScript. And in order to achieve that backend experience, probably Node is not the ecosystem that they want to tie themselves to and base everything on. So, what you said here really strengthens that decision.
CHRIS:
Yeah. I think Evan, the creator of Elm, I think he said that transpiling to JavaScript is like an implementation detail for him. If he wants to run it in a browser, he needs to transpile to JavaScript. But he kind of sees Elm as sitting up… that's just an implementation detail. So, I don't know about their plans on the server. But I imagine that for them maybe they'd want to not depend on the JavaScript ecosystem at all if they wanted to run on the server. Because it's an implementation detail at that point.
JESSICA:
Yeah. What did you say earlier? JavaScript is one of those unfortunate realities.
CHRIS:
Yeah.
JESSICA:
But it's the reality in the frontend. It does not have to be the reality in the backend. We have choices there.
CHRIS:
Right. And along the same lines too as far as clients go, Phoenix from day one… JavaScript obviously is a first-class platform. So, we're going to write… we have a JavaScript client. But we also… I like to say we're taking Phoenix beyond the browser. So, our channel layer really is about connecting any kind of device and having them talk to each other. So, I could have a native iOS app running a Phoenix channels client talking to my browser app which is running JavaScript. We have… the community's put together channel clients for all major platforms. So, there's iOS, Android, there's one in C#. So, I think we're trying to go beyond the browser. But obviously, it's a web framework and the browser is the biggest citizen on the web.
JESSICA:
That's interesting. So, you could connect other services on the backend, or Internet of Things devices?
CHRIS:
Yep, yeah. You could have a control panel on your desktop that is controlling and sending messages over Phoenix channels to your toaster. Or one example is at ElixirConf last year, you may have seen this Jessica, where Justin Schneck wrote an iOS app that used the accelerometer and sent those coordinates over Phoenix channels to a Raspberry Pi that controlled a labyrinth marble game in real life, controlled gyros based on the iPhone accelerometer. But he was running Phoenix on the Raspberry Pi on this labyrinth game and sending those coordinates over Phoenix channels at like 50 messages a second.
So people, I think are doing some really interesting things outside of the web. But yeah, for me it's about connecting and having any device that cares about these messages to receive them. One of those could be a browser potentially but the other one could be [inaudible] device.
JESSICA:
Nice.
AVDI:
To talk about the browser a little bit more you mentioned Evan and Elm. You two spoke together about using Phoenix and Elm together, didn't you?
CHRIS:
Yup. Yeah, our keynote was making the web functional with Phoenix and Elm, which was kind of a play on words there against JavaScript a little bit.
JESSICA:
[Chuckles]
CHRIS:
But yeah, we… I'm interested… I'm really excited about the Phoenix and Elm story, making that better. But you'll have to wait for the next Elm release to really hear more about that. But yeah, I'm infatuated with Elm but I haven't had the free time to actually really, really dive in. But I'd like to get a native Elm channels client made as soon as I find the time.
AVDI:
Mm.
JESSICA:
Mm.
AVDI:
What would be the benefit of having a native Elm client versus the existing JavaScript client?
CHRIS:
Yeah, so right now if you wanted to talk to channels with an Elm frontend you have to… Elm calls them ports. You can use JavaScript libraries but you have to call into that non-pure land. So, you can do it today but you have to bridge that yourself between the JavaScript world and the Elm world. So, a native Elm channels client would just let you use the Elm primitives to send and receive events and not have to suddenly bridge that yourself.
AVDI:
I see.
CHRIS:
Give you a much, much nicer experience.
JESSICA:
And that would make it more maintainable and more likely to be correct, because you've got the Elm checks to reduce runtime errors.
CHRIS:
Yes.
JESSICA:
Speaking of Elm, last night Richard talked about the biggest benefit of Elm for them has been maintainability. Compared to writing the code in JavaScript, they found that the Elm portions, even the really complicated stuff on their website, if it's in Elm is just vastly more maintainable. Does Elixir offer any similar advantages over Rails?
CHRIS:
Yes. That's actually a great question. So, one thing that I've really tried to focus on… we like to say that we are a productive framework, but we split that into short-term productivity and long-term productivity. Because we think that they are two separate but important things. So, I've built a lot of Rails applications in the past, like production level. And I think that maintainability has been a problem. And this is probably true for any long-lived codebase, right? But I've never inherited a Rails application that wasn't a huge mess, that had been around for several years. So, I think one of the problems is there are a lot of different ways to solve these problems. Coming from Ruby we have a ton of different design patterns.
And one thing that Elixir gives us is a way to build applications; they're called OTP apps. OTP stands for Open Telecom Platform, which is a terrible acronym for the modern age. But Erlang's been around for almost three decades. They were building these systems that needed to be up and running for years. So if you're running a telecom switch in some remote area in the forest and that's running Erlang, you don't want to have to go visit that thing yourself and update the code. So, they built this framework called OTP that was born out of their experiences of running robust systems that are supposed to run for a long time and be maintainable. And that's what Elixir applications are.
So, Elixir applications follow a specific design pattern. And it's the way you build an Elixir application. There is no other way. So, there's a framework in the standard library that lends itself to one way to do things that are tried and true and also keep your application maintainable because they're pre-packaged as these self-contained applications that depend on one another. So, I think that in the long-term we think that long-term maintainability is going to be one of our greatest strengths on top of… performance being obviously important.
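[To make “one way to build it” concrete, a generic sketch of an OTP application with illustrative names; the child-spec syntax shown is the older Supervisor.Spec style and varies by Elixir version. Every app, Phoenix or not, starts the same way: an application callback that boots a supervision tree.]

```elixir
defmodule MyApp do
  use Application

  # The one entry point every OTP application has: start a supervisor
  # that owns all of this app's processes.
  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      worker(MyApp.Cache, []),
      supervisor(MyApp.Endpoint, [])
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```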
AVDI:
There's one thing I want to get your quick take on. Recently I noticed that somebody had started an Elixir-to-JavaScript compiler project. Is that something that you're interested in, using Elixir on the client side?
CHRIS:
Yeah, I saw ElixirScript. I'm intrigued by that idea just because Elixir is my favorite language. So, it's one of those things though that it would have to be incredibly stable and well-done. It's like something on my periphery that I'm like, “Oh, it'd be nice to dabble in that.” But I think that trying to get the concurrency model of Elixir into the event loop based JavaScript, I'm not sure how they're going to accomplish that. But if they can, I think it could be really neat. But it's not something that I'm counting on using. But if they reach 1.0 and it ends up being stable I'd love to try it out.
AVDI:
Cool. So, if somebody wanted to throw up a website using Phoenix, are there some hosts that make that particularly painless right now?
CHRIS:
Yeah. I think it's pretty much the standard story for where you'd deploy Rails. So, it just works on Heroku with the caveat that you can't run Elixir in distributed mode. So, for our pub/sub layer, we have a Redis adapter for Heroku-based deployments that gets around that. But you pretty much can deploy it just about anywhere. And it's going to run much leaner than what you'd have experienced with Rails. For example, I think the default Phoenix app when it boots uses about 15 megabytes of memory. So, it's pretty lean. And that's going to use all your cores. So, if you're running on a 10-core server it's not going to have to run a 200 megabyte instance times 10.
AVDI:
Nice.
CHRIS:
It's going to consume 15 megs and use the resources as it needs.
AVDI:
Mmhmm. Very cool.
JESSICA:
Is Elixir object-oriented or functional or somewhere in between?
CHRIS:
It's a functional language, not object-oriented at all. So, it's immutable. It's functional. It follows the actor model of concurrency which is funny because… so, when Erlang was designed the actor model didn't exist yet. But what they ended up arriving at to solve their problem was basically what we know as the actor model today. But yeah, it doesn't… I think originally José wanted some of the niceties that object-oriented programming had given him such as polymorphism. And that's what he brought to Elixir. But instead of having class-based polymorphism he has data type based polymorphism. So, if you had a JSON library you could write a… we call them protocols to serialize JSON based on the data type instead of the class of the object.
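[A small illustration of that data-type-based polymorphism, with made-up names: a protocol defines the function, and each data type supplies its own implementation, so other developers can extend it for their own types.]

```elixir
defprotocol ToJSON do
  @doc "Serializes a value to a JSON string."
  def encode(value)
end

defimpl ToJSON, for: Integer do
  def encode(int), do: Integer.to_string(int)
end

defimpl ToJSON, for: List do
  def encode(list), do: "[" <> Enum.map_join(list, ",", &ToJSON.encode/1) <> "]"
end

# ToJSON.encode([1, 2, 3]) #=> "[1,2,3]"
# Dispatch happens on the data type of the value, not on a class.
```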
JESSICA:
So, the actor model, I find that very OO personally because it's all about instances of actors sending messages back and forth to each other, which kind of hearkens back to the original part of OO. Does that message passing mechanic that is at the core of Erlang make it up through Elixir? Do you see that when you're coding an Elixir app?
CHRIS:
Yeah. The semantics are identical in that regard. So, there's no difference whatsoever between Erlang and Elixir as far as message passing goes.
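[The primitives really are the same as Erlang's, just in Elixir syntax; a tiny sketch.]

```elixir
# Spawn a lightweight process that waits on its mailbox.
pid = spawn(fn ->
  receive do
    {:ping, from} -> send(from, :pong)
  end
end)

# Send it a message and wait for the asynchronous reply.
send(pid, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end
```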
AVDI:
I think it's worth noting that historically speaking, there was cross-pollination between Alan Kay and the original Smalltalk implementers and people that were working on the actor model. And so, there is inspiration both ways. Actually, I believe I've seen some notes about influence on the actor model from some of the original Smalltalk work. And if you look at the way some of the early versions of Smalltalk worked… Smalltalk, for those who don't know, is sort of the primordial OO programming language. Some of the earlier versions were actually closer to an actor-oriented system, with cells that were sending messages asynchronously rather than the default synchronously that we see in most modern OO languages. So, I kind of think of these actor model systems as fully OO myself.
CHRIS:
That's true. I think Avdi, you had some… you were playing with Elixir, this was I think quite a while ago now. But is that the feel that you got as far as… obviously it's a functional language but did you feel like its concurrency model kind of bridged any gap for you conceptually or internally? Or how was your experience coming from [Ruby] and playing around?
AVDI:
Well, it's a funny thing. If you come to Elixir or a language, an actor model language like Elixir, I think from what I call a modern OO background it will not feel like OO at all partly because modern OO languages kind of aren't. In many ways they've sort of departed from what that was supposed to mean.
CHRIS:
[Gotcha].
AVDI:
But also because there is a sort of fractal layer of OO that Elixir is missing. So, the big difference, the big thing you have to wrap your brain around, is that whereas something like Smalltalk or in theory Ruby is OO all the way down, so you might be sending messages between actors but then you're also sending messages between synchronous objects inside the actor, in a language like Elixir it's a total brain shift past the cell wall. You've got the cell wall, which is the individual actors. And at that level things are perfectly object-oriented. They couldn't be more object-oriented. But then once you pass that cell wall the entire paradigm shifts. And you're basically writing state machines in a functional paradigm.
CHRIS:
Gotcha, yeah.
JESSICA:
Yeah. So, we have the tasty functional core with an OO shell, which is actually a great way to modularize and concurrent-ize.
CHRIS:
Yeah, and it's like, it's the only way… I mean not the only way, but I think the interesting thing for me is how the concurrency model came about. The Erlang folks didn't say, “Ah the actor model is going to solve this.” They started with, “We have a problem that we need to solve.” They wanted to run things on telecom switches, kind of, in a distributed way. And the only way to go about doing that was to come up with this concurrency model that allowed asynchronous message passing and sending messages back and forth and being able to monitor when a process crashes. And they [inaudible] out the concurrency model around solving a particular problem.
And this is before the multi-core age. So, it just turns out that they had solved multi-core without even having multi-core, before it was a thing. Because they got the distribution model right and it turned out that running a program on a multi-core system is almost exactly like running one program on a distributed system. It's just your distributed system is now running on each processor. So, I think that it's interesting how they started with a problem and they solved that problem to the best of their ability. And that has now become a perfect solution to the multi-core age that we have today.
AVDI:
Well, it wasn't without some growing pains. My understanding is the trouble with something that's built for multiple nodes is that you wind up using quite a bit of memory because you assume that there's no shared memory. And so, in Erlang they did eventually implement binaries. I don't know all the technical terms for this but I know that binaries are implemented as a reference counted shared pool of basically strings. And actually, one of the biggest problems that I've had trying to work with Elixir and Erlang was from libraries that leaked memory because of how they interacted with that pool of strings. So [chuckles]…
CHRIS:
Yeah, that…
AVDI:
I'd say there have been some growing pains in making that shift from multi-node to multi-core.
CHRIS:
Gotcha, yeah. I think that issue you mentioned with binaries is probably the most common cause of memory leaks.
AVDI:
I am really glad to hear somebody say that, that it wasn't just me. Because I literally had to stop working in Elixir and shift back to Ruby because there was a bug that was in the most popular HTTP client library that wasn't fixed and to my knowledge still has not been fixed with regards to the memory usage.
CHRIS:
Gotcha. That’s interesting.
JESSICA:
Oh, no.
CHRIS:
I will counter that with I had…
AVDI:
That was a year or two ago.
CHRIS:
Yeah. I will counter that with, just for the listeners, I have yet to actually hit a memory leak like that myself. So, just to temper expectations…
AVDI:
Good.
CHRIS:
I don't think it's… I wouldn't say it's uncommon, but I would say that it's not something where you're going to suddenly hit this wall and just can't use Elixir. I think that we have built tooling now that lets you really easily diagnose this kind of stuff. So, you can get a live running look at your system with a tool called Observer. It's like if you opened up a REPL like IRB and you typed a command and got a GUI application that tells you everything about the running program. So with Elixir, we can run the IEx REPL and we can launch a GUI that tells us where the memory is going, what all of our processes are doing, how much state they're storing. And that's how we can easily diagnose these rare cases of this binary leak.
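[For reference, Observer ships with Erlang/OTP and is one call away from the REPL.]

```elixir
# In an IEx session: opens a GUI showing memory usage, the supervision
# tree, per-process mailbox sizes and state, ETS tables, and more.
iex> :observer.start()
:ok
```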
AVDI:
That's good to hear, yeah. That's really good to hear. Yeah, I got the impression that what I'd run into wasn't super common. But it would definitely… it's always difficult when you run into something like that coming into the language because it's like at that point you know you don't know enough to actually debug it. You know you're at somebody else's mercy as to whether it gets fixed or not. But it's good to hear that there are some more tools now. That's a transition that's going to be hard for any language that is written to assume total memory partitioning when you start doing reference counted memory. That's a really fraught transition to make.
JESSICA:
Yeah. Chris, you mentioned some of the tooling around debugging in Elixir. And some of this is custom to Elixir but some of it's also the Erlang VM's existing tooling, right?
CHRIS:
Yeah. Most of this is the existing tooling for the Erlang VM. So, we get to use all the great innovation that Erlang has come up with for the last 30 years. And it's just there.
JESSICA:
Yeah, so that's kind of beautiful. Also, I love that the Erlang VM is called the BEAM.
CHRIS:
[Chuckles]
JESSICA:
B-E-A-M. And yeah, which is useful in Activity Monitor on my Mac when it starts sucking up the CPU and I know to kill it.
CHRIS:
Finding it, yeah. But it's been remarkable, because one thing I talked about at my Erlang Factory keynote was our pub/sub and channel layer, where we were able to support two million connections on a single server, which is incredibly exciting. But initially when we benchmarked it we only got 30,000 active or concurrent connections. And I started having crushing self-doubt on how I was going to be able to get this special-sauce Erlang scale that I hear about. Did I design the system poorly? But I launched that Observer tool that we were talking about and I was able to identify bottlenecks almost trivially. It was just a matter of clicking on the processes tab and checking the processes that had a lot of messages. They were trying to process but they were falling behind. And that was how I optimized the code from 30,000 connections to supporting two million connections. Finding a couple of bottlenecks… the diff actually ended up removing code to support that.
JESSICA:
Wow.
CHRIS:
So, the tooling that's there I think, the hype is real as far as getting a live look into the system and being able to reason about what's happening as the system is running instead of just trying to guess after the fact where your bottlenecks are.
AVDI:
Are you now able to debug into live Elixir processes?
CHRIS:
As far as actually stepping through code, it's not very nice. There is an Erlang debugger but I have actually never used it. We do have something similar to Pry from Ruby. And it's just built into the standard library. So, I can just say 'IEx.pry' and it will drop me into the REPL at that place in the code. And I can check on local variable values just like you would with Pry.
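[A minimal example of that workflow, with illustrative names: run the app under iex -S mix and execution pauses at the pry call with local variables in scope.]

```elixir
defmodule MyApp.Checkout do
  def total(items) do
    sum = Enum.reduce(items, 0, fn item, acc -> acc + item.price end)

    # When running under `iex -S mix`, execution pauses here and drops
    # you into the REPL with `items` and `sum` in scope, Pry-style.
    require IEx
    IEx.pry()

    sum
  end
end
```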
AVDI:
Mmhmm.
CHRIS:
But you can't actually say, “Okay, now step to the next procedure.” It's only between…
AVDI:
So, I can't debug a live process to see what's going on inside it on the server?
CHRIS:
Well, you can do that with Observer. So, Observer would give you the process state as it currently exists.
AVDI:
Okay.
CHRIS:
And you could also, in the REPL, just say 'Process.info' and it will give you all of the information that that process has. So yeah, you have tooling…
AVDI:
Mmhmm.
CHRIS:
Definitely to debug that. But I just want to be careful though. As far as a debugger is concerned, being able to step through executions, that is not quite nice today.
AVDI:
Gotcha.
JESSICA:
Because Erlang is known for being able to access the live code, right? To see everything that's going on and to do hot code replaces?
CHRIS:
Yeah. You can get a live look into the state of the system. But the reason… this is probably one reason why an actual step-based debugger is not as common: as soon as you try to get a live look into a system, you can't really halt the world, so to speak. Because you have all these individual processes running. And those things are going to be isolated but also time-dependent. So, if I don't hear a response from you, I might take certain actions. So, trying to run a debugger and step and pause the whole program isn't an easy thing to do in a system that has all these actors, potentially on other machines, running.
JESSICA:
Oh, right, because this is Erlang. And in Erlang, if you're a process and your actor is too slow, something's going to spin up another one and keep going without you.
CHRIS:
Exactly. So, that's why they have really great monitoring that gives you a window into the system, but you're not able to actually stop it and freeze it to look at it. You can just look at it how it is now. But you can't actually stop the world. But yeah, and then you mentioned hot code upgrading. So yeah, Erlang has this ability to update the code as the system is running. So, it's not like… I think some of us in Ruby, we deploy a Rails app and we'll just serve a request under Nginx and have it hand off. But this is at a whole other level where you can say, a process is running, doing some work, and we can tell it, "Hey, actually here's a new version of the code you're running. But that state that you're holding, please update it to the newest version of the code and continue running." So, we can literally go from one version of the system to the next with new code but not have any downtime.
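[A hedged sketch of the OTP hook behind that, using an invented counter module: during a release upgrade, the code_change/3 callback lets a running process migrate its in-memory state to the new code's shape without restarting:]

```elixir
defmodule MyApp.Counter do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

  def init(initial), do: {:ok, %{count: initial}}

  def handle_call(:count, _from, %{count: n} = state), do: {:reply, n, state}

  # Called by OTP when new code is loaded into the running process during a
  # release upgrade. Here the old version stored a bare integer, so we wrap
  # it into the new map-shaped state and carry on without any downtime.
  def code_change(_old_vsn, count, _extra) when is_integer(count) do
    {:ok, %{count: count}}
  end

  def code_change(_old_vsn, state, _extra), do: {:ok, state}
end
```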
JESSICA:
Which is pretty sweet. So, Elixir certainly has the potential to be not only maintainable but runnable and troubleshoot-able in production to a greater degree than Rails.
CHRIS:
Ah, yeah.
JESSICA:
Yeah. But…
CHRIS:
Definitely. We're seeing that already.
JESSICA:
Yeah. But while you have all that amazing tooling at the virtual machine level, at the BEAM… All things serve the beam. I just had to say that. Have you all read 'The Dark Tower'? That was a Dark Tower reference.
CHRIS:
No. [Laughs]
AVDI:
Yeah, I felt like there must be a reference behind this thing.
JESSICA:
Yeah, yeah. Stephen King's Dark Tower series. It's pretty good. All things serve the beam. But, at another level the Elixir ecosystem is not as big as the JavaScript ecosystem. For libraries that are missing, like in Elm you just call out to a JavaScript library. In Elixir do you call out to an Erlang library?
CHRIS:
Yeah. That's what has let us hit the ground running so quickly, I think, where we didn't have to write a web server, for example. We're using an Erlang web server called Cowboy that's kind of the default choice internally. If we had started from zero and had to write a web server, I'd probably be writing a web server right now instead of talking about a framework. So, we're able to use any existing Erlang library. And…
JESSICA:
Oh, so that's why it was so fast.
CHRIS:
Yep. We can call out to Erlang and then Erlang can call into Elixir code. So, it goes both ways. And that's one nice thing. Elixir's actually been great about embracing that. Elixir has a really great standard library, but instead of re-implementing everything we just say, if Erlang has a good solution then we'll just use it. So, there's an active convention in the community to use Erlang tooling, call into Erlang directly, and not just wrap Erlang libraries for no reason. If there's a good solution there, there's no reason for us to either reinvent it or try to make it look Elixir-y. We can just call it directly.
JESSICA:
Really?
CHRIS:
Yeah. And that's an embraced idea, to use the tools that are there and not try to wrap them needlessly.
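[To make that concrete: Erlang modules are just atoms from Elixir's point of view, so standard Erlang libraries can be called with no wrapper in between. A couple of stock examples from an IEx session:]

```elixir
# 16 random bytes from Erlang's :crypto application.
:crypto.strong_rand_bytes(16)

# An in-memory term-storage table from Erlang's :ets module.
table = :ets.new(:cache, [:set, :public])
:ets.insert(table, {:greeting, "hello"})
:ets.lookup(table, :greeting)
# => [greeting: "hello"]
```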
JESSICA:
That's interesting. That's different from Elm, because in Elm there's great value in rewriting the library in Elm so you can make use of the type system.
CHRIS:
Yeah. I think it's… well, we share the same semantics with our language, which helps. Elm is…
JESSICA:
Mm.
CHRIS:
I think that Elm needs to be its own pure world, which makes sense for Elm. But since we have a great concurrency model and shared semantics for us there's nothing that prevents us from bridging back and forth.
JESSICA:
Which means you can make use of decades of performance improvements and testing on Cowboy as a web server.
CHRIS:
Yup. Yeah, so I think Cowboy is amazing. It's one of the things that let us get those two million active connections. There's a ton of work that's gone into optimizing connection pools and all these other things that I didn't have to worry about.
JESSICA:
Mm. So, Elixir is definitely production quality. People are running it in production. Are Phoenix and the entire ecosystem ready to do anything you might consider doing in Rails?
CHRIS:
Yeah. I'd say, and this can be controversial for your audience, that there's nothing you could do in Rails that you couldn't do with Phoenix today. People are using it in production to great success. Bleacher Report is one of the best examples I've given, where they had a Ruby API and they rewrote it with Phoenix. They were able to go from dozens of servers down to two while serving tens of millions of users per month, and they only run two for redundancy. So, they could get away with running their entire platform on one Phoenix server.
And the other neat thing is they're talking to the same database. So, the whole idea of your database being a bottleneck I think isn't actually true. They kept the same Postgres database they had been doing heavy caching around on the Ruby side, removed all the caching, and just talked directly to the database from the Phoenix side, and they were still able to reduce dozens of servers down to one or two. So, we're seeing really great success stories. And I think it's definitely ready to tackle your typical CRUD-based application that you'd build on Rails. And then also if you want to do anything real-time or with high concurrency, it's there.
AVDI:
Right. So, I want to dig into that a little bit because every time somebody says that about a language or a framework I go and try it out. And the first time I try to do something hard I discover I have to write it for myself. And sometimes that's a good thing and sometimes it's not so good. So, let me ask you about the first thing that popped into my head of like, "Oh god, I don't want to write this for myself." OAuth authorization with third-party sites.
CHRIS:
Yeah. There's a package now called Überauth that's written by one of the Phoenix core team members. I think when…
AVDI:
Cool.
CHRIS:
That's relatively recent. So, when you were checking Elixir out it definitely wasn't a thing yet. Überauth. So yeah, it's there. And I think they have support for Facebook, GitHub, and Twitter today. And then you can make your own… it's based off of OmniAuth. They used that as an inspiration.
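[A rough sketch of the kind of wiring involved, assuming the ueberauth and ueberauth_github packages; the controller name, env var names, and template are invented, and the exact API may differ between versions:]

```elixir
# config/config.exs (inside the project's existing Mix config file)
config :ueberauth, Ueberauth,
  providers: [github: {Ueberauth.Strategy.Github, []}]

config :ueberauth, Ueberauth.Strategy.Github.OAuth,
  client_id: System.get_env("GITHUB_CLIENT_ID"),
  client_secret: System.get_env("GITHUB_CLIENT_SECRET")

# web/controllers/auth_controller.ex
defmodule MyApp.AuthController do
  use MyApp.Web, :controller

  # The Ueberauth plug handles the redirect to the provider and the callback;
  # on success the auth struct lands in conn.assigns.
  plug Ueberauth

  def callback(%{assigns: %{ueberauth_auth: auth}} = conn, _params) do
    # auth.info carries the provider's profile data (name, email, and so on).
    render(conn, "welcome.html", user: auth.info)
  end
end
```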
AVDI:
Mmhmm.
CHRIS:
But you're right. The caveat is there's definitely going to be less off-the-shelf tooling. I think it's going to be very similar to how Rails was in its youth, where there are some really compelling reasons to jump on and use it over other technologies but you're going to have to be willing to roll up your sleeves and get your hands dirty on certain occasions.
AVDI:
Mmhmm.
JESSICA:
Which means that there's opportunities for people to contribute and get involved in Phoenix and be part of this excitement.
CHRIS:
That's very true. And that's what I tell people, that it's a great time to be an open source contributor in the Elixir world, if that's something that you're interested in.
JESSICA:
You too could write your Phoenix and Elixir library and go speak about it at conferences. [Chuckles] And hang out with José and Chris because they're pretty cool.
CHRIS:
[Laughs]
JESSICA:
Chris, what are you excited about lately? What are you looking forward to?
CHRIS:
Yeah, so what I've been really excited about lately is a feature we're calling Phoenix Presence. So, our channel layer lets you write real-time applications. But there's a common problem that people need to solve which is who's online right now? So, for a chat application it would be showing who's here, who's online. And it turns out that that's actually a really difficult problem to solve. It seems simple. But there are a lot of edge cases to make it work in a distributed application.
So, what most frameworks and libraries will do is they'll just say, shove that data into Redis. I think some of the Action Cable examples have done that. I could be wrong, but I think there were some examples where they're adding users to Redis when they join and removing them when they leave. But the problem is, one, you have a single source of truth. So, it's not going to be as performant. It's not going to scale well. But then also, if someone trips over that server or it catches on fire, the data you put into Redis saying that user is online is orphaned forever. So, they're always going to be online.
So, what we're doing with Phoenix Presence is we're using a CRDT, which is a conflict-free replicated data type. So, I like to say that we're putting cutting-edge CS research into practice. We've built the CRDT and replication across the cluster so there's no single source of truth for this presence information. So, if a user joins, we replicate that information in the CRDT and it's just going to show up on all the other nodes. If nodes fall behind, or if you're in a net split, it's just going to heal automatically. That's the most exciting thing for me, probably in this whole process, putting these cutting-edge ideas into practice for a very common use case.
So, at the end of the day users can just deploy an application and they can see who's online. Or they can build on top of that to do service discovery. But they don't have to worry about, what do I do? They don't have to deploy Redis and they don't have to worry about what happens if there is a disconnect between some of the servers. It's like this data is just going to be replicated and the system is going to self-heal. And you don't have to actually worry about it yourself.
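[At the time of this conversation the feature was still in development, so the exact API may differ from what eventually shipped; a rough sketch of what tracking looks like in a channel, with invented module names and metadata:]

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel
  alias MyApp.Presence

  def join("room:lobby", _params, socket) do
    # Defer presence work until after the join reply has been sent.
    send(self(), :after_join)
    {:ok, socket}
  end

  def handle_info(:after_join, socket) do
    # Send the newly joined client the current "who's online" state...
    push(socket, "presence_state", Presence.list(socket))

    # ...then track this user. The join is replicated to every node in the
    # cluster via the CRDT, so no Redis or single source of truth is needed.
    {:ok, _ref} = Presence.track(socket, socket.assigns.user_id, %{
      online_at: System.system_time(:second)
    })

    {:noreply, socket}
  end
end
```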
JESSICA:
Speaking of things you're excited about, should we get to picks?
CHRIS:
Sure.
JESSICA:
Okay. Avdi, what are your picks?
AVDI:
I have none.
JESSICA:
Dan-dan-dan.
CHRIS:
[Laughs]
AVDI:
I have been racking my brain, but there just isn't [inaudible] this time.
JESSICA:
Avdi picks cars that work. [Laughter]
AVDI:
Yes. Cars that work. I would pick that, if that was a thing I had experience with recently.
JESSICA:
[Laughs] Alright. Well, I have some picks. The other day at Stripe Kim Scott came and gave us a talk about Radical Candor which is a way of providing feedback within an organization. It's totally distinct from radical honesty because as she said, there's nothing humble about honesty or the truth. Radical candor is about, “Hey, I might be wrong but I kind of observed this and it would have been nice if something was different.” There's a whole protocol to it. No, I'm wrong. Let's back up. I don't know how to explain it well, so go watch her TED talk. It's a great way to talk about how we talk to each other.
Second pick. I just finished the book 'Flex' by Ferrett Steinmetz, the FerrettHimself on Twitter. And it was awesome. It took me about six hours to read and it was a blast. So, that's recommended.
And finally, the other day I was looking on Audible for something to listen to on a car trip and I came upon 'How to Listen to and Understand Great Music', which is one of The Great Courses. And it's 48 lectures for one Audible credit. What a deal. And I'm super thrilled with it because I'm learning all the background on what I usually call classical music but which it insists is actually called concert music or western art music. And in the process, not only am I learning terms and how to understand all these different composers and appreciate them, but I'm also learning a lot about history. I didn't remember all that stuff from school. Currently I'm in the high renaissance period and listening to music by Josquin. And it's really beautiful and it's really fun to listen to. And I recommend learning about concert music because it's pretty.
Chris, do you have some picks?
CHRIS:
Yeah. I've got to plug the Phoenix book called 'Programming Phoenix'. It's just out now from PragProg. It's available as an eBook, and then I think later this month it'll be out in print. So, I'm really excited about that. And I think that if you're wanting to get into Phoenix it's a great introductory start that actually takes you through building a full application. So, check that out.
And my other pick is a talk that José gave at Lambda Days earlier this month talking about… I think the title of the talk was 'Introduction to Phoenix'. So, if you're just curious about what Phoenix has to offer and maybe answer some questions that you didn't hear on this interview, check that out. And I think you'll hopefully be interested enough to look into Elixir and find me on IRC if you have any questions.
JESSICA:
Avdi, your pick emerges.
AVDI:
Yes. Alright, my pick, as is my habit lately, is going to be the last audiobook that I finished. And this is probably one of the most qualified picks I'm ever going to make. The last audiobook that I finished was 'The 4-Hour Workweek' by Tim Ferriss, a book that's been sitting on my shelf for years. And I finally got around to it by listening to the audiobook. There are a lot of issues that I have with this book and I'm not even going to begin to bore you with them in the picks. I feel like I could probably write a whole blog post about it.
A lot of disclaimers and caveats in recommending it. But at the same time I cannot deny the fact that this book both entertained me and motivated me, I think, in some very valuable ways. If nothing else, it really motivated me to get off my butt about going through every single little thing that I do and figuring out whether I can eliminate it, delegate it, or automate it. I have always tried to do that to some degree, but I realize that in order to have the creative space I need, I really need to do it in a much more systematized and brutal way than I have so far. So, if for no other reason, I got that value out of it. Also, for all that the book has some flaws, Tim is a really entertaining reader. So, he does voices. [Chuckles] So, as an audiobook it's a fun listen. So, there you go. There's my highly qualified pick.
JESSICA:
Cool. Chris, I was going to nag you for something not Phoenix related as a pick but then you just picked a book that you finished writing in addition to writing this web framework and running an open source community. So, I can see how you'd mostly be talking about Phoenix.
CHRIS:
Gotcha. If you want a pick for something non-Phoenix related…
JESSICA:
Yes.
CHRIS:
That's still Phoenix related…
JESSICA:
Ah!
CHRIS:
[Laughs] There's a neat paper on the CRDT that we're using for Phoenix Presence. I just tweeted it out a couple of days ago, but that would be a cool one. It's a paper from 2014 on delta-based CRDTs, which is why I call it cutting edge; it's pretty new. I think it'd be a good read to get an idea of what CRDTs offer you as far as distribution goes, and maybe give you an insight into our presence layer.
JESSICA:
Mm.
CHRIS:
So, does that count? Or is that cheating?
JESSICA:
Oh, I think that's awesome.
CHRIS:
[Chuckles] Okay. [Chuckles]
JESSICA:
Thank you. Alright, I guess that wraps up episode 253 of the Ruby Rogues. Thank you for listening and come back next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.]
[Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit C-A-C-H-E-F-L-Y dot com to learn more.]
[Would you like to join a conversation with the Rogues and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at RubyRogues.com/Parley.]