122 JSJ Socket.IO with Guillermo Rauch

Show Notes

The panelists talk to Guillermo Rauch about Socket.io.
Special Guest: Guillermo Rauch.

Transcript

 

[This episode is sponsored by Frontend Masters. They have a terrific lineup of live courses you can attend either online or in person. They also have a terrific backlog of courses you can watch including JavaScript the Good Parts, Build Web Applications with Node.js, AngularJS In-Depth, and Advanced JavaScript. You can go check them out at FrontEndMasters.com.]

[This episode is sponsored by Codeship.io. Don’t you wish you could simply deploy your code every time your tests pass? Wouldn’t it be nice if it were tied into a nice continuous integration system? That’s Codeship. They run your code. If all your tests pass, they deploy your code automatically. For fuss-free continuous delivery, check them out at Codeship.io, continuous delivery made simple.]

[This episode is sponsored by WatchMeCode. Have you been looking for regular high-quality video screencasts on building JavaScript done by someone who really understands JavaScript? Derick Bailey’s videos cover many of the topics we talk about on JavaScript Jabber and are up on the latest tools and tricks you need to write great JavaScript. He also covers language fundamentals, so there’s plenty for everybody. Looking over the catalogue, I got really excited and I can’t wait to watch them all. Go check them out at JavaScriptJabber.com/WatchMeCode.]

[This episode is sponsored by Component One, makers of Wijmo. If you need stunning UI elements or awesome graphs and charts, then go to Wijmo.com and check them out.]

CHUCK:

  Hey everybody and welcome to episode 122 of the JavaScript Jabber Show. This week on our panel, we have Jamison Dance.

JAMISON:

  Hello, friends.

CHUCK:

  I’m Charles Max Wood from DevChat.TV. And this week we have a special guest, and that’s Guillermo Rauch.

GUILLERMO:

  Hey, everyone. 

CHUCK:

  Do you want to introduce yourself for the folks who don’t know who you are?

GUILLERMO:

  Awesome, yeah. So, my name is Guillermo Rauch. And I was the co-founder and CTO of a startup called Cloudup that specialized in real-time file sharing. I’ve been using Node.js for a long time and created Socket.IO, which is a real-time framework for basically emitting events back and forth between clients and servers. And before that, I’ve always been involved in open source. So, I was part of the core team of developers of a framework called MooTools, and before that involved with different server-side PHP projects like Symfony, and always been in the open source ecosystem. Nowadays, I’m working at Automattic, which is the parent company of WordPress.com that acquired my company late last year, and still working on a lot of open source, which is part of our ethos, and working on really cool real-time technology.

CHUCK:

  Awesome. I didn’t know you were a Cloudup guy. I actually use Cloudup.

GUILLERMO:

  Awesome. [Chuckles] Yeah, for Cloudup one of the priorities was to basically make file-sharing as fast as possible. And that meant looking at every part of the stack: how can we quickly show the upload before it’s done uploading? For example, how can we produce a thumbnail with HTML5? Or how can we transcode the video so that it’s easier to play on all devices? And a lot of different things that we took a look at optimizing, and HTML5 and JavaScript were a huge part of that experience.

And as far as real-time communication and Socket.IO are involved, one of the main priorities was to… when you share something we give you a link. And when you send someone that link, they start getting events about how that file or set of files changes over time. So for example, if I send you the file before it’s done uploading, you get progress events. So, it almost feels like a peer-to-peer transfer. Or if the backend is doing a long-running job on your files, then the backend sends you updates about how your file’s changing. Or if new thumbnails or conversions are ready, you get notifications. So, it was a great experiment in basically applying all these innovations to an old problem, which is transferring files.

CHUCK:

  Very cool. So, we’ve got you here to talk about Socket.IO. And you mentioned one of the things that you do with the sockets. Do you want to just back up a little bit and explain what sockets are and why people should care?

GUILLERMO:

  Yeah, definitely. So, I’ll actually back up and tell you how I got involved with JavaScript. I was developing a lot of server-side web applications that rendered HTML on the server, and I think a lot of people still are. Whenever we wanted to make a certain task faster, then Ajax came up. And we said, okay, why would we go to the server and then fetch the entire HTML of an entire new page, which involves a lot of moving parts? The server has to produce and handle the response, and it needs to call the database for things that might not be related to a certain user action.

So, with Gmail, and I remember the web version of Outlook, we got access to these new APIs for making asynchronous requests. And I think that took us really far in terms of making the web work for applications and not just websites, specifically as it relates to performance and responsiveness, right? We started seeing a lot of spinners everywhere, because we were performing some task that was asynchronous in nature and it was affecting only a portion of the page. So, we went from doing everything on the server and rendering all the HTML to making some parts of the application dynamic through Ajax.

But then, as people recognized the value of the web as a platform for applications and not just sites, they wanted to start doing a lot of the things that they were doing on the desktop before, specifically chat applications for example, or applications where there’s a lot of data flow from the server to the client and from the client to the server. And what we found over time is that Ajax is lacking for making a lot of those kinds of applications, specifically what we call real-time applications. In many cases, it’s the server that wants to tell us something about the data and how it’s changing.

So, if you approach the problem of, for example, simply creating a chat application with Ajax, you find a lot of limitations. You start having to fight the model a lot, because you start, for example, polling. And then you say, “Well, I’m going to try to get updates every five seconds or so,” and then basically in the worst case scenario your chat messages arrive four or five seconds late. And the model just doesn’t fit, either, because usually what you want to do is send messages in order, right? So, when you fire off asynchronous Ajax requests, they’re all going to go in parallel. And they’re going to arrive basically in a non-deterministic way, too, because the way that the TCP sockets work on a page is opaque to the developer. For example, it’s called XMLHttpRequest and not XML HTTP socket. So, you’re operating at a higher level of abstraction.

So, I think a lot of the people that were working on these problems like Google or Facebook or Microsoft recognized that we needed a socket API. And the socket API just gives you a lot more liberty to create new protocols and new ways of passing data back and forth that we didn’t have before with Ajax. So, with WebSocket, we basically got a really clean API for bidirectional messaging. So, the server can send us something or we can send something to the server. 

And when I was working with JavaScript a lot on the frontend, I thought, “Well, we can maybe bring this to the backend.” And Valerio, who was the main core developer of MooTools, came up with the idea of making MooTools compatible with the server side. So, we started looking at things like Rhino, which was a Java-based JavaScript runtime that would allow us to run the same code on frontend and backend. And that’s how I started getting involved with server-side JavaScript.

And then another project that was closely related to MooTools came up, which was called APE, Ajax Push Engine. And you see where things are going there, because that was, I think, in 2008 or 2009. Their idea was, “Okay. Ajax is fine, but we also need to push data from the server.” And that was actually an amazing project, because I think had it used V8 instead of SpiderMonkey, which was Mozilla’s JavaScript engine, it would have gotten to the place where Node.js is today, because they were extremely similar in their design. For example, Node.js utilizes the kernel event APIs to basically handle all the connections in the same process in a very fast way. So, unlike traditional web servers like Apache 1, the way that it handles requests is a lot faster and it consumes a lot less memory.

So, that was the big difference at the time between Node.js and traditional web servers. But APE was actually doing that with their own wrapper around these kernel event APIs, and it was plugging into SpiderMonkey. And that’s when I really, really got interested in running JavaScript on the server side, because it would allow us to do this push of data very easily. And when Node.js came out, I basically wrote a WebSocket server. But then I realized WebSocket is just not enough for the kinds of applications that we need to build. It’s too simplistic.

So, that evolved into what Socket.IO is today, which is basically a layer on top of WebSocket that not only adds compatibility for older browsers, but adds a lot of features, like the ability to send arbitrary events, or automatic reconnection, or multiplexing, which are very, very useful when you’re developing actual applications. Like for example, if you wanted to add a chat capability to your application, or if you wanted to add a real-time news feed or functionality like that.
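[To make that concrete, here is a minimal sketch of the event-emitter style described above, assuming Node.js with the socket.io package installed; the "news" and "chat message" event names and the port are made up for illustration:

// server.js
var io = require('socket.io')(3000);

io.on('connection', function (socket) {
  // arbitrary, application-defined events rather than a single "message" type
  socket.emit('news', { headline: 'hello' });

  socket.on('chat message', function (text) {
    io.emit('chat message', text); // broadcast to every connected client
  });
});

// client.js (the page also loads /socket.io/socket.io.js)
var socket = io('http://localhost:3000');
socket.on('news', function (data) { console.log(data.headline); });
socket.emit('chat message', 'hi everyone');
// reconnection after a dropped connection is handled automatically by the client
]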

JAMISON:

  So, I’ve used Socket.IO a fair amount. I’ve never used just the raw WebSocket protocol. Can you talk a little bit about what the base protocol is? And then maybe it’ll make more sense about all the features that Socket.IO provides on top of that.

GUILLERMO:

  Yeah. So, WebSocket is basically, imagine if you were only sending a message event on Socket.IO. In fact, you can basically accomplish what WebSocket does very easily. You just ignore any other event and, back and forth, from server to client and client to server, you’re always sending the same type of event, a message event. And the WebSocket protocol is designed to add really minimal framing. So, what this means is that when you send data from browser to server or server to browser, what surrounds the data that you’re actually sending, like for example user input or, I don’t know, the time or whatever, or a JSON data structure, is basically just an identifier of the message type and the length of the message, and that’s it.
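[For comparison, a sketch of the raw browser WebSocket API being contrasted here; the URL and the JSON envelope are made up, since with plain WebSocket you invent your own message format:

var ws = new WebSocket('ws://example.com/socket');

ws.onopen = function () {
  // everything is just a generic message; any "event type" is your own convention
  ws.send(JSON.stringify({ type: 'chat', text: 'hi' }));
};

ws.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  console.log(msg.type, msg.text);
};

ws.onclose = function () {
  // no automatic reconnection; that logic is up to the application
};
]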

Now, let’s compare that with HTTP. When you send a request, your data or your JSON data is usually surrounded by tons of bytes, which obviously is not a lot given the capabilities of our computers and networks. But it does add a lot of overhead. So, you’re sending the user agent and the content type and all this stuff. WebSocket basically resets us back to a raw TCP socket with a very minimal protocol whose main goal is to be compatible with HTTP.

So, the WebSocket handshake, which is what occurs when you open a WebSocket connection, is basically saying, “Hello. I want to upgrade,” and this is an HTTP header, upgrade to WebSocket. And then the server can reply, “Okay,” they complete the WebSocket handshake, they exchange some security secrets, and then the connection is basically yours. You can send packets back and forth with very minimal framing. Now, the story has gotten a lot more complicated though, because the one improvement that WebSocket made, which was minimalistic framing, can actually be extremely useful for any type of HTTP request.
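[For reference, the upgrade handshake described above looks roughly like this on the wire; the key and accept values here are the illustrative sample values from the WebSocket spec (RFC 6455), not real secrets:

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, the connection carries WebSocket frames with only a small header (opcode and payload length) around each payload.]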

And that’s what Google realized with their SPDY protocol, S-P-D-Y. And that’s basically the basis of HTTP 2.0. So, what they realized is, for example, if you establish a connection with the server and you start making a lot of requests, like regular HTTP requests, they’re all going to include a lot of information that’s common to that session. Like for example, the user agent. Why are you sending it 20 times over the lifetime of the user’s interaction with the server if we can send it once? Or even better, the protocol can already have a dictionary of user agents and then we can refer to it with just one byte. This is basically the idea behind header compression.

JAMISON:

  That’s cool.

GUILLERMO:

There is a gzipped dictionary that’s already built in that’s protocol-aware. And obviously, there have been a lot of debates about how SPDY impacts future developments and how it breaks certain things. But it’s extremely good at just making any sort of request a lot faster. So, their analysis shows that for a lot of the top hundred websites, if they just flip the switch to enable SPDY in their backends, which is a dramatically simpler task than refactoring your entire application for a new protocol, in the worst case scenario there is a 30% improvement, and in the best case scenario a 70% improvement, the last time I checked.

So, WebSocket, which used to be its own dedicated TCP socket, is now starting to be layered over this SPDY multiplexed connection. So, that’s why I think it’s not so useful to try to keep up with how the protocol works, just because even in the last few years it’s changed dramatically. And that’s what we try to do with Socket.IO. We try to hide all this complexity and always try to make it as fast as possible for whatever transport is actually carrying the user data. And we let users only care about their application logic, or whatever is closest to the application, which in this case is the event emitter, the ability to send different types of data back and forth.

JAMISON:

  That was a really long and really good answer.

[Chuckles]

GUILLERMO:

  Sorry.

JAMISON:

  No, that was amazing. That makes a lot of sense. Socket.IO is the framework that abstracts away the changing details of WebSockets. So, you get all the performance benefits that come along with updates to the spec without having to update all your application code basically.

GUILLERMO:

  Exactly.

CHUCK:

  So, you keep mentioning multiplexing, which to me means you’re sending data over multiple channels. How do you generally make the best use of that? Or what is a good use case for it?

GUILLERMO:

  Yeah. That’s a good question. Usually, I think about multiplexing in terms of, it’s similar to how when you include an iframe with Disqus or you include the Facebook like button, you don’t care about the implementation details of that connection over to their servers. You can apply a similar concept to your application, where you don’t care about how many TCP sockets things are going over or you don’t care about how your data’s being managed. But each piece of your application thinks that it’s getting its own dedicated socket. And that’s cool, because you can write a lot more modular code that’s just as efficient as if it was one monolithic thing.

So, basically you can write different parts of your application that establish… you can have 20 sockets in one page. But they’re being multiplexed. And different parts of your codebase think they have ownership over the entire channel, which is effectively what’s happening with WebSocket now, too, with SPDY. So, we give you that guarantee, even if SPDY’s not enabled or even if you don’t know what SPDY is. The key thing is that we can give you the best possible performance without trading off the simplicity of part of the codebase thinking, “Oh, I have this entire communication channel for myself.”
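[A small sketch of what that multiplexing looks like from the client side, using Socket.IO namespaces; the /chat and /feed namespace names are invented for the example:

// Both "sockets" are multiplexed over one underlying connection,
// but each module can treat its namespace as a dedicated channel.
var chat = io('/chat');
var feed = io('/feed');

chat.on('chat message', function (msg) { /* render the chat */ });
feed.on('item', function (item) { /* render the news feed */ });

chat.emit('chat message', 'hello');
]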

CHUCK:

  So, one other question I have, and this is something that I’ve seen usually when I see somebody demonstrating sockets or demonstrating a use for Socket.IO in particular. They’re talking about it as, “You can do this or you can do polling.” So, besides the things that you brought up where you send the user agent every time and things like that, I guess what are the tradeoffs? Are there cases where you’d want to do polling? Or [inaudible] just completely [inaudible].

GUILLERMO:

  That’s an excellent question, and I’ve been meaning to write about it extensively, because polling actually makes a lot of sense in certain scenarios. For example, back when we did the first Node.js hackathon with Joyent, I remember that they created an amazing panel for showing off the details of how their teams were doing. And I remember that we started, of course, implementing it with Socket.IO, or they started implementing it with Socket.IO and I was just simply advising them, because obviously that was the hot thing about Node.js and they wanted to show it off. But then when the competition actually started, we realized that there was so much data coming in that it didn’t actually even make sense to send it in real-time to every client.

This is something that obviously doesn’t happen always. But in certain applications, it might be too much to send all the data in real-time to a frontend. So, in those cases, if you get a snapshot of the data every ten seconds, the user doesn’t have a meaningful disadvantage over getting it every 100 milliseconds or every second. So, there are a few scenarios where polling works. Now that said, I think there is no scenario where it actually is better. So, it’s good enough in that case because you can still throttle data from the server and you have a lot more control over when the server is sending the data to the client. And for example, there were actually some parts of that one frontend that did have a lot of frequent updates that made a lot of sense to send in real-time. And so, they traded that off a little bit.

So, I think it’s a good enough solution in a lot of cases. But what I usually point to is the fact that in most applications that we design these days, the server is the source of truth for the data. So, it never makes sense to make an extra roundtrip for the client to ask the server, “What’s the truth?” or “What’s the latest?” whereas the server knows, and the server knows who’s interested in it, because it keeps track of what clients have open in the page. So, there is no situation that I can think of where polling is actually better. But there are some situations where, if you carefully examine the pros and cons, especially as far as implementation time and implementing it in your stack or changing too much of your codebase, polling could be good enough.

That’s why what I always say is, “Really, it’s all about the user experience.” It’s all about making the UI or the frontend eventually consistent, not having to make the user press a button to get that consistency in the data. So, whatever means you use to accomplish that, they’re all better than not doing it at all. [Chuckles] So, if you’re polling to keep the data up to date on behalf of the user, and you don’t have to have them pull to refresh on mobile or press a button in the browser, that’s better. Now, the optimal solution, and this is the solution that I obviously try to aim for, is the server pushing data to the client, because that’s where the data lives. And the server can be the most efficient way of spreading that data, by means of pushing.
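[A rough sketch of the two approaches being compared, assuming a browser with fetch and the Socket.IO client loaded; the /api/leaderboard endpoint and the "leaderboard" event name are hypothetical:

function render(data) { /* update the UI with the latest data */ }

// Polling: ask the server every ten seconds whether anything changed.
setInterval(function () {
  fetch('/api/leaderboard')
    .then(function (res) { return res.json(); })
    .then(render);
}, 10000);

// Push: the server emits only when the data actually changes.
var socket = io();
socket.on('leaderboard', render);
]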

JAMISON:

  So, I have a generic question about sockets and push in general, or sorry WebSockets and push in general. There’s a lot of knowledge and resources out there about how to scale standard servers that take POSTs and GETs and stuff like that. How does that change when you’re trying to scale WebSockets or a solution where the server pushes data? Do you use different techniques or are they all the same?

GUILLERMO:

  The technique that is most common is generally the same one that’s been applied for regular HTTP, which is you want to load balance the connections. There is a caveat for polling: most load balancers have to be configured to keep all the polling requests on the same box, which is known as sticky load balancing. But the fundamental technique is the same. Something to keep in mind is that you always have to look at the fault tolerance side of things. So, when you’re writing a real-time application, you have to consider things like disconnections and reconnections. And so, the same caveats apply that go for any [greater] web application, for example message retrying.

So for example, Facebook and others do a really good job at keeping confirmations or acknowledgements of the messages, so that if a certain message doesn’t make it through, they retry it on behalf of the user. And if, past a certain number of retries, it still fails, you have to communicate to the user that that one message didn’t get delivered. So, something to keep in mind is that even though the scalability doesn’t change that much, for real-time applications you do have to think about this concept of communicating the state of the connection to the user. In particular, as far as, “Oh, the app is offline. We’ll retry these messages later,” or situations like that.

So, it does open a series of new problems to examine. But the scalability is pretty much the same as for most applications already. The main difference sometimes is that you decide to keep state associated with the connection. And then, just like sessions in most web frameworks, you can keep that state in something like Redis or MongoDB. And then when the user reconnects, you can use some token to resume that session. But other than that, it’s pretty similar.

JAMISON:

  That makes sense. You mentioned the complexities that come with managing connection state. So, is that something that Socket.IO takes care of for you or do you have to handle that in your application as well?

GUILLERMO:

  So, it’s something that we definitely provide a lot of help with. So, we have middleware that makes it really easy for people to keep track of a session and to do authentication. We have a special type of error event to communicate from the server to the client that something has gone wrong, which makes it really, really easy to do authentication and other types of error transmission. We sort of tried to do some experiments at the core level to introduce some hooks to do storage of data or persistence. But it’s just a problem that is really hard to generalize.
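[A minimal sketch of the middleware and error-event pattern just mentioned, assuming Socket.IO 1.x; the token check and event names are invented for illustration:

// server
var io = require('socket.io')(3000);

function isValid(token) { return token === 'abc123'; } // stand-in for a real check

// middleware runs for every incoming connection before 'connection' fires
io.use(function (socket, next) {
  var token = socket.handshake.query.token;
  if (isValid(token)) return next();
  next(new Error('not authorized')); // surfaces on the client as an error event
});

// client: pass the token along and listen for the rejection
var socket = io({ query: 'token=abc123' });
socket.on('error', function (err) { console.log('auth failed:', err); });
]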

I think this problem of generalizing persistence is what most ORMs and abstractions like that have found when they try to abstract over too many different types of databases. It’s usually very hard to do successfully. So, what we’re going to move into soon is different types of persistence solutions on top of Socket.IO that are not generally available for every sort of database, but make persistence and state management for different types of databases really easy to do. But that’s not going to live in the core framework. That’s going to be something that you use on top of it.

And that’s why I was mentioning earlier that multiplexing comes in really handy, because you can mount, we can call it mounting, a subsystem that adds more features on, like /something. And then a certain part of your web application is going to connect to /something and that works in a certain way. And then if you want to send Socket.IO events separately, you connect to /something-else. So, that’s where the multiplexing comes in really handy.
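[The server-side counterpart of that mounting idea, using Socket.IO namespaces with made-up names; each subsystem gets what looks like its own socket even though everything shares one connection:

var io = require('socket.io')(3000);

var chat = io.of('/chat');
chat.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    chat.emit('chat message', msg); // only clients connected to /chat receive this
  });
});

var feed = io.of('/feed');
feed.on('connection', function (socket) {
  socket.emit('item', { title: 'welcome' });
});
]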

But yeah, state is definitely a very, very, very interesting problem. Persistence is a very, very interesting problem, too. And it’s very common to every web application that we develop, with exceptions, maybe an IRC client. It’s very desirable to always have persistence and history and queues and building blocks like that.

JAMISON:

  That’s probably why all the demos are “make an IRC client,” because you get to skip all the hard stuff.

[Chuckles]

GUILLERMO:

  So, that’s something that I notice a lot, even with the chat clients. To get chat done really, really well, and I was sort of hinting at that earlier, you have to do a lot. You have to do retrying. You have to do acknowledgements. For example, sometimes you could get duplicates. You have to do reconciliation with the server when you disconnect and you’re offline for a while. And then when you resume the session, you have to get everything in between. Presence is another thing that seems easy when you first approach it. But to do it in, for example, a multi-device way, it’s not as simple.

A lot of these problems have already been widely researched, though. For example, XMPP tried to solve all of those problems [chuckles] in one protocol. So, that speaks to why maybe they didn’t get as much adoption. That’s why I intend to keep Socket.IO really simple and really approachable, even if the applications that we have in the Get Started demos are not going to be 100% perfect. And actually, even in the Get Started chat that I recently created, I mentioned as homework a lot of different tasks that people could do to improve it. But to do everything perfectly, we can’t do it all in one module. So, that’s why we’re going to have different, really good companions to Socket.IO that are going to make all those tasks easier.

JAMISON:

  It’s almost like the worse is better philosophy, not that Socket.IO is worse, but…

GUILLERMO:

  [Chuckles] I think it’s [inaudible]…

JAMISON:

  by being very approachable…

GUILLERMO:

  I think simple is better when it comes to open source, especially because a lot of projects have really big learning curves and tons and tons of features to learn. That’s why keeping the core small is, I think, one of the major things we’ve accomplished in 1.0: deciding, okay, this is it for the API for 1.0 and for the rest of the branch. And we can now focus on other things. For example, there have been tons of contributions of libraries for almost every language and framework known to man. And we can only do that when we say, “Okay, this is it. This is how many features we’re going to have in 1.0.”

JAMISON:

  Sure. So, can you talk a little bit about Engine.IO? I know it’s somehow related to Socket.IO but I don’t understand what it is.

GUILLERMO:

  So, it’s basically transport [inaudible].

JAMISON:

  Okay.

GUILLERMO:

  Which can be swapped with WebSocket directly if you wanted. So, it’s essentially the same API as WebSocket, almost like a shim. But it has support for multiple transports, like polling and JSONP and WebSocket itself. So, it’s basically a compatibility layer. If you want to use WebSocket in a really reliable way, you probably need to not assume that the network or the browser or the device is going to support WebSocket. So, what we do instead with Engine.IO is we use Ajax, like Ajax polling. And then if WebSocket works, we upgrade to it. So, it’s basically a very reliable way of establishing a socket connection.
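[A small sketch of Engine.IO used directly, based on its documented WebSocket-like API; the port and messages are placeholders:

// server: npm install engine.io
var engine = require('engine.io');
var server = engine.listen(3000);

server.on('connection', function (socket) {
  socket.send('hello'); // just like WebSocket: plain messages, no event types
});

// client (engine.io-client): starts on polling and upgrades to WebSocket if it works
var socket = new eio.Socket('ws://localhost:3000');
socket.on('open', function () {
  socket.on('message', function (data) { console.log(data); });
  socket.send('hi');
});
]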

CHUCK:

  Are there any security concerns that you need to have that are different from just a regular HTTP connection?

GUILLERMO:

  I think the common principles always apply. You have to be careful about authentication. You have to be careful about cross-domain. Some people have expressed concern with JSONP as a transport, but that applies every time you use JSONP: you’re basically evaluating code directly in the page that comes from the server, which is different from just evaluating, or I’m sorry, parsing a JSON response.

But other than that, it’s very similar to everything you do for… I remember in the first year of Socket.IO, I would go to conferences or workshops and I would start live coding an example. And people would notice that, for the sake of practicality and speed in the tutorial, I would use, for example, innerHTML. And immediately, someone in the audience would troll the live demo and insert a string with an alert.

JAMISON:

  [Chuckles]

GUILLERMO:

  So, it’s basically the same principles. Sometimes, I would say, I’ve seen that people sort of “forget” them when they’re creating real-time applications just because it’s something new. But really, you have to always have those considerations in mind.
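[A tiny sketch of the innerHTML issue mentioned above, with a hypothetical "chat message" event and a #messages list:

var socket = io();

socket.on('chat message', function (msg) {
  var li = document.createElement('li');
  // li.innerHTML = msg;   // vulnerable: the payload would be parsed as HTML
  li.textContent = msg;    // safer: any markup in the message stays inert text
  document.getElementById('messages').appendChild(li);
});
]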

JAMISON:

  So, what’s in store for the future of Socket.IO? You mentioned that you nailed down the feature set for 1.0. What’s after that’s done?

GUILLERMO:

  So, in the most immediate future we’re going to be focusing on reliability and speed. 1.1 is going to be pretty awesome in that we have a lot of nice improvements in terms of reliability, and actually a smaller build for browsers. And I think in one situation we’re even going to remove one roundtrip. So, it’s going to be cool in that you just drop it in and it’s better and faster. So, for the time being we’re going to continue to do that in the 1.0 branch.

As far as 2.0, we’re probably going to do what jQuery did, in that dropping support for very old browsers will make sense. But we’re not going to drop support in the sense that it’s going to be impossible to support them. We’re just not going to make the default build work for things like IE 6 or IE 7, and potentially IE 8. So, you’re going to get a smaller build by default. And it will still be possible to build it with support for older browsers, just because, like I mentioned earlier, Engine.IO exposes to Socket.IO basically the WebSocket API.

So, now that we did all that work, we don’t have to worry about almost any browser compatibility problems. And we also have very robust tests in place that span all the mobile devices that we support and all the versions of IE. So, to us it’s not a problem to support older browsers. But I would like to see a very, very lean build for modern browsers. So, that’s definitely in the scope for 2.0.

And like I mentioned, we’re going to be focusing a lot on the other problems that people normally have, to make them really easy with Socket.IO. So, three of them that I can mention right now are: one is peer-to-peer, making it really easy to send events with any arbitrary data, with the same Socket.IO guarantees, between peers directly, with reconnection. So basically, imagine if we apply all the same Socket.IO principles to peer-to-peer connectivity. And even with server fallback. So, that’s one thing that’s in scope.

Another one is persistence, and another one is presence. So, persistence means basically like I mentioned earlier, like getting updates over a data set that is beyond the scope of the process memory that you’re connecting to. And presence relates to making it really easy to say you’re online, you’re offline, you’re online but you have two active sessions, one from your mobile phone and one from the browser. So, we’re just going to make that extremely easy to do, and to plug into your Socket.IO server. So, those are the main things that are on my roadmap. But obviously, we want to maintain a steady pace of minor releases as well.

JAMISON:

  Gee, I have a change of subject.

GUILLERMO:

  Alright.

JAMISON:

  And you may be biased because you wrote an open source push implementation. But what do you think of all the third-party providers like PubNub or Pusher or Firebase that are doing data synchronization stuff?

GUILLERMO:

  Oh, I think it’s fantastic. Hopefully this is so big that we have dozens of those and we have tons and tons of frameworks. The main reason is, like I was mentioning earlier, it comes down to UI and UX. We want people to get their data from the server faster. That means not moving your mouse over to the toolbar and clicking refresh. And that also means not doing pull to refresh every second. So, I think it’s such a fundamental thing to me. It’s such an important thing. I want every application to act like this. There is never a good reason not to, well, maybe there are a few, but normally a UI has to be self-updating. And that’s how mostly I’ve come to define real-time applications.

Like I was mentioning earlier, if you’re polling that’s fine. It’s a significant improvement over not doing it and not showing data updates in real-time. So hopefully, there are tons of companies and frameworks that have this as a motivating principle and make it easier for users and companies and everyone else to get there. 

As for how they compare, all those that you mentioned have differences at the technical level. So beyond that, I can’t say, “Oh, this one or that one.” But they’re all really good. In fact, PubNub wrote a great article on how they analyzed Google Trends, basically the search data for all the terms in this family of concepts, like WebSocket and Socket.IO and push and all this. And it was showing basically an exponential growth in interest. So, when you have a situation like that, where so many people and companies are interested in these technologies, you’re not going to have one solution for everybody.

That’s why also, as part of our work right now, like you mentioned we are all about open source, part of what we’re doing is trying to bring the entire community together. There are tons of clients for Objective-C and now Swift. And so, bringing the whole ecosystem together to make the code accessible to everyone is also one of the priorities.

CHUCK:

  Do sockets work nicely with some of the frontend frameworks like Ember or Angular? Do they require a library?

GUILLERMO:

  Yeah. [Inaudible] What I’ve been noticing lately is Angular has a great implementation of a component to hook up Socket.IO. So, I’ve been noticing, more than anything else, a really big uptick in how… it’s almost become a stack where people use Angular and Socket.IO together a lot. I myself haven’t looked at it. I haven’t used it yet. But it’s something that I would definitely try, because it seems like people are having a lot of success in building the entire application in this way, with Angular.

But basically any framework that makes it really easy to do data binding is going to play really well with Socket.IO. Or even if you’re writing one component for an otherwise not real-time web app and if you’re doing it with Socket.IO and jQuery, that’s also going to work well. But in general, I think frameworks are going to help with a lot of other issues like offline support and routing. And so, in general it’s very useful to have them in your toolset. 

JAMISON:

  So, with Angular you just modify the data and the digest will take care of updating your stuff? It seems like it shouldn’t be too much work to put it in. If you’re using something like Ember Data, I imagine you’d have to do some trickery because it has a central place where it expects all the data to come from. But I know I’ve seen blog posts about using Socket.IO with Ember.

GUILLERMO:

  Cool.

JAMISON:

  I just haven’t read them.

CHUCK:

  [Chuckles]

JAMISON:

  I’m confessing.

GUILLERMO:

  [Chuckles]

CHUCK:

  Alright.

JAMISON:

  Well, do you have any questions that you wish we would have asked you? Any softballs you want us to serve up?

GUILLERMO:

  [Laughs] No. Actually, there have been great questions in this podcast, specifically the ones pertaining to scalability but also the polling one was really interesting because I haven’t had an opportunity to talk about that one funny anecdote with the first Node.js example that we did. But yeah, so far, so good.

CHUCK:

  So, one thing that does come to mind speaking of scaling is that a lot of times you wind up with the load balancer that sits between your app servers and your web browser. So, does it play nicely over those or does it have to get some kind of direct connection to the server?

GUILLERMO:

  No, it plays very nicely with those. At Cloudup we use Nginx as a load balancer. It has remote address sticky load balancing. I’ve seen people do it even with iptables, where you have a bunch of processes running and then a set of iptables rules for matching ports with connections. That’s what Zendesk does for scaling Socket.IO. But I’ve also seen a lot of successful usage of HAProxy, Vagrant, ELB, the Elastic Load Balancer, and other services that with a certain configuration work as well. In general, it’s just fairly easy to do. There are some issues with Heroku right now. But they’re working on addressing them. Like I said, it’s pretty standard.
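[As one concrete example of the remote-address sticky load balancing mentioned above, an Nginx configuration along these lines is commonly used; the upstream addresses and ports are placeholders:

upstream socket_nodes {
    ip_hash;                  # keep each client's requests on the same backend
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

server {
    listen 80;
    location /socket.io/ {
        proxy_pass http://socket_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # let the WebSocket upgrade through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
]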

CHUCK:

  Alright. So, should we do the picks?

JAMISON:

  Sure, I’m ready.

GUILLERMO:

  Yeah.

CHUCK:

  Alright, Jamison. What are your picks?

JAMISON:

  I have three picks. One is a slide deck on CSS. CSS is a thing that I wish I was better at. And this one would make me better at it if I knew all the stuff in here. It’s a presentation by my coworker and friend Alma. He does some crazy trickery. There’s one where he creates a periodic table in CSS and then has all these hover effects. And it’s all pure CSS and uses some nth-child magic to make sure that all the different groupings in the periodic table are colored correctly and that the hover effects will move the hover into the right place so it doesn’t obscure any elements that are close by, and things like that. It’s pretty nuts.

My second pick is just a thing I saw posted on Twitter in The New Yorker magazine. And it’s just a short story about an immigrant from the 1900s that moves to New York and then has a Rip Van Winkle thing where he falls asleep for 100 years and then wakes up in current day New York and just makes his way in the city full of hipsters. It’s pretty funny.

And then my last one is a thing that I think I’ve picked a few times. Well, I’ll pick it again because Guillermo’s here. It’s a blog post he wrote called ‘Need for Speed’, just talking about some of his ideas about real-time applications and how to make things fast. It’s a really good read if you haven’t read it yet.

GUILLERMO:

  Thank you.

CHUCK:

  Awesome. I’ve got a couple of picks. I picked these books on all of my shows but I really liked them. So, I’m going to pick a couple of them in here. One of them is called ‘QBQ! The Question Behind the Question’. And it talks about personal responsibility. It’s actually a really short book, but it really, I don’t know, it really inspired me. So, I think everybody should go and read it. 

Another book that I got from the same list was ‘The Go-Getter’. And it’s a story with a moral, so sort of a fable I guess. But it goes through and really talks about what it means to be a go-getter. And I don’t know, I really liked it and really found it inspiring, too. 

And then the last book is ‘Rhinoceros Success’. And if you don’t like books that have a metaphor and take it too far, to the point where they get a little bit hokey, then don’t read that one. But I thought it was really good. I thought the overall message of the book was right on point. And I really liked it. So, those are my picks. Guillermo, what are your picks?

GUILLERMO:

  I’ll go with three as well. The first one is a project that came out recently, I think maybe a week ago. It’s called p5.js. You guys might have even talked about it. So, it’s basically Processing, the language that makes it really easy to do any sort of interactive art or interactive graphics. Before, I think it was for [inaudible] and then it was ported to JavaScript by John Resig. So, p5 is basically: what would Processing look like if it were started today, with JavaScript, with HTML5, et cetera?

And I love it not only because I think it makes it really easy for people to get into programming because it’s so visual. But also it shows the versatility of JavaScript and how you can create higher level domain-specific languages very naturally. It basically reminds me of Mocha, what TJ did for testing, introducing all those globals and just making it really expressive, almost like English to write a test. So, this does the same but for graphics. So, check it out, p5.js. 

The next one is also on the same note of getting more people to learn and get interested in computer science. There’s a great post by my coworker, Beau Lebens on why JavaScript is the next programming language you should learn, or the first programming language you should learn. It’s a really great succinct blog post. 

And finally, it’s a Twitch.tv channel called FishPlaysPokémon.

[Laughter]

GUILLERMO:

  I think it’s an awesome display of computer vision and just badass ideas being implemented. [Chuckles] I think it’s funny how we got, I think, 20,000 people to watch that concurrently. And someone has written a collaborative Pokémon emulator online, you can check it out on weplay.io, and it’s really fun to see.

And actually, that also inspired an idea that someone should do. So, on the Socket.IO homepage there is an eight-line example of how to get events for Twitter searches. And then they get emitted with Socket.IO to the client. So, it’d be really fun to make a game where Twitter plays Pokémon by using a hashtag or mentioning someone. And I think it would be maybe 15 lines of code and be really funny to watch. So, those are my three picks. 

JAMISON:

  I think the fish is dead.

GUILLERMO:

  I think some people reported that he was dead, yeah. But when I first started watching, he was dead, and then he came back alive.

[Laughter]

GUILLERMO:

  [Inaudible] dead fish. 

JAMISON:

  Oh wait, no. I see it. Fish is alive.

CHUCK:

  Alright. 

JAMISON:

  This is incredible.

CHUCK:

  I’ll have to go check it out in a second here. I don’t want the audio to play through the podcast. That was really cool. Thanks for coming, Guillermo. 

GUILLERMO:

  No problem. Thank you.

JAMISON:

  Thank you. This was great.

[Working and learning from designers at Amazon and Quora, developers at SoundCloud and Heroku, and entrepreneurs like Patrick Ambron from BrandYourself, you can level up your design, dev, and promotion skills at Level Up Con taking place October 8th and 9th in downtown Saratoga Springs, New York. Only two hours by train from New York City, this is the perfect place to enjoy early fall and Oktoberfest while you mingle with industry pioneers in a resort town in upstate New York. Get your ticket today at LevelUpCon.com. Space is extremely limited for this premium conference experience. Don’t delay. Check out LevelUpCon.com now.]

[This episode is sponsored by MadGlory. You’ve been building software for a long time and sometimes it gets a little overwhelming. Work piles up, hiring sucks, and it’s hard to get projects out the door. Check out MadGlory. They’re a small shop with experience shipping big products. They’re smart, dedicated, will augment your team and work as hard as you do. Find them online at MadGlory.com or on Twitter at MadGlory.]

[This episode is sponsored by RayGun.io. If at any point your application is crashing, what would that cost you? Lost users, customers, revenue? RayGun is an essential tool for every developer. RayGun takes minutes to integrate and you’ll be notified of your software bugs as they happen, with automatic notifications and a full stack trace to detect, diagnose, and fix errors in record time. RayGun works with all major mobile and web programming languages in a matter of minutes. Try it for free today at RayGun.io.]

[Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.] 

[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]

[Do you wish you could be part of the discussion on JavaScript Jabber? Do you have a burning question for one of our guests? Now you can join the action at our membership forum. You can sign up at JavaScriptJabber.com/jabber and there you can join discussions with the regular panelists and our guests.]
