114 JSJ Asynchronous UI and Non-Blocking Interactions with Elliott Kember
Show Notes
The panelists talk to Elliott Kember about asynchronous UI and non-blocking interactions.
Special Guest: Elliott Kember.
Transcript
[This episode is sponsored by Frontend Masters. They have a terrific lineup of live courses you can attend either online or in person. Their upcoming course is JS Framework Showdown with Brian Holt from reddit. You can also get recordings of their previous shows like JavaScript the Good Parts, AngularJS, CSS3 In-Depth, and Responsive Web Design. Get it all at FrontEndMasters.com.]
[This episode is sponsored by WatchMeCode. Have you been looking for regular high-quality video screencasts on building JavaScript done by someone who really understands JavaScript? Derick Bailey’s videos cover many of the topics we talk about on JavaScript Jabber and are up on the latest tools and tricks you need to write JavaScript. He also covers language fundamentals, so there’s plenty for everybody. Looking over the catalogue, I got really excited and I can’t wait to watch them all. Go check them out at JavaScriptJabber.com/WatchMeCode.]
[This episode is sponsored by Component One, makers of Wijmo. If you need stunning UI elements or awesome graphs and charts, then go to Wijmo.com and check them out.]
CHUCK:
Hey everybody and welcome to episode 114 of the JavaScript Jabber Show. This week on our panel, we have AJ O’Neal.
AJ:
Yo, yo, yo, coming at you live from a late morning.
CHUCK:
Jamison Dance.
JAMISON:
Hey friends.
CHUCK:
Joe Eames.
JOE:
Howdy.
CHUCK:
I’m Charles Max Wood from DevChat.TV. And this week we have a special guest. That’s Elliott Kember.
ELLIOTT:
Hi everybody.
CHUCK:
Do you want to introduce yourself?
ELLIOTT:
Okay. I’m Elliott Kember and I work at Dropbox doing prototyping and development. And yeah, that’s me. I just recently moved to San Francisco where the sun shines. I used to live in England where the sun didn’t shine.
JOE:
San Francisco’s nice, isn’t it?
ELLIOTT:
It’s lovely. The wind will [inaudible].
CHUCK:
Did you just say you live where the sun don’t shine?
[Laughter]
ELLIOTT:
I hear the sun never rises on the British Empire. [Laughter]
CHUCK:
Very nice. Alright, well we brought you along today to talk about asynchronous UI and non-blocking interactions.
JOE:
And it’s just dot, dot, dot.
AJ:
Wait, what about, we’re using Swift now, huh?
ELLIOTT:
Oh, yes. You guys have been using it? I’ve been using it.
CHUCK:
Swift Jabber.
JOE:
Been on it for about a year.
JAMISON:
[Inaudible] the whole thing here though, Joe.
AJ:
Yeah, I was pretty much on the dev team for Swift five years ago when we started with it. It used to be called Go back then. But…
CHUCK:
I’m kind of curious because I don’t think I was the one that set this one up so I’m not exactly sure what is meant by asynchronous UI. Care to fill us in on that?
ELLIOTT:
Well, asynchronous UI and non-blocking interactions are more or less the same thing. It’s like if I click something and then I want to do something else, I don’t want to be waiting for a spinner or be blocked on something. I want to feel like I’m actually using a computer that is here in the room listening to me. And especially on the web, there’s been a lot of movement recently towards building interaction models that allow for this kind of asynchronous interaction where I can do something and it’ll show that it’s loading but the screen isn’t frozen or I’m not waiting for a whole new page to load. It’s super important.
And I think it gets forgotten a lot, especially here in Silicon Valley where everybody lives really close to the servers. I’m originally from New Zealand. In that particular part of the world, we have a serious problem with latency, especially on sites that are hosted in the States. And you end up really, really getting a good feel for which sites are asynchronous and which are super-duper synchronous. And anything that’s synchronous, it makes you wait. It makes you wait five to ten times as long if you’re a long way away. So, it’s a topic that’s near and dear to my heart. And it’s the new craze on the web, I think. It’s the new jam. It’s been happening for a while, but recently everybody’s started to move towards a few JavaScript frameworks that allow it a bit more easily. I think it’s a topic on everybody’s minds in web development.
JAMISON:
So, there’s a bunch of stuff you said that I have questions about. One of them: you mentioned there are a few new JavaScript frameworks that allow it more easily. Can you talk about the tools we have to build asynchronous UIs?
ELLIOTT:
Yeah. Well, I don’t know if you guys are using Angular or Ember or Backbone or whichever. I think there’s a couple of new ones, too, since I last looked. I’ve been off of these for a few weeks while I’ve been getting a few things done in the Mac OS development world. But recently, I’ve been using Ember.js which I really like. It’s my go-to JavaScript framework. There’s also Angular which is similar, but different. Backbone I used to [inaudible] a lot of it.
JOE:
Never heard of it.
[Chuckles]
ELLIOTT:
Okay. Well, I don’t actually know what your guys’ background is, all of you guys, or our listeners. So, I may be telling you all things that you already know.
JAMISON:
I think there’s probably a mix of, I would imagine most of the listeners have familiarity with at least one of those. Can you maybe talk about how these tools specifically enable asynchronous UIs?
ELLIOTT:
Yeah, sure. One of the biggest things about Ember that I like is the fact that it’s got a data layer. It’s called Ember Data and it sits between your frontend and your API, service, or whatever you’ve got running in the background. And it means that when you save a record, instead of making a save all the way up to the server and coming all the way back waiting on that promise, you can update the record and update all of your UI and just show that it’s saving in the background. But the data’s already there. The data’s in the view and you can see it. And it just gets persisted away to storage in the background. So, rather than waiting for each atomic save to happen and each record to be saved to the database, you can keep editing and edit the next thing before that save comes back. It means you get a few complications with things that are in different states and things that have to happen after other things, because that’s pretty normal. But the advantage of it is that if you get it right, it’s just a really nice way to build things.
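[Editor’s note: a rough sketch of the optimistic-save idea Elliott describes, in plain JavaScript rather than Ember Data itself. The endpoint, record shape, and render function are invented for illustration.]

```javascript
// Minimal sketch of an optimistic save: update the UI immediately,
// persist in the background, and roll back if the request fails.
// The /api/records endpoint and record fields are hypothetical.
async function saveRecord(record, updates, renderFn) {
  const previous = { ...record };

  // 1. Apply the change locally and re-render right away —
  //    the user keeps working while the save is in flight.
  Object.assign(record, updates, { saving: true });
  renderFn(record);

  try {
    // 2. Persist in the background.
    const res = await fetch(`/api/records/${record.id}`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(record),
    });
    if (!res.ok) throw new Error(`Save failed: ${res.status}`);
    record.saving = false;
  } catch (err) {
    // 3. On failure, copy the old values back and surface an error state.
    Object.assign(record, previous, { saving: false, error: err.message });
  }
  renderFn(record);
}
```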
It also, I found it means that you can develop asynchronously as well. If you’re not waiting on a certain part to be finished, or a certain API to be available, you can fake it or you can, if things are very slow in development, you can get around it. But yeah, I really like it. But Ember is like a top to bottom JavaScript framework where it handles all your routing, it handles all your models, templates. There’s not a lot that you can or really need to change when you’re using it because it’s been designed from the ground up. I haven’t used Angular nearly as much. But it’s a little bit more component-y. So, you can just have a little widget or an app that sits inside your normal site. And it lets you jump into it without quite so much learning and quite so much magic. Ember is more Railsy that way.
JAMISON:
I mean, the capability to build asynchronous UIs is a consequence of some of the design decisions in JavaScript, where it’s got an event loop. You could do this stuff with plain jQuery spaghetti, but it sounds like you’re saying it’s easier now with these newer frameworks.
ELLIOTT:
Yeah. This technology is certainly nothing new. People have been doing asynchronous work on the web for an awfully long time. But it’s that long tail where 5% of the developers out there might have been doing this for a long time, but a lot of people either don’t have time to or just don’t really understand how it works. And it takes them a lot longer to catch up and to start doing this stuff.
And with these frameworks and with these sets of rules and workflows that people can work alongside, it means that anybody can come to grips with an asynchronous workflow or asynchronous products a lot more quickly. And they don’t have to do as much thinking about what technology they’re using. They can just follow a template, follow a prescribed way of working. And that works out really nicely. I found it to be super-duper helpful. And especially just in giving recommendations on application structure and things like that, where I don’t really want to have to think about how my controls are wired together and things like that, which is why I prefer using these frameworks.
And also, it means you can update the framework code separately of course. But there are a lot of JavaScript programmers out there that didn’t grow up with a proper programming background or the fundamentals of application design. And it’s really helpful to bring these products out to the front and to get all the parts that don’t change between applications to be consistent between separate projects that you work on. So, instead of having to upgrade your JavaScript core stuff between sites that you work on, you can take a common core of libraries around with you. I think it’s really nice. I think it’s starting to become a bigger thing on the client side now.
CHUCK:
So, it seems to me though that you could write an app that posts and does all this stuff synchronously anyway, with some of these frameworks. Do they make it hard for you to do that or are there still things that you have to keep in mind while you’re doing it in order to avoid having that issue?
ELLIOTT:
Did you say synchronously or you can do things synchronously?
CHUCK:
Yeah. Do they allow you to do things synchronously?
ELLIOTT:
Yeah, they do. They allow you to, I guess the way I would say it is that they allow you to override the asynchronicity with the synchronous interactions. So, you’re still performing asynchronous events. You’re just blocking the interaction, if you see what I mean. You can block something and make it wait until an asynchronous thing comes back. But it still means that other things can happen in the background or you can switch states in your application. I guess with asynchronous on the web, a large part of it is whether or not you’re actually loading a new page. And if you’re not loading a new page and the information is still there and you can click around and you can go to, say, a different tab or go back to the original tab… I mean, I’m sorry, I mean like a tab in a page if your web application has multiple tabs or windows open at the same time inside it, and you can be editing one and jump to a different part of the application while that’s still saving. And that can be as synchronous as you like and can take a while.
And you can even prompt the user when that thing’s finished. It’s like when you send an email in Gmail and they have that, what is it, ‘z’ to undo? It goes away into the background and that email is processed synchronously. You can’t really do anything until it’s finished sending. But it disappears and gets out of your way, which I really like. And I think that’s what I mean. You can still do stuff synchronously of course, when it makes sense. But to do everything asynchronously in the meantime means you can override that with synchronous events and you can wait for stuff to come back.
CHUCK:
I was just going to ask, is there more to asynchronous UIs than just using a framework?
ELLIOTT:
Oh, yeah. Oh, absolutely. The framework will get you started. But it’s important to use the framework and get an idea for what you’re being told to do in it and how you’re supposed to work. You can certainly break these frameworks and do horrible asynchronous stuff and change pages and reload and all sorts of stuff. But it pushes you in the right direction, I think. It gives you the tools to do things correctly, to use promises and callbacks. And everything’s there. It just makes it easier to do it that way than to do it the wrong way.
JAMISON:
So, I was going to ask about the performance aspect. You mentioned it as one of the motivations for asynchronous UIs, especially if you’re dealing with longer latency from being on the other side of the world from some of the servers you’re visiting. How do asynchronous UIs interact with performance? How do they affect performance?
ELLIOTT:
I think they’re huge. One of the things you can do with asynchronous stuff is you can preload things that you know you’re going to need later on. For example, we wrote an app called Forge, which is a static hosting environment. And one of the things we did was compile your whole site, all of the HTML, into one JavaScript manifest file. And then while you’re browsing around the site, instead of pulling each file from the server, you just display it on the page. So, the whole downloading all those pages in the background is asynchronous. And it comes down off a CDN so it’s super quick. But that data is already there for you, because you preloaded that asynchronous interaction. You didn’t have to wait for someone to click or anything. You just load them all at once. So, because the content doesn’t change between requests, you can make those asynchronous calls in advance sometimes. You can cheat a little bit and get stuff first. And you can’t really do that if you don’t know what the content in those things is.
CHUCK:
Let me stop you real quick because I’m not sure I understood. So, what you’re saying is you have a manifest or a list of the JavaScript files that you’re going to need for…
ELLIOTT:
HTML files.
CHUCK:
Or HTML files.
ELLIOTT:
Yeah, this was something we pioneered. It worked pretty well, actually. It’s basically a giant hash and the keys for the hash are the URLs for all the pages and then the values are the HTML contents. And when you click a link that matches one of those keys, instead of actually requesting that HTML page from the server, it pulls it out of that file. Now, there are some caveats to that if you design your JavaScript in a way that expects something, expects an event to fire or something like that. You can get into trouble. But generally, if it’s just a static HTML site and there’s nothing to do, there’s nothing going on with the JavaScript, it seems to work pretty well. And this is super useful if you’re on mobile or you’re in New Zealand or you’re on the long end of a really [chuckles] really useless dialup connection, these things make a huge amount of difference.
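[Editor’s note: a small sketch of the manifest approach Elliott describes. The manifest URL, its exact shape, and the assumption that the page content lives in a main element are all invented for illustration.]

```javascript
// Sketch of the "one manifest of all pages" idea: after the first page
// loads, fetch a gzipped JSON hash of { "/url": "<html>…" } from a CDN,
// then serve link clicks out of memory instead of hitting the server.
let pages = {};

window.addEventListener('load', () => {
  // Preload every page's HTML in one background request.
  fetch('https://cdn.example.com/site-manifest.json')
    .then((res) => res.json())
    .then((manifest) => { pages = manifest; });
});

document.addEventListener('click', (event) => {
  const link = event.target.closest('a');
  if (!link) return;
  const path = new URL(link.href, location.href).pathname;

  // If we already have this page's HTML, swap it in without a round trip.
  if (pages[path]) {
    event.preventDefault();
    document.querySelector('main').innerHTML = pages[path]; // assumes a <main> wrapper
    history.pushState({}, '', path);
  }
  // Otherwise fall back to a normal navigation — the pages still exist
  // on the server, so nothing breaks.
});
```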
CHUCK:
So essentially, what you’re saying is you get all of the HTML upfront.
ELLIOTT:
Mmhmm.
CHUCK:
Does that slow down the initial page load or do you do that after that page is already loaded?
JOE:
Yeah, what’s the cost? You could be talking about a lot of HTML potentially, right?
ELLIOTT:
Well, it’s gzipped, which helps, and so it is quite small. And it loads after that initial page load, so you never actually see that performance hit. We benchmarked it for most sites as being smaller than any one image on any of the pages of those sites. So, compared to other content that you can pull down, it’s much smaller once it’s gzipped. So, it’s pretty small. Yeah, it’s tiny. And when you think about it, that whole model of requesting the about page and then requesting the contact page and then requesting the home page again is kind of nuts. And it’s silly that you’d want to do that. I know that’s somewhat the way the internet was built when it was designed, which is quite some time ago now.
But having all of those tiny, little HTML files as separate resources, each one of which needs to be requested from some server that could be a thousand miles away or 10,000 miles away, is sort of a broken model in today’s international setup where you still have to worry about the speed of light. You still have to worry about the unknown connection on the other end, one of those things that you need to think about. So, if you could download everything in one [inaudible], it makes a lot of sense. And I should say that individual requests are what really hurt with latency. If you’ve got a slow connection, once your download is going, it’s actually not too bad. Once it’s on the move, it’s okay. You don’t have that same latency once the stream is coming down to your computer. But every single HTTP request needs that horrible roundtrip all the way to the States or wherever it is, [inaudible].
JOE:
You know, on my 3600 baud modem, it is pretty slow.
[Laughter]
ELLIOTT:
Man, you joke. Whereabouts are you, Joe? Where are you?
JOE:
Utah.
ELLIOTT:
Probably not too far from most of the servers that you hit, right?
JOE:
Oh, no. No, not at all.
ELLIOTT:
So yeah, people always joke about, it’s like, “Ah, we don’t need this. We’ve got ADSL now. We’ve got DSL. We’ve got Fiber,” all this stuff. But again, every couple of years, I fly back out to New Zealand, or every year. And I browse around the net and I use some sites and I do all sorts of bits. And what I’ve noticed every time is that things are getting fast. Things feel faster. I don’t know whether things actually are getting faster. On a lot of sites, you’d still have that latency, so you still feel it on initial page load. The apps that are doing things asynchronously are just delightful to use. I get away every once in a while into the country. It means a pretty rural connection. And it’s almost impossible to use sites that are synchronous where every page has to go back to the server and get all its contents. And it’s super frustrating. It’s like death by a thousand paper cuts once you load 10 or 12 different pages one by one.
JOE:
One of the worst connection types is the satellite. Because you get the satellite down which is really high latency because it’s bouncing around up there.
ELLIOTT:
[Chuckles] To space.
JOE:
And then you get your modem up, right?
ELLIOTT:
[Chuckles] Yeah. Yeah, so it’s super, I guess that is an asynchronous connection to me.
JOE:
Yeah. [Chuckles]
ELLIOTT:
Or, one is super, way faster than the other.
JOE:
Right. But the down is still highly latent, still really, really latent.
ELLIOTT:
And the two aren’t connected. So, the two aren’t synchronized over a single line or anything.
JOE:
Right.
ELLIOTT:
They’re two separate connections where one is posting and one is getting.
JOE:
Sorry Chuck, didn’t mean to cut you off.
CHUCK:
Thanks. I was actually wondering. Are there circumstances under which this doesn’t make sense? Because it seems like if you request, I guess HTML is just text and you may not be pulling a lot of extra data once you have the main layout there. But are there situations where you need to think about this just a little differently or where it doesn’t make sense?
JOE:
Sure, if you work for 37signals.
[Laughter]
ELLIOTT:
Yeah. They go the [inaudible] route, right? They request everything and they just leave the header intact, which does cut down on a lot of content. And in a lot of ways, it’s sort of more applicable. What they’re doing is they’re generating the content on the server and they’re posting that down as HTML. So, if your content is changing all the time, obviously you need the updated data rather than whatever it was you got at load time. If something changes while you’re using it, you want to get the latest updates. So, if you’re rendering there on the server, you actually need [inaudible] down. But if you’re just rendering the UI at the client side and rendering data in, then you can build it in that way. A lot of sites have problems with this. If you’ve got a site written in Rails, with Turbolinks or something, you [inaudible] all the server setup. But if your JavaScript doesn’t quite handle this properly, then yeah, you do need to look at it. It’s not bulletproof. It doesn’t work for everything. But it’s an interesting way to go forwards, I think.
JOE:
So Elliott, are you saying that the people at 37signals are crazy or just uneducated? [Laughter]
JOE:
I can’t tell.
ELLIOTT:
Those are my only two options for this answer. You’ve given me, this is a [inaudible]. I don’t [inaudible] right. No, I think they’re opinionated and they certainly know better than I do for their specific setup what makes the most sense. There are a lot of factors there, right, in terms of development and costs, the people that they have, the team that’s building whatever it is, any legacy stuff they’ve got lying around. So, I wouldn’t call them crazy to do things a certain way. Probably not the way I would do things, but I have never built that specific application. I don’t intimately know its needs. So, I don’t know. I don’t know.
CHUCK:
It sounds like though, that then you’re pulling down full HTML pages for each one, like the about page and stuff. Or are you just pulling down the relevant bits that need to change on wherever they’re at?
ELLIOTT:
I’m pulling down the whole HTML. The reason being, most of it is compressed using gzip. Gzipped that first load [inaudible]. Anything that is, I don’t have a detailed understanding of how gzip does its actual compression. But from what I can tell, any parts that are the same between two different sections or two different pages, or especially pieces that are used over and over, compress really well. And it’s all just text, and most of those are HTML tags. So, HTML tags are pretty ‘symbolifyable’. I don’t know. They probably compress HTML pretty well. So, the more pages you push down with that, probably the better compression ratio you’re going to get.
CHUCK:
Yeah.
ELLIOTT:
And what’s more is you don’t have to, we never actually got around to doing this, but you don’t have to load all the pages at once. You can load in a specific subset of the most visited pages. And then you can load in some other ones later on if you think you’re going to need them. Or, you can do it like a spider graph. You can get to, once they click on a certain page, you can then asynchronously load in a few other pages, preload them in. And we’re not talking about a huge amount of data. And all of this can be on a CDN anyway, so it doesn’t matter too much. But that’s the way it should be. That’s kind of the way it should be dealt with.
CHUCK:
Now, do you do the same kind of thing with JSON data off of APIs? So, you know that you have this service and they’re most likely going to need this data at some point. Do you pre-send that as well?
ELLIOTT:
I don’t. I don’t do that, no. There’s no reason why you couldn’t, although that might make sense to just pull down on page load on the client side, depending on how that’s going to work. Yeah, I guess you can preload that stuff. It depends on your use case. If they’re definitely going to need it, then you do pull it in. If they might need it, then you do. Otherwise, you just figure it out case by case. If not, if they can handle the load, then when they click on it, you can pull that data in.
The great part about this asynchronous system that we built was that it fell back really well. You request the URL when you get that actual page. So, the pages still exist in the wild. And if you reload on one of those pages, you get the original. So, it falls back. And the same is true of preloading the data in. If you preload it on page load, then the data is in there. But when they inevitably visit that URL, what you’re showing, that page section where you’re showing that data, you can make that request again and update what you have with the new information. So, there’s stale data and then an update rather than just a loading spinner. So, the update may be transparent. There may be no changes since you pulled it.
But I don’t know. That’s where I like to look at using something like Pusher or using a web socket or streaming it into them on the fly, because rather than make two GET requests, you just seed them with a certain amount of data and then just fill in everything that’s happened since then. Still quite a bit of effort in terms of actually implementing that. So, it comes down to whether or not you can [chuckles], whether or not you can do both with the same amount of [love] for whichever site you’re doing.
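[Editor’s note: a minimal sketch of the “stale data, then an update” behaviour described above — render whatever was preloaded, then re-request the same resource and patch the view. The cache, endpoint, and render function are hypothetical.]

```javascript
// Show stale data immediately rather than a loading spinner, then
// revalidate in the background. The update may be invisible if nothing
// has changed since the preload.
const cache = new Map();

async function showResource(url, render) {
  // 1. Render preloaded (possibly stale) data straight away if we have it.
  if (cache.has(url)) render(cache.get(url));

  // 2. Re-request the same resource and patch the view with fresh data.
  const fresh = await fetch(url).then((res) => res.json());
  cache.set(url, fresh);
  render(fresh);
}
```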
JOE:
So, Jamison was talking a little bit about a blog post called ‘The Need for Speed’.
JAMISON:
I’ve referenced this a few times on this show I think. It’s a really good presentation about performance. But there’s a section on responding optimistically to user interaction. And I thought that fit in really well with some of the performance things you were talking about where you immediately render data as soon as it’s input. You don’t wait for responses from the server. Or when you’re initially loading the site, you can load a skeleton and then load the rest in the background or things like that. Are there other techniques like that for performance with asynchronous UIs?
ELLIOTT:
Yeah. I guess, I’ve just clicked that link and I’ve been looking at this page now. I think I read it a little while ago. I’m not sure.
JAMISON:
It’s been around for a while. I think it’s a few years old.
ELLIOTT:
Yeah.
JAMISON:
But still some good writing on there.
ELLIOTT:
Yeah. For me, I think this is super-duper important. Speed is one thing, but perceived speed is actually what you’re dealing with. If you’re not doing any optimization then your perceived speed is pretty much the same as your actual speed, right? If you don’t cheat at all, then they’re not going to see it be any faster than it actually is or actually how long it takes to load in the thing. And there are heaps of things you can do.
You can load in, for example, low resolution versions of an image and then replace them with the high resolution images. I think Facebook used to do this, where if you were quickly flicking through a gallery, you would flick, flick, flick, flick, flick, and you’d start to see these really compressed, really low quality versions of the images that you’re looking at. And I think they would load them all in at one time, probably as a single image, and then move it around using JavaScript wizardry on the front end. But it meant that you had an idea of what you were looking at before the entire content came down. And the entire content might be a few hundred K, which, sitting here on my fiber optic connection in San Francisco, is very, very fast. But it’s not always that fast if you are in another country or on a bad connection, or just when you go through a tunnel on a train. You want to be able to flick through and have an idea of what you’re looking at so that you get the contextual information about something long before that second load comes in.
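[Editor’s note: a small sketch of the low-res-first trick. The data-full attribute is a convention invented for this example, not anything from a specific framework.]

```javascript
// Ship tiny placeholder images in the markup, then swap in the
// full-resolution versions as they finish downloading.
document.querySelectorAll('img[data-full]').forEach((img) => {
  const hiRes = new Image();
  hiRes.onload = () => { img.src = hiRes.src; }; // swap once it's ready
  hiRes.src = img.dataset.full;                  // start the real download
});
```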
In an ideal world, everything is super-fast and instant and you [inaudible] high-speed access everywhere. But if you build with the concept of slow and latent connections, if you build with perceived speed in mind, it raises all the boats. You get the advantage for everybody. So, even a high-speed connection, if they’re switching off a Wi-Fi connection, if they’re coming off that as they’re walking away from the house or something [inaudible]. I walk away from my house, my Wi-Fi connection is still connected but the data is not going back and forth. [Chuckles] It’s just stopped and the phone is like, “I don’t know what’s happening here.” You’re not actually connected to the internet. It takes a few seconds for that cellular connection to kick in.
So, while it’s doing that, that request can be sitting there in the background. If you build it correctly, you can wait for that connection to die and try it again or give the user a notification saying, “Hey, this one particular thing you were doing has stopped. It’s not working.” If you’re submitting a form on a website and you submit the form to another page as you’re walking away from this thing, the page just doesn’t work. And you’ve got to go back to your form. And hopefully, the browser’s got all of your data. You don’t know whether your request went away and if it did, if it will come back, or what. You don’t have real, you don’t have any options. You cannot just sit [with that] again. It’s not, this spinner’s not still spinning.
So, I think that’s sort of what you’re talking about. But what we’re looking at down the bottom is they loaded the [inaudible] header and logo and buttons and things that are always going to be the same before they load in the user data. Is that what you mean? The frame of the page loads in and then the data loads in?
JAMISON:
Sure. Yeah, I wasn’t really going for anything specific. But that’s one of the things.
ELLIOTT:
It’s a super nice trick and it’s pretty cool. And one of the great parts about this is you could put all of your UI code, all of your icons and CSS and HTML and layout stuff, you can put that on a CDN. So, you can put your framework files and the templates and everything up on CloudFront. It gets served from close to wherever they are. So, that’s pretty fast. You render that in and then you update another object. We do this in Ember. It’s built in. And say I have a box at the top right of my page that has a little icon [inaudible], you can load that part of the template in before you actually know who the user is, if you know he’s supposed to be on a page with the little icon at the top. So, you either have a little loading, or just a gray default icon. And then as soon as the request comes back with the user’s details, as soon as you know a little bit more about them, you just update that part of the page.
So, the location of it is already set. It’s obvious that something is about to happen, that there’s something still to come. You know where you are and you know where that’s going to be when it eventually has the data. And it’s super optimistic and it’s totally cheating. But for one thing, it means that you’re pulling in less information from the server. So, that slow connection or that slow JSON request that’s coming from an AWS server somewhere on the other side of the planet has to carry just the data itself and nothing to do with the presentation.
CHUCK:
So, one thing that I’ve run across with this a little bit is I’m working on an app and we’ve got Angular in there. And so, it shows the little handlebars, episode.number, episode.name. You know, eventually it gets the data and so then it goes and it inflates all that, puts all the data in the right place. So, the page loads fast but it’s got a bunch of data in there that’s not…
ELLIOTT:
Oh, so you’re saying that you actually have those variable names in your templates and the user sees them?
CHUCK:
Yeah.
ELLIOTT:
Yeah, see that’s super gross.
CHUCK:
Yeah.
ELLIOTT:
You want to avoid that. And you can, by putting a loaded flag on that thing. If the model is loaded, show those bits. Otherwise, just show some text that says loading. I think that’s not the right way to do it.
CHUCK:
Yeah, I agree. I just, I was trying to figure out a better way.
ELLIOTT:
We do it, I [inaudible] on the actual object. So, I think the object is a promise in Ember and you can check whether that promise has been resolved. So, you put a conditional around that part of your HTML on your template and you say, if user.loaded, I think it is, show the user information. But if not, you show the placeholder stuff. You can have it on the actual object itself where it has a default value that gets updated when it gets pulled in from the server. But it’s a template thing and you want that to come first. So, you don’t want to be showing [chuckles] double handlebars tags all through your HTML. That’s not ideal.
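[Editor’s note: a framework-agnostic sketch of the “loaded flag” idea, so raw template variables never reach the user. Ember exposes similar flags on records; the endpoint, element id, and field names below are invented.]

```javascript
// Render a placeholder until the record has loaded, then re-render with
// the real data. The /api/me endpoint and #user-box element are hypothetical.
function renderUser(el, user) {
  el.textContent = user.loaded
    ? `${user.name} (${user.email})`
    : 'Loading…'; // placeholder state while the data is on its way
}

const user = { loaded: false };
const box = document.querySelector('#user-box');
renderUser(box, user);                     // shows the placeholder first

fetch('/api/me')
  .then((res) => res.json())
  .then((data) => {
    Object.assign(user, data, { loaded: true });
    renderUser(box, user);                 // now shows the real data
  });
```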
CHUCK:
Yeah, I wasn’t impressed. But I was trying to figure out.
ELLIOTT:
That doesn’t just happen automatically, though. You have to start building your code in that way. You’ve got to start thinking about the separate states that each object can be in, whether or not you have the data, what you want to show when it’s not there. Otherwise, yeah, you just get placeholder bits for the thing. This is where your name will be, or lorem ipsum, which is even worse. So, it does take some more work. And I think the idea is to minimize how much more work you have to do to get that functionality, whether it’s free or whether you have to do a whole lot of grunt work to make that stuff happen.
The more layout or architectural or structural UI code you could bring in before any content comes in, I think it just makes the waiting less painful. It means that you can start to get an idea of where you’re going to need to look for the information that you want. So, your advert hasn’t come in yet, but you know that’s where it’s going to be. So, when you eventually need it, that’s where it’ll be. If you wait for the whole page to load in and then display it all at once, you’re just sitting there looking at that white screen. And I think it feels like it takes ten times as long. It’s just a blank, white screen. There’s no loading spinner or anything. It’s just like, “This is a page. It’s just not here yet.”
JOE:
Well, that shut everyone up. [Laughter]
JOE:
You told us.
ELLIOTT:
What did I say? I don’t know. I think this is important and I think it gets forgotten. I don’t want to sound too foreign here, but it gets forgotten by Americans and North Americans who live close to the servers that they use. And yeah, out on the second world, [chuckles] the bottom end of the Pacific, this is, it’s just something that we deal with on a day-to-day basis. I don’t know. Living here, each page just loads in fast and everyone’s on fiber and LTE on their phones. The world is a good place to be.
AJ:
Well, whenever I’m using my phone, it seems like it sucks because the connection can drop between a request, right? So, I can get half the page downloaded and then all of a sudden, the connection decides, “Blip.” It’s not there. And then I have to hit refresh.
ELLIOTT:
Right.
AJ:
And then it makes all 30 calls again. So, I understand that that’s a real pain. It’s actually interesting. That’s something that the Firefox OS guys seemed pretty adamant about when they spoke at OpenWest.
ELLIOTT:
Oh, really?
AJ:
They were talking about how Firefox OS is not for American developers. They want American developers to get into it because there are good developers here, but they’re like, “This is not for America.”
ELLIOTT:
[Laughs] Yeah.
AJ:
“This OS is for people that have different needs,” and they were saying how if you’re going to develop for Firefox OS you have to consider their bandwidth constraints and that kind of thing. And that’s something that it seems the traditional Rails type of framework doesn’t handle very well because it’s always like every single individual thing is its own resource so you make a bajillion requests.
ELLIOTT:
[Chuckles]
AJ:
And it seems like some APIs are moving towards this idea of when you make your GET request, it pulls down all the data you’re going to need for your session.
ELLIOTT:
Yeah.
AJ:
[Inaudible] when you do your updates, it updates atomically, this particular subset of your data.
But you do that GET and it’s like, boom, here’s everything.
ELLIOTT:
Yeah. Yeah, side load. You can load in other resources that are related to what you are pulling in as separate records that aren’t child records of what you’re pulling in, but as related objects that you’ll need. And it’s true. Yeah, you don’t want to saturate the connection with [chuckles] 16 requests running at the same time, because then you’ll start to hang the browser. And nothing is more blocking than hanging the browser. You can’t even change tabs or anything if you just start to overload the browser. So, you can’t do that either.
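[Editor’s note: an illustration of what a sideloaded payload can look like — one response carries the requested records plus the related records, so the client doesn’t fire a separate request per relationship. The shape and field names below are an invented example, loosely in the style Ember Data’s REST adapter expects.]

```javascript
// One GET, everything the view needs: posts plus the users and comments
// they reference, sideloaded alongside them.
const response = {
  posts: [
    { id: 1, title: 'Async UI', author: 10, comments: [100, 101] },
  ],
  // Related records come along in the same payload instead of
  // triggering follow-up requests.
  users: [
    { id: 10, name: 'Elliott' },
  ],
  comments: [
    { id: 100, post: 1, body: 'Nice.' },
    { id: 101, post: 1, body: 'Ship it.' },
  ],
};
```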
CHUCK:
When we talked to Steve Klabnik about APIs, it really did come across as: do what makes sense. So, if you have a request that needs to do a specific set of things, or a specific thing, then make an API endpoint for it. And so, if you want to side load all of that data, or you want to make your system break it up into separate requests, just make those decisions based on what makes the most sense to get those people there. And then I really like where this conversation has gone in the sense that yeah, the crappiest connection that I have to the internet is on my phone. So, push some data through the phone.
ELLIOTT:
Right.
CHUCK:
I was just doing some work over at my old house because we’re about to sell it. Our renters moved out. And yeah, I didn’t realize I lived in a dead zone over there. [Chuckles]
CHUCK:
And it’s 3G and it really sucked to be over there.
ELLIOTT:
Yeah.
CHUCK:
But you know. And so yeah, go find a place where you don’t have your LTE connection and give it a shot.
ELLIOTT:
It’s hard for people who live with good connections to really emotionally understand what it means to have another little blue dot. Because we don’t have LTE. We didn’t have it in England, in the city where I was living previously. And it doesn’t exist in New Zealand. And a lot of it is rural and you just get that blue dot and you’re like, “Well, I can’t use this thing anymore.” But you’re totally right in that mobile is just where this stuff happens, is where asynchronous really came to the forefront. And to me, iOS has been so incredibly good at handling asynchronicity, at handling variable connections, variably latent and untrustworthy connections to the internet.
So, if you think about pull to refresh, pull to refresh is a good example where, I found in my little Twitter app when I pull to refresh the whole page, it doesn’t kill the app while it’s pulling in stuff. It doesn’t do a big thing over the top of the page. It doesn’t load in all those tweets again. It just gets more stuff and puts it at the top, like a big queue, which is what I want. If I still want to read these tweets and load in some more at the same time, and that might take two seconds or a second or [chuckles] 20 seconds over there, then it just sits there in the background. And that’s what you want. This phone is incredibly smart and powerful and it has a good connection to the internet. And yet something can slow it down. And so, the idea is to take that something that can slow it down and run it basically in a different thread, like in the background, where you can’t see it and you’re not aware of it until it puts things to the front.
And you didn’t really use to be able to do that so well when everything was single-core CPUs on the phones. That was pretty bad. But these days, you don’t really have an excuse. And all Objective-C, all of the iOS apps that I used prioritize asynchronicity. They prioritize having the application on the phone and then sending the data in from the web, which is [chuckles], which is the way it should be. I’ve used a few web apps masquerading as native apps that they have these days. And it’s just not the same. Your content and your presentation all come down together. And they’re supposed to be [inaudible]. You know, it’s tricky. I think it’s smart to get all of the layout-related code and data and sized, push it to the client and have it there, and then just mess with JSON. Just push JSON back and forth. You can even then tell the client when there’s an update to the files that you need updated.
And I’ve seen some apps that could update some of their JavaScript on the fly, I think. There was, I can’t remember what it’s called. It was named after a dinosaur or something. But you could update the files that they were using as they were using them, using diff for your JavaScript. It blew my mind a little bit. But you know, I really liked it. I like the executable part of the app being local. And you just don’t get that when it’s synchronous, when every separate page is a separate resource and you’ve got to go away to Virginia to go and get it.
CHUCK:
Yeah, of course when making those requests for the JSON, doesn’t that have the same problem as requesting the HTML?
ELLIOTT:
Yeah, but that’s just data. The JSON is just data. It’s not presentational at all. So, you can just wait for that to come in.
CHUCK:
I see.
ELLIOTT:
And still have your original JSON there. But if you’re saying, okay, give me all of the new content for this page, suddenly the page you have is stale. So, you might as well gray it out or something like that. You can’t touch this data anymore because some of it may be wrong or old. And that means [inaudible].
JOE:
You know, you really make this a lot more visceral when you describe it the way that you did. Like, “I’d like to show you this page but I’ve got to go to Virginia in order to do that.”
ELLIOTT:
[Chuckles] Yeah, we forget, right? We forget. We forget that it’s going so far away.
CHUCK:
Oh, it’s just going through something on the airport Wi-Fi.
ELLIOTT:
Everybody always wanted to disappear this, in the same way that JavaScript disappears memory, where you’re just like, “Okay, well you’re in this magical environment where everything is fast and the speed of light doesn’t matter and it doesn’t really matter that you’re inside a Faraday cage and your internet’s not working.” But the real world doesn’t work like that. And we need to accept that and work around it in a way.
And in actual fact, the working around it makes the application that you build so much more robust and so much more physically nice to use. Because when these issues hurt, the best part is they’re non-deterministic. It may not even be your fault that your internet just suddenly went slow. Someone turns on the microwave and it kills your Wi-Fi connection. [Chuckles] Suddenly stuff’s not loading and it’s frustrating. You don’t want to be sitting there not being able to scroll what it is that you were already looking at.
Infinite scrolling is both a good example and a bad example of asynchronous interaction. It’s a good example because you never get to the bottom of the page because stuff just loads in the background. Usually it’s fast. If you get about halfway down the page, it’ll just load a little bit more content that you eventually get to. Bad of course…
AJ:
Side rant, side rant.
ELLIOTT:
Is you don’t know what page you’re on, yeah. Here it goes.
AJ:
Why do people put footers on ever-scroll pages? [Laughter]
AJ:
I want to get down to the contact us, but I can’t get there.
ELLIOTT:
Yeah. There’s a lot of not thinking it through that goes on with that stuff. So it’s, you cargo cult that. It’s not a thought-terminating cliché, but it’s like a solution that breaks everything. Oh, we’ll just put infinite scrolling on it. You have to use common sense for this stuff. This doesn’t do itself. Ember is not, well Angular is not a fix-everything solution that you just throw at your site and it works. You can’t do infinite scrolling with a footer. It’s dumb. It’s not perfect.
JAMISON:
So, what you’re calling for is a JavaScript framework that prevents you from doing infinite scrolling with a footer that’s sticky, right? [Chuckles] What we need.
ELLIOTT:
Is it… Have we got a pull request then? Is this feature an existing one? No, I think no.
JAMISON:
No, no. You got to make your own framework.
AJ:
Guys, guys, guys, guys. If Angular isn’t solving all your problems, you’re doing it wrong. [Laughter]
ELLIOTT:
You are, you’re totally. If you have a footer and that’s scrolling through, yeah, you are doing it wrong. There’s always a middle ground with this work. My favorite example for any of this pagination and infinite scrolling stuff is Amazon. And they know the latency cost more than anybody, because it literally costs them money. They don’t make money if you’re waiting around; waiting makes people leave. So, what I’ve noticed them doing is you still have paginated pages, one, two, three. But then when you get to that second page, it has three or four items in it and it loads the rest. So, rather than making a page two request and then waiting for the server. Of course, you can get around this all by having all of your pages just in memory and preloading page 11 when you’re at page 7. But for them, that was probably overkill and nobody goes past page 2 anyway.
So, what they do is they just load in enough items to have page 1 and half of page 2. So, you get to page 2 and the time it takes to load in the rest of page 2 and probably the first half of page 3 is enough time for you to sit there and look at the first four items on page 2 and decide whether they’re for you. So, infinite scrolling aside, you can have a button at the bottom that says get more tweets, or to get more content that has already been loaded, that’s loaded in the background. The data’s there, ready to go. You hit that button and it just pops down instantly, without a loading spinner.
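[Editor’s note: a sketch of the pattern just described — the “more” button renders items that were already fetched in the background, and the click kicks off the prefetch of the next batch. The endpoint, element ids, and paging scheme are invented.]

```javascript
// "Load more" without a spinner: render from memory, then quietly
// prefetch the batch after that.
let nextBatch = [];
let page = 2;

async function prefetch(pageNumber) {
  nextBatch = await fetch(`/api/items?page=${pageNumber}`)
    .then((res) => res.json());
}

prefetch(page); // start preloading page 2 while the user reads page 1

document.querySelector('#more').addEventListener('click', () => {
  // Render instantly from memory — no round trip on click.
  const list = document.querySelector('#items');
  nextBatch.forEach((item) => {
    const li = document.createElement('li');
    li.textContent = item.title;
    list.appendChild(li);
  });

  // Kick off the next prefetch for the following click.
  page += 1;
  prefetch(page);
});
```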
JOE:
Amen, amen.
ELLIOTT:
Which is perfect, which is the right way. Instagram did this with their JavaScript SDK. They had a one-click login button. And what I mean by one-click was that you click the button and you were logged in. You didn’t go to a pop-up page. You just were logged in. And the way they did that was the iframe hack. They load up an iframe with your session data in it and they change the URL in the iframe, detect that from the parent frame, and then decode that to get your login information and they know who you are. So, you click this login button, it just fires the callback instantly because it has all of your data and your avatar and stuff. And you’re just, you’re logged in as soon as your mouse is unclicked, on mouse up. It’s nuts. It was the coolest, most lovely interaction I’ve ever seen. And it just doesn’t happen often enough. I think they disabled it because it was a huge security flaw or something. But for the brief shining period when that worked, we had one-click instant login. And it just, it felt like the future. You felt like that’s how it should work.
It was never going to be that fast if you didn’t build it in another way. You could never get to instant without doing it in advance. You could never optimize away that connection cost, even with Google Fiber, even with two computers hardwired together. There’s still going to be some time after you click before you get the response you need. So, in order to get that time from 0.1 seconds down to 0 seconds, or 0.1…50 seconds down to 0, all the way to 0, you have to do it in advance. How long you do it in advance is another question, whether it’s going to take five seconds to do this or you need to do it five seconds in advance, or whether it’s pretty quick. But you can’t get to asynchronous by speeding up synchronous. I don’t think it works. Dead silence. This is the second one in this conversation.
JAMISON:
No, that’s a great quote. I was just thinking about that.
CHUCK:
Yeah.
JAMISON:
That’s the money quote, I think.
ELLIOTT:
This is what we did. This is what we did with Forge. We basically said, we’re going to make the fastest hosting system in the world. But we can’t, you can’t do it by just serving pages faster. You can’t take the traditional approach and optimize it to be next generation. It has to be, there’s some quote somewhere that says if it’s not an order of magnitude better, it might as well be worse. It’s the same. I’m not going to use it if it’s not an order of magnitude better. The only way you make it ten, a hundred, a thousand times faster is doing it a completely different way. Otherwise, you’re talking about quantum computing or god knows what to get those bits across faster.
Well, it’s 2015 or whatever it is. These computers are fast enough. And we’re smart enough to be able to do this stuff in the background. All we need to do is make it easy for us to build these things so that they’re asynchronous. And that’s why these frameworks are so great, because then I just don’t have to think about it. It’s engineered into the way I build an Ember app. And it’s engineered into the whole thing. And I don’t have to think about it. It makes it easier to do it asynchronously than synchronously. And then it’ll happen.
CHUCK:
This is really cool. I’m going to have to go back and listen to this again and just really wrap my head around it.
ELLIOTT:
I think it’s important. And I think it will shape the way that we do a lot of our development for the next little while. It may not even be obvious. Best case scenario is we don’t even notice that this stuff is happening. And the end user just doesn’t even care about how it’s actually working. We just, we want it to be exactly the same way as it is now, but infinitely faster. And the way that we do that is changing everything, but the end result is not, should not be obvious. And the end users don’t even notice when your stuff is really fast.
We did this on the Hammer for Mac site, HammerForMac.com. And it is fast. All the pages loaded fast. You can go through the docs, click all the links, with no lag time. It’s just the least commented-upon feature. It’s super disappointing. [Chuckles] It’s just not something that people notice. Every once in a while, someone’s like, “What the hell is happening? Why is it so fast?” But in actual fact, they expect it to be this fast. Anything short of instant is slow, I think, is how we should think about this stuff. Anything short of, it’s already there, because why am I waiting? And it’s so much more disappointing as computer [chuckles]…
AJ:
So, I want to take a moment to springboard off of that one comment. So, sometimes I feel like instant is too fast. Sometimes, having a 200 millisecond delay just makes me feel better as a person, you know?
ELLIOTT:
Fine, fine, absolutely. That’s fine. But fake it. Fake it so that the 200 milliseconds is a 200 somewhere in your code so that you can change it.
AJ:
Right.
ELLIOTT:
So that the second time it’s faster, or something like that.
CHUCK:
AJ’s worried about having too much money, too.
AJ:
No, no, no. But do you guys know what I mean? Do you know what I mean? Where the brain wants a little bit of time. It wants a little bit of time between the action and the reaction.
ELLIOTT:
Okay, sure.
AJ:
Because honestly, if you get something shorter than a tenth of a second you can’t tell which happened first. Have you ever had that problem where you’re watching TV and the audio and the video are out of sync by less than a tenth of a second?
ELLIOTT:
Oh, yeah.
AJ:
And it’s even worse than it being out of sync regularly, because you can’t tell which is happening first.
ELLIOTT:
Yeah.
AJ:
Because it takes more than a tenth of a second to process cause and effect.
ELLIOTT:
Okay. My argument for that one is usually the Facebook thing. Facebook did a study. And ever since I read this, I’ve been trying to find a link to it and I haven’t been able to. But they found, [sighs] and this goes against exactly what I said earlier, that the actual time that something takes and the latency of doing something is not as important as the consistency of it always taking the same amount of time. When you click on something, you want it always to take the same amount of time because it feels familiar. When you click on messages in Facebook, you want it to take 1.3 seconds. If the second time you come back and it takes 0.1 of a second, you’re like, “This is different. Something has changed.”
And this is what I don’t like about Turbolinks. Turbolinks, that first load of a new URL is always slow. It has to go and fetch from the server. The second time around it caches it and then updates some wrapping or whatever it does. But you want it to be the same each time you do it. And you really want to curate that experience and have it be the speed that you want it to be. If something takes half a second, but it takes half a second consistently and you know what to expect and it’s not too long, half a second’s not so bad, it’s ok because you’re like, “Oh yeah, I remember. I remember this takes half a second. I understand this interface emotionally. I understand that it’s doing something.” But if the second time it’s fast, you’re like, “Ah, what’s happening?”
So, it’s got to be fast and always fast, or it should be slow and consistently slow. And in that case, if something can be slow, what you should be doing is setting a minimum timeout on whatever it is. So, you’re talking about 200 milliseconds. It’s a good example. If you take that 200 milliseconds and say that the server response can be between 3 and 80 milliseconds to respond with this data that it needs, you could say, “Okay. Well, how about we have that minimum timeout be 60 milliseconds for this thing?” So, the slowest it’s ever going to be is that 80 or 90 or whatever it is, the slowest example. And then when it’s cached, the 3 millisecond thing, it’s going to drop it down to 60, maybe 50. So, every time it still takes a little bit of time. But the first one is just slightly, slightly slower. And every one after that is reasonably quick, like not too bad.
So, by making this asynchronous and by preloading things and doing it really smartly, you can engineer artificial delays in your application, in your code, that work with the interface. And they’re direct. And you know how long it can take. They’re not as variable by doing it asynchronously. You just gain more control of it. That 80 milliseconds is not dependent [chuckles] on what cellular connection you’re on or something. It’s more curated. You’ve got more control over it. Do you know what I mean?
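[Editor’s note: a minimal sketch of the “minimum timeout” idea — never resolve faster than a fixed floor, so a cached response and a cold response feel consistent. The 60 ms floor comes from Elliott’s example; the endpoint is hypothetical.]

```javascript
// Enforce a floor on perceived response time so a 3 ms cached hit and an
// 80 ms cold request land at roughly the same moment for the user.
const MIN_DELAY_MS = 60;

function withMinimumDelay(promise, minMs = MIN_DELAY_MS) {
  const floor = new Promise((resolve) => setTimeout(resolve, minMs));
  // Wait for both the real work and the artificial floor, return the result.
  return Promise.all([promise, floor]).then(([result]) => result);
}

// Usage: the UI update never lands before ~60 ms, however fast the cache is.
withMinimumDelay(fetch('/api/messages').then((res) => res.json()))
  .then((messages) => console.log(messages.length, 'messages'));
```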
AJ:
Yeah.
ELLIOTT:
You can fake this stuff.
AJ:
That makes a lot of sense.
ELLIOTT:
You can fake delays and I am all for it. You can lie, cheat, and steal to end users and you should. If it makes it a better interface and a more consistent flow, and it makes the computer really feel like you’ve done something really nice with it, then by all means, slow stuff down. Make stuff take a while. Use transforms and transitions and all these things to move stuff around so that things are logically visible to users. But don’t let the cellular connection be the [chuckles] the reason or the thing that you’re relying on to give you these delays. That’s nuts. That’s nuts. You’re putting your whole application and your user interface and what you’re building in the hands of a Telco. And I cannot think who [chuckles], I can’t think who would be worse [chuckles] than a Telco.
JAMISON:
So, I read Ilya Grigorik’s book, ‘High Performance Browser Networking’ a while ago. And the main takeaway was just how fantastically complex cellular networking is and how anything ever works all the time is a total mystery to me.
[Chuckles]
JAMISON:
So, I agree. It seems like it’s so variable that you need to have some certainty in your control, not in someone else’s.
CHUCK:
Alright, well we’ve been talking for about an hour. So, we should probably get to the picks. But before we do, I want to ask, are there any other critical bits or tricks or tips that you like that you want to give us before we do that?
AJ:
Or anything else that you want me to quote on the Horse AJ Twitter account?
ELLIOTT:
Oh, is this me? You’re asking me? I did see that.
CHUCK:
I’m going to tweet you.
ELLIOTT:
I tweeted [inaudible].
JAMISON:
I have one more question. We talked a little bit about some examples. But are there any other examples of sites or apps that do this really well that have really great asynchronous UIs?
ELLIOTT:
You’re putting me on the spot here. I’m going to have to [inaudible].
JAMISON:
It’s okay.
ELLIOTT:
[Laughs] I’m just trying to think of some off the top of my head. And not really. Anything that’s built on iOS is an interesting example of how asynchronicity is more built into that platform.
JAMISON:
You mean a native app?
ELLIOTT:
Yeah, a native app, a native app. Native apps have the understanding built in, whereas the web, or rather web apps, websites, have no concept that you might actually want more pages than just the single page. Websites without JavaScript just don’t have [inaudible]. But on iOS, it’s built with that in mind. And I think built with that, built with asynchronicity in mind is what we need to do. We need to build with this concept, just as such a basic fundamental understanding of how the world works and how the wire works. I’m trying to think of things that are asynchronous.
I work here at Dropbox and I think Dropbox is an interesting example. If you save a file, it goes up in the background. You don’t want to have to wait for the file to be saved. You don’t want to upload it to a website. You want it just to happen in the background with a little progress bar. Anything with a progress bar, I really like, because it says there’s something happening. We’re still updating. You’re going to be aware of it if there’s an error and something’s moving. I don’t have any off the top of my head that do this asynchronicity stuff really well. But just every major site is starting to do this a little bit better. And I like it. And it is starting to happen really well.
CHUCK:
AJ, do you want to start us with the picks?
AJ:
Yeah, I do. Okay, so I haven’t been as diligent in paying attention to awesomeness lately. But there is something I don’t think I’ve picked before. Bookshelf.js. Have I mentioned that before? No?
Okay, great.
ELLIOTT:
I haven’t.
AJ:
So, Bookshelf.js is an ORM for SQLite, MySQL, PostgreSQL, and MariaDB. So, when you realize that you’re finally over this whole NoSQL crap and you want to get back to what works, you’ve got Bookshelf. Actually, it’s a pretty good library. I haven’t used too many of the other ORMs that have existed for Node over the years. But Bookshelf’s documentation is very clear. The maintainer is just absolutely phenomenal about answering any issues that you post in either of the two repositories. Bookshelf is the abstraction layer and then Knex is the layer that actually interfaces with the core underlying modules, the pg and sqlite3 drivers and so forth. And it’s nice sometimes to have SQL. I know that we’ll probably get some very negative comments on this episode from me saying that. But SQL does work sometimes. So, huzzah! [Laughter]
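For anyone who hasn’t seen Bookshelf, here is a minimal sketch of what it looks like on top of Knex. The table and column names are invented, and it assumes the sqlite3 driver is installed alongside knex and bookshelf.

```js
// Knex handles the connection and query building; Bookshelf layers
// models and relations on top of it.
var knex = require('knex')({
  client: 'sqlite3',
  connection: { filename: './dev.sqlite3' }
});
var bookshelf = require('bookshelf')(knex);

// Hypothetical model mapped to a "users" table.
var User = bookshelf.Model.extend({
  tableName: 'users'
});

// Fetch a row by primary key and read a column off the model.
new User({ id: 1 }).fetch().then(function (user) {
  console.log(user.get('name'));
});
```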
CHUCK:
Alright, Jamison what are your picks?
JAMISON:
I just have the one pick. It’s a free online book about the Web Audio API. I’ve been getting into it a lot lately, just playing around with it more. And the actual API itself isn’t crazy complex. But the whole background of digital signal processing and the physics of how sound works, that stuff is a little harder for me to pick up. And this book does a pretty good job of walking you through how to use the Web Audio API, but also explaining some of those underlying concepts so that you can do cooler stuff with it than copy-pasting someone else’s code. That’s my only pick.
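As a small taste of the API the book covers, here is a minimal sketch that plays a one-second 440 Hz tone through a standard Web Audio node graph; the frequency and volume are arbitrary.

```js
// Build a tiny audio graph: oscillator -> gain -> speakers.
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();

var oscillator = context.createOscillator();
var gain = context.createGain();

oscillator.type = 'sine';
oscillator.frequency.value = 440; // A4
gain.gain.value = 0.2;            // keep the volume gentle

oscillator.connect(gain);
gain.connect(context.destination);

oscillator.start();
oscillator.stop(context.currentTime + 1); // one second of tone
```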
CHUCK:
After AJ’s picks, I was hoping you were going to pick NoSQL.
[Chuckles]
JAMISON:
Just assume I have a standing pick for the opposite of whatever AJ picked.
CHUCK:
[Chuckles] Joe, what are your picks?
JOE:
Alright. So, I’ve got a few picks here. The first one is a book that I picked up in an Amazon eBook sale. It’s called ‘Off to Be the Wizard’ and it reminds me a ton of a book I’ve picked in the past called ‘Ready Player One’. It’s about a guy who is browsing around on the internet, hacks into some server, and finds a text file that describes everything in life, and he realizes that life is actually just a computer simulation. And because he has write access to the text file, he can now control everything. And so, he’s like a wizard. And so, for some reason, he’s running from the law and he teleports himself back to medieval England. And it’s a little bit of ‘A Connecticut Yankee in King Arthur’s Court’ but done with a real modern-day twist, where he needs to have his smartphone on him in order to work all this magic, because that’s how he edits this text file. I’ve been reading it for a little while and it’s been really quite entertaining. So, I’m going to pick that. It’s only $4 in eBook form on Amazon.
And then because I picked that and it reminded me so much of ‘Ready Player One’, I want to pick again ‘Ready Player One’. But this time, not just the book. I’m picking the audio book version that was read by Wil Wheaton because apparently Wil Wheaton is an amazing narrator. He’s narrated a few books and he’s supposedly just fantastic. So, my second pick is ‘Ready Player One’ read by Wil Wheaton.
And then my last pick is a family party game called Idiom Addict. That’s idiom like the language construct and then addict like you’re addicted to something. It’s called Idiom Addict. And it’s a really fun game with a bunch of cards that have idioms on them described using synonyms. So, ‘early to bed, early to rise’ might be described as ‘first to get up, first to go down.’ Somebody reads the card, and then everybody else has to try to figure out the idiom within a time limit. And it’s actually really fun. I’ve played a bunch of times with friends and family. I just had a great time playing that game. So, that’s my third pick, Idiom Addict.
CHUCK:
That does sound like fun.
JOE:
Yeah, it’s really fun.
CHUCK:
Awesome. So, last week I picked ‘The Miracle Morning’ and I’ve been sticking with it. And it’s been terrific. I know I mentioned it and then said I’d report back in. And so far, it really is making a difference for me every morning. So, I’m just going to let you know that it’s still awesome. So, go check it out. And yeah, that’s all I really got this week. Elliott, what are your picks?
ELLIOTT:
Has anyone picked Swift? Can I pick Swift? I know it’s not JavaScript, but can I pick Swift?
JAMISON:
I don’t think anyone has.
ELLIOTT:
I pick Swift. That’s my number one pick. I like it. I’ve been using it. It’s a lot like JavaScript. It’s really nice to write. It’s like writing Objective-C without having to do all the Objective-C stuff. You still have to deal with Cocoa. And if you’re on the Mac, that’s just as dumb as it’s ever been. But the playgrounds are nice. Xcode 6 is fine. It’s cool. I like it. I really like it. I ran into some amazing segmentation fault bugs that you only get when you have to archive something. So, I had fun with that.
But I honestly think that Swift is like JavaScript for iOS and Mac developers. It’s so similar and it feels so similar that I think it’s going to be nice to use for a lot of people who’ve only had experience in JavaScript. Of course, the Objective-C-only people are tearing their hair out that the single language they can write is now deprecated and they’ll never need it again. But yeah, give it a try. The playgrounds are quite fun. You can sit and write code and see it as it’s evaluated. That’s my first pick.
My second pick would have to be Framer. Does anyone use Framer? Do you guys use Framer?
JAMISON:
I do not even know what it is.
ELLIOTT:
Framer.js. It’s a JavaScript library that gives you a set of fundamentals and objects for doing prototyping. So, layers and states for animations, and click events, and things like that. There’s some new stuff coming out for it that I’ve heard rumors about that’s going to make it pretty amazing if you’re doing any UI playing, if you’re messing around on interfaces and prototyping stuff out. It’s a really nice way to get from zero to a functional, working demo. I like it a lot. So, Swift and Framer are my two picks.
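For anyone who hasn’t tried it, here is a rough sketch of what a Framer prototype can look like, assuming the classic Framer.js globals (Layer, Events) from around this era; the size, color, and animation values are arbitrary.

```js
// A tiny prototype: one layer that animates when clicked.
var box = new Layer({
  x: 40, y: 40,
  width: 200, height: 200,
  backgroundColor: "#28affa"
});

box.on(Events.Click, function () {
  box.animate({
    properties: { x: box.x + 120, rotation: box.rotation + 90 },
    time: 0.3
  });
});
```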
CHUCK:
Very nice. Well, thanks for coming, Elliott. We really appreciate you taking the time to talk to us about this stuff.
ELLIOTT:
Yeah, thanks for having me, everybody.
JAMISON:
Yeah, it was great.
JOE:
Yeah, thanks.
CHUCK:
We’ll go ahead and wrap up and we’ll catch everybody next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]
[Do you wish you could be part of the discussion on JavaScript Jabber? Do you have a burning question for one of our guests? Now you can join the action at our membership forum. You can sign up at JavaScriptJabber.com/jabber and there you can join discussions with the regular panelists and our guests.]