DAVID:
Well, my mom doesn’t remember Woodstock, which means she was there. [Laughter]
ARA:
Oh, that’s hilarious.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[This podcast is sponsored by New Relic. To track and optimize your application performance, go to RubyRogues.com/NewRelic.]
[This episode is brought to you by WAZA, Heroku’s one day celebration of art and technique. Join Matz, Aaron Patterson, and more on February 28th in San Francisco. Use exclusive code ruby-rogues-13 for $50 off registration at WAZA.Heroku.com.]
CHUCK:
Hey, everybody and welcome to Episode 94 of the Ruby Rogues podcast. This week on our panel, we have James Edward Gray.
JAMES:
Hello everyone.
CHUCK:
Avdi Grimm.
AVDI:
Hey from Pennsylvania.
CHUCK:
David Brady.
DAVID:
Hey everybody. I’m really excited to be here today because I’ve been stalking Ara for about seven years now. And the closest I’ve come is I filled in for him at MountainWest RubyConf last year when he couldn’t make it to the conference. I took his speaker slot.
CHUCK:
So, the closest you’ve ever gotten is him not being there.
DAVID:
Yes. [Laughter]
CHUCK:
Alright. We also have Katrina Owen.
KATRINA:
Hi, I’m very excited to be here today because I’ve missed the last two episodes due to being stuck in a time travel paradox.
CHUCK:
Wow!
JAMES:
Ouch!
CHUCK:
I’m Charles Max Wood from DevChat.tv. You have one week left to sign up for Rails Ramp Up. And we have a special guest. That is Ara T. Howard. Is that how you want to be introduced?
ARA:
That will do just fine.
CHUCK:
Alright. Since you haven’t been on the show before, do you want to tell people who you are?
ARA:
I am Ara T. Howard. I’m the CTO of CodeForPeople.com, my first consulting company, which is and has been a partner of Dojo4.com. We are a software and creative agency in Boulder, Colorado.
CHUCK:
Awesome, how long have you been doing Ruby, Ara?
JAMES:
A looooong time. [Laughter]
ARA:
Yeah. I forget what RubyConf I was at, one of the ones in Denver here I think, when there were two or three people that raised their hands that were getting paid to be full-time Rubyists, and I was one of those people. I was definitely one of a handful of people to be doing it professionally. I was actually a Research Associate with the Cooperative Institute for Research in Aero Sciences, which is actually in Boulder here, working at NOAA in the Forecast Systems Laboratory, which is a huge research facility here. There are thousands of people that work there. And I introduced Ruby to that building after swearing at trying to install -- oh, what was it? libwww, ‘lib dub-dub-dub’, for Perl -- trying to do some network stuff. I unpacked the Ruby tar.gz, did configure, make, make install, and started doing a bunch of network stuff because it came with good HTTP support out of the box. And that was, I want to say, pre-1.4 something, something like that.
DAVID:
Wow!
CHUCK:
Wow!
DAVID:
We were talking last week in the after show about you and we were talking about what version of Ruby you had started on. And I think we had guessed maybe 1.6 or 1.5. So, yeah. We lose. [Laughter]
ARA:
No, it was real -- I was checking it out from SVN, I think, at the time. Or man, it might have even been CVS, now that I think about it. But yeah, I pulled up the docs -- the docs for the standard library are quite good -- and just started doing my daily tasks in it at NOAA and I never looked back. Just basically one day, switched 100% from C++, and FORTRAN, and Java, and Perl, and all that crap and just basically started doing everything in Ruby, and occasionally, when I needed to, dropping back into C. But I don’t think I’ve ever programmed anything in anger in other languages except where necessary since then, and that was 12 years ago or something like that. It’s hard for me to remember.
CHUCK:
So basically, what you’re saying is Matz has been doing Ruby for 20-something years and you’ve been doing it for 25 plus something years. [Laughter]
ARA:
Something like that, yeah. It was small. IRC was small back then, let’s just put it that way. [Crosstalk]
DAVID:
IRC was IR. They hadn’t even got to the C yet. [Laughter]
JAMES:
To put some perspective on it, I think Avdi and I have seniority around here. And I’m pretty sure both of us were learning Ruby from Ara T. Howard way back when. [Laughter]
AVDI:
Yeah. Ara, I’m super excited about this episode because Ara is one of my all-time Ruby heroes.
JAMES:
Agreed.
AVDI:
I’ve been, when I look over my history of using Ruby and I look at whose code I’ve been using apart from Ruby itself, the one name that comes up over and over, more than anybody else, is Ara, because he has really so many just generally useful little gems that I just keep coming back to. So many projects will have various task-specific gems, but then they’ll have a few of Ara’s gems because they’re always useful. Things like the open4 gem, or the fattr gem -- how do you pronounce that? -- or main for command line utilities, and it just goes on and on and on: arrayfields...
JAMES:
Tagz.
AVDI:
Yeah. I don’t know if I’ve used that one but I think I’m aware of it.
JAMES:
It was like, what was that one? Was it Markaby?
ARA:
Yeah, same concept. Just a DSL for creating HTML; it takes a slightly different -- it’s similar to Markaby. The main difference with Tagz is just that the method naming strategy makes the DSL unambiguous. So, it’s more useful as a mixin and it doesn’t pollute your namespace, so you can always generate tags; a p_ tag, for example, doesn’t collide with a method name. But yeah, very similar. And we use that one a lot internally, just like in presenters. We don’t use any helpers at all in our Rails apps, and we have super, super thin controllers, very fat presenter and conductor objects. So, we do quite a bit of programmatic HTML generation, just keeping the ERB views super, super clean and light. We do use Tagz. My favorite of my gems -- and I have, I don’t even know, I think I have more than 150 at this point -- my favorite one that I just use all over the place is Map, which is less important than it used to be, now that hashes are ordered. One of my big disappointments, actually: the improvements to hash in 1.9.3 were not significant enough. Map is, at a gross level, like hash with indifferent access except much better. HashWithIndifferentAccess kind of breaks down when you start inheriting and really using it heavily. It’s also a better implementation of that. It’s also ordered, but it’s also a tree structure. So, it supports nested set operations. So for example, set deeply, nested, key to a value and it will autovivify on the way down, right? And we use that because -- well, the reasons that I use it so heavily are, one, I spent way too much of my life with somebody’s stupid option parsing code that’s not string/symbol indifferent. It’s like, “Why? Why? I’m passing that option in.” And of course, you’re not, because you pulled it in from a config file and it’s very fashionable to use symbols, which is a very sad side effect of the Rails boom.
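[Show notes: a rough plain-Ruby sketch of the two Map behaviors Ara just described -- string/symbol indifference and autovivification of nested keys. This only illustrates the idea; it is not the Map gem’s actual API, and the option names are made up.]

    # A hash whose missing keys spring into existence as nested hashes (autovivification).
    def autoviv
      Hash.new { |hash, key| hash[key] = autoviv }
    end

    config = autoviv
    config['smtp']['port'] = 25        # the intermediate 'smtp' key is created on the way down
    config['smtp']['host'] = 'localhost'

    # Indifferent lookup: try the string form, then the symbol form.
    def opt(options, key)
      options.key?(key.to_s) ? options[key.to_s] : options[key.to_sym]
    end

    opt({ 'verbose' => true }, :verbose)   #=> true
    opt({ verbose: true }, 'verbose')      #=> true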
AVDI:
Indeed.
ARA:
Yeah. And so, we just have all this code that’s just like, your config is always coming in from outside. Strings -- you’ve got to type one more character, whatever. So, Map is indifferent that way. But the other reason we use it a lot is, I’m a big fan of -- our controllers in our Rails applications at Dojo4, I shouldn’t say never, but almost never pass models to the view. I like to be able to say to frontend guys, “You cannot do an N+1 query.” So the data is prepared, like you have a data structure. And we have a presenter/conductor library that we use that I wrote called ‘DAO’. Sounds esoteric; it stands for Data Access Objects, a very old data pattern. It’s been around forever. And one of the fundamental problems of passing pure data to views is just that you end up with a lot of -- say it’s a hash, pretend it’s a hash. If you have deeply nested data, a tree of data, you end up with a lot of, “If the hash has this key, then you can index into it the next level.” Anyway, Map supports that: those kinds of retrievals of deeply nested keys that don’t go boom with a nil object error. So, it’s the backbone of the data structures that the DAO library uses for the presenter and conductor objects that it implements.
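[Show notes: and a sketch of the “retrieve deeply nested keys without going boom” idea -- again plain Ruby standing in for what Map and DAO actually provide; the payload shape is invented.]

    def deep_get(data, *keys)
      keys.reduce(data) do |node, key|
        node.respond_to?(:[]) ? node[key] : nil
      end
    end

    payload = { 'user' => { 'address' => { 'city' => 'Boulder' } } }

    deep_get(payload, 'user', 'address', 'city')     #=> "Boulder"
    deep_get(payload, 'user', 'phone', 'area_code')  #=> nil, instead of a NoMethodError on nil
    # (modern Rubies ship Hash#dig, which does essentially this)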
JAMES:
So, if you guys haven’t figured it out from that conversation right there, about the best way in the world to level up 16 levels in Ruby is to go read any one of Ara’s libraries. Just crack it open and start reading through it. I swear, every awesome system programming trick I know of, I learned from reading Slave and stuff like that, which are libraries he didn’t even talk about just there.
ARA:
Yes. Old school Unix stuff.
JAMES:
Old school Unix stuff, that’s right. You do a lot of that kind of file locking and process communication and stuff like that. Talk to us about RQ, I remember that project from a long time ago and it was really fascinating.
ARA:
Yeah, it’s funny. Actually, Jeremy Hinegardner, @copiousfreetime on Twitter, is helping me out with a little consulting work. So, Dojo4 right now, we do mostly web stuff. We’re building tablet applications, iOS applications, Rails apps, of course. But we’re not doing a lot of low level system stuff anymore which, you know, my background is in supercomputing. I helped build what was, at that time, the biggest supercomputer in the world, which is Jet, when I was at NOAA. I helped build some of the systems for that, ported a couple of weather models to that system, which was basically writing lots of Ruby that programmatically wrote lots of FORTRAN and C -- code generation stuff. And I switched groups after a while and I was at the National Geophysical Data Center, which is the nighttime lights group. If you’ve ever seen a picture of the world at night from space -- everybody’s seen those -- I made that. I can assert that because there’s only two developers working in that group. There’s only one data set. And yeah, it was really fun stuff. Heavy duty data processing. I mean, big data is fashionable now, but that word didn’t even exist then. You certainly couldn’t provision EC2 nodes on AWS at the time we were building the compute clusters for NGDC. And so, the system administrators, IT group, genius guys that they are, had said, “You guys can’t have…” It was Cray supercomputers at that time and SGI supercomputers. “You can’t have that anymore. We’re going to buy a bunch of commodity Linux boxes and you can use those.” But because of the draconian security restrictions that they had, there was actually no way to program them. So they were like, “Here’s 50 boxes, coordinate them.” And so, the research group that I was working for at that time -- primarily scientists, not developers -- they were SSH’ing in and running lots of jobs, right? It was just insanity. And so, we looked at, how do you coordinate these? At that time -- this may sound funny to some young developers -- there weren’t things like Redis and queuing systems that you could just pull off the shelf and use. And even if there had been, the security policies were quite strict in the government. So, just opening a port is non-trivial to do. So, we looked at a bunch of solutions. I forget what the -- SGE, Sun Grid Engine; they called it Grid Computing back then. I don’t know why that went out of fashion. But anyway, for coordinating these nodes, we had a bunch of fundamental pieces of code that we needed to write. And the first one was a queuing system: put a job into a queue and have someone run it. So, RQ, which stands for Ruby Queue, was built on some really good pieces of software, some of the best Ruby gems out there, and C code. It uses SQLite under the hood to manage its queue. And in our configuration, that queue actually lived on NFS. And although it sounds insane, the architecture is a poll architecture, so every node simply works as fast as it can to read from the database and atomically acquire -- mark -- a job as being run. And so, RQ is basically a pretty robust command line tool for applications to be able to submit jobs and manage an NFS-hosted queue. And that system -- right now, I’m porting it to a new rev, an enterprise version, and from 32-bit to 64-bit -- but that system has been running 24/7, unmanned, for nearly eight years. So, the system uses RQ.
There’s a bunch of nodes that are basically pulling down a classified data feed, a satellite feed. Tons of data is coming down off these DMSP satellites, and then there’s a bunch of worker nodes that are processing the data. It’s quite a sophisticated processing pipeline. RQ is one of the backbones of it; the other one is called DirWatch, which is -- something lands in a directory and it notices it -- and it’s also built on SQLite, so it can survive a reboot. It’s built on RQ. And you know, we tested that system very, very heavily, as in walk into the room and power it off and have it come back up. There used to be a group of five people running that satellite ingest system. And now, it just has sat there and run untouched 100% of the time for eight years.
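[Show notes: the core trick in a poll-based queue like the one Ara describes is atomically claiming the next pending job. Here is a rough sketch with the sqlite3 gem; the schema, table names, and simplifications (no NFS-aware locking, no retries) are ours, not RQ’s actual implementation.]

    require 'sqlite3'   # gem install sqlite3

    db = SQLite3::Database.new('queue.db')
    db.busy_timeout = 5_000
    db.execute <<~SQL
      CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        command TEXT,
        state TEXT DEFAULT 'pending',
        runner TEXT
      )
    SQL

    # Atomically claim the oldest pending job: the exclusive transaction guarantees
    # that two workers polling at once cannot both mark the same row as running.
    def claim_next_job(db, hostname)
      db.transaction(:exclusive) do
        row = db.get_first_row("SELECT id, command FROM jobs WHERE state = 'pending' ORDER BY id LIMIT 1")
        if row
          id, command = row
          db.execute("UPDATE jobs SET state = 'running', runner = ? WHERE id = ?", [hostname, id])
          return [id, command]
        end
      end
      nil
    end

    db.execute("INSERT INTO jobs (command) VALUES (?)", ['echo hello'])
    p claim_next_job(db, 'worker-1')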
AVDI:
That’s crazy. So Ara, you’ve just described all this infrastructure that you built out for image processing and satellite image stuff. So, I was getting into Ruby in 2005, and that’s when Hurricane Katrina hit. And I wonder if you could talk for a little bit -- you have worked on arguably the coolest Ruby project ever. Could you talk about how you used all that processing power for the power of good?
ARA:
Well, brief aside, one nice thing, if you’re into downloading MP3’s and you have a really heavy duty networking infrastructure... [Laughter]
ARA:
I’ll just stop that right there. [Laughter]
ARA:
So, it wasn’t all good.
JAMES:
[Laughs] Oh, this is the ‘how we used it for evil’ story. Got it!
DAVID:
Did a lot of Napster back in the day.
ARA:
Yeah, some of my -- if you look for hype.rb or mp3scrape, there’s some fun code that I released when I was working for the government. In any case, yeah. So basically, when I was working at NGDC, there were two main systems that we had. One is the smaller cluster that does the near-real-time satellite data processing. And that’s less image processing and more about collating satellite data and geolocating it and adding various bits of post-processing to it to clean it up. Technically, they are images, but they’re raw images. So, not normally what people would consider image processing. And then, there was another compute cluster called CF. The first word is Cluster. I’ll let you guess what the other word is. [Laughter]
ARA:
And that cluster is the research cluster. And so, that group uses that cluster to basically take a bunch of data and submit big jobs over it to produce images. At the end of the day, it’s an image. Actually, if you Google my name and Linux Journal, we wrote an article about this, because the way the data explodes during the intermediate steps is really quite remarkable. And so, that particular article was about a change image. So, it’s an image of the United States and the change is represented by, like, the R channel. So, cities that get bigger are redder. So, it’s an RGB image of the United States. It looks pretty simple, just showing population growth over a number of years -- I think five years, three years, something like that. And so, the kinds of jobs that we would submit would basically take the satellite data and do all kinds of crazy operations over it, like trying to find cloud-free stuff, detecting moonlit cloud tops, distinguishing moonlit cloud tops from actual lights. So, moonlit cloud tops, in the thermal band, are colder than lights on the ground, so you can distinguish them. All kinds of processing like that.
DAVID:
So, you’re tracking civilization via light pollution, essentially.
ARA:
Yeah. It’s funny, you think about it. People, they think satellites -- satellites are amazing, right? They can see the color of the hair on the back of your neck, but only if you tell them where to go, because the datasets they collect are so large. It’s not like they’re continuously scanning the entire earth. And so, for most high quality satellites, you need to know a priori where to collect data. Like, they send out instructions, “Collect data over Iraq tomorrow.” The DMSP satellites continuously collect data. They’re polar orbiting. They orbit our world, I think, 14 times a day, something like that. And so, they’re continually collecting data and they can detect nighttime lights, assuming they’re not covered by clouds, which is actually quite a big factor. But when you think about it, as sophisticated as remote sensing is -- and it’s actually just recently changed with the launch of the VIIRS instrument -- there’s really no data set that measures man. It’s kind of a crazy idea. Of everything that we measure, we can distinguish a boreal forest from tundra. But we can’t detect man, because what’s the signal for man? [Laughter]
ARA:
Yeah, you could approximate it with, like, pavement or something. But light at night, though -- that uniquely identifies human civilization on the planet. So, it really is a linear proxy for a lot of human activity. Research has been done to show that light has a linear correlation with GDP, right? So, you can get the numbers on how the economy’s doing in India two years after the fact, or how population growth is occurring right after the census. But you can get it in real time by just using light as a proxy for man. So, it’s quite a unique data set. Anyway, but specific to your question about Hurricane Katrina, it was a similar process. We actually did change images over Hurricane Katrina. At that time we were -- it’s funny if you think about who was in the White House. We had newsgroups and everybody asking us for these images because there was a power outage after Hurricane Katrina. How do you get information out of an area where the power’s out? You don’t.
DAVID:
Light pollution!
ARA:
Yeah. Nothing comes out of an area where the power is just devastated: not phone, not computer. And so, the only way -- and this is abnormal, because usually after a storm it’s still covered by clouds so you can’t see it. But in Katrina’s case, the storm came in and went out, and we could see the lights. And so, we were able to monitor the extent of the power outage, which was significant, and then basically update it every day. And if you Google Katrina DMSP and my name, those loops are still up on the government website, of course. They never change anything. So, we were actually providing those reports initially to, like, news offices because they were right on it, and this says something about the administration at that time. But three days later, they called up, “Gee, can you tell us the extent of the damage of the storm?” It’s like, “Yes, it’s been on our website for 72 hours. You can get it like everyone else.” But they were using that to monitor the extent of the storm, and we were doing some of that processing. A lot of the heavy lifting, of course, is in C or IDL, Interactive Data Language, or FORTRAN. But all of the pipeline, all the orchestration, was done by Ruby, and some of the low level stuff too. So, I had a few C extensions. But we were also doing some crazy stuff with mmap, basically, which -- for those of us old timers, that was a very nice Ruby wrapper over the Unix memory mapping code that Guy Decoux -- ‘Ghee’, I guess, if you’re French; rest in peace -- wrote, and we were using his code to basically map in huge images. And so, for those of you that don’t know how memory mapping works: when you map in an image, it just gives you access to a file as if it’s in memory, and so you just address it. But the operating system manages reading it in. So, say you have to do a kind of image processing where you have an image that’s huge, like 64 gigs or whatever, and you only need to touch parts of it. You might have some algorithm that’s seeking around or whatever. Naively, you would just read the whole thing into memory. But with mmap, you can basically just say, “Okay, I virtually have access to the whole thing at this point in memory,” and then just do your manipulations, and the operating system will manage paging in and paging out the parts of the file that you would have had to seek to or read. So, we were doing some image processing like that actually in Ruby. Not because Ruby’s fast per se, but because that strategy of only touching the parts of the image that we needed to -- in this case, little parts of the headers of these records -- was faster than reading the whole thing in C and then proceeding to process this giant in-memory object. That’s a common pattern that you see. When you have a language that provides higher level abstractions, you can try trickier algorithms, like, “Why wouldn’t I spawn up eight processes that use IPC to talk to each other and that won’t let the children be zombied?” Because they have a sophisticated methodology of the parent having a heartbeat, the child having a heartbeat to the parent. Things that yes, you could do in C, but you’d fall on your sword before you finished it. The complexity is just too high. So, I think that’s something that’s generally overlooked in scientific or high performance computing: high performance computing requires high levels of abstraction, and that’s what makes languages like Ruby or even Python good. It’s not that they’re fast.
It’s that you can have high levels of abstraction.
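[Show notes: the mmap bindings Ara mentions were Guy Decoux’s; as a rough stand-in, here is the “only touch the bytes you need” idea using nothing but plain IO. The record layout is invented for the example.]

    RECORD_SIZE = 4096   # hypothetical fixed-size records in a huge data file
    HEADER_SIZE = 64     # only the header of each record is interesting to us

    File.open('giant_swath.dat', 'rb') do |file|
      record_count = file.size / RECORD_SIZE
      record_count.times do |i|
        file.seek(i * RECORD_SIZE)        # jump straight to the record
        header = file.read(HEADER_SIZE)   # read just the header, never the payload
        # ... inspect the header and decide whether this record matters at all ...
      end
    end
    # With real mmap, the OS pages in only the regions you actually touch; the effect is similar.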
JAMES:
So, that’s really interesting what you just said. Because the whole time I’ve been sitting here, listening to you talk. And I’ve been thinking, you used Ruby in a situation where you needed robust code that ran forever. You used Ruby in a situation where you were processing massive amounts of data. You used Ruby in all of these things that I’m pretty sure if we said it to most Ruby programmers, they would tell you, “Oh! Ruby’s a bad choice for that.”
AVDI:
Yeah. Or, “Didn’t anybody tell you that Ruby doesn’t scale?”
JAMES:
Yeah. Like, didn’t you just get the memo, or what? [Laughter]
DAVID:
We all got the memo.
CHUCK:
Well, the other thing is that I’ve actually worked on apps that, in one way or another, didn’t scale well in Ruby. And sometimes, it was limitations of Rails or Ruby or whatever. But sometimes, it’s my fault. So yeah, I’m really curious to know how you make it that robust.
ARA:
Yeah. I mean, it’s just architecture at the end of the day, right? For example, there are fundamental principles of systems, of trying to make large scale systems, where it doesn’t matter what language you’re working in. It’s just that when you’re designing those systems and you’re trying to implement some of those important paradigms, a lot of times developers give up because they’re too hard to implement. We’ll take RQ for example. It has this queue, right? Queue of jobs, and I wanted it to be durable across reboots. In some systems -- not so much now, but back then -- there were systems that used memory-mapped queues and various things where, when the machine would crash, they would come up and it would be corrupt, right? Even MySQL at that time. That was back in the day where, with InnoDB, your MySQL node might be okay after a machine crash -- or you might have to recover the database. At that time, SQLite, which is still a freaking amazing piece of software -- if you want to learn C, that is a very good piece of code to read -- but it’s incredibly robust as well. And so, that was a case where I’m going to use that for my queue because it has the ability to recover after a reboot. It manages that queue on disk, even on NFS, which is not recommended. But it’s so good that I’ll go ahead and use the APIs of that to build my queue over it. So, there is an example of a good architectural pattern, which is: why don’t we put our queue into something that’s persistent and safe? Now, let’s just say I was writing in C. That would start to get a little bit painful, because constructing SQL queries -- a.k.a. string manipulation -- and building all that tooling would have been a little bit harder. And I would have been tempted to just use something very simple like BDB, Berkeley DB or something like that, where I’m basically just passing data structures around. Maybe now, I would just use Redis or something like that. But that didn’t exist back then. So, that code, that’s another example. That piece of Ruby code has an exponential back off with reset for the way that it acquires a lock. So: try, try, try really hard and then back off. Try, try, try really hard, back off exponentially longer. Try, try, try really hard. Okay, at this point, we’ve become very patient and we’re going to get impatient again -- reset. Something like that, a simple exponential back off, when you’re writing in Java or C or something that’s robust, you just don’t. You just write something like, “I’ll just try every ten seconds,” because that loop is really easy, because I don’t want to write that much code. And I certainly don’t want to write a class encapsulating that and start using that exponential back off all over. But in a real system, of course, what happens is the system starts thrashing. So, the network goes down and then everybody -- let’s just say everybody goes to sleep and then everybody comes back up and tries ten seconds later. That’s what a naïve system does. But to make it durable, you want to interweave that. You want it to be responsive, like, “I don’t miss getting a lock by a millisecond.” That’s why you try a couple of times in rapid succession. But at this point, if I didn’t get it, acquiesce and sleep for a random interval -- it wasn’t just exponential back off, it was exponential back off plus randomness.
So that when the 18 nodes come back up -- and of course, their clocks are all synchronized, right? -- they would otherwise all be trying at roughly the same time. When the system’s having trouble, you don’t want that. You don’t want the system to start to thrash. So, that’s the kind of algorithm where you think about it, “Wouldn’t it be sweet if our system did that?” And you’re like, “Yes, and I don’t have a week to write that in C.”
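[Show notes: a tiny sketch of the lock-acquisition strategy Ara describes -- try hard in short bursts, back off exponentially with randomness so synchronized nodes don’t stampede, and reset once the delay gets long. `try_lock` is a stand-in for whatever actually attempts the lock; this is not RQ’s code.]

    # Stand-in for the real lock attempt (a file lock, a DB row, whatever).
    def try_lock
      rand < 0.05
    end

    def acquire_lock(max_delay: 60.0)
      delay = 0.1
      loop do
        3.times do                          # try a few times in rapid succession
          return true if try_lock
          sleep 0.05
        end
        sleep(delay + rand * delay)         # exponential back off *plus* randomness (jitter)
        delay *= 2
        delay = 0.1 if delay > max_delay    # we've become very patient; get impatient again
      end
    end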
AVDI:
And ultimately, what you want is a system that works. You don’t want to have to care about that.
ARA:
Exactly.
AVDI:
And the beauty of it is that you’ve just solved the dining philosophers problem. [Laughter]
AVDI:
But you didn’t want to have to. You know what I mean? You want fed philosophers. That’s all you care about. You didn’t want to have to care about how the philosophers got fed. You just wanted to make sure they all got fed.
ARA:
Yeah and…
AVDI:
In Java or C, you have to care.
ARA:
Yeah, exactly.
AVDI:
Is that true? If you expend the effort to do the architecture right, you don’t have to care. But I think you’re touching on something that I’ve experienced, which is that in Java or C, you will end up caring because the architecture bleeds through very, very easily. And with Ruby, you’re almost forced -- by the fact that “Ruby doesn’t scale” -- to make the architecture scale right.
ARA:
Right. And you know, another classic example of this, probably closer to the hearts of most of the people who are listening, is, say, caching. Caching is super hard to reason about if you don’t have a high level abstraction. Let’s just say you have revisioned cache keys. So, you put your git-rev in your cache keys in your app. And in a Rails app with Ruby, there are minimal interfaces to the cache. Let’s just say you do that. That gives you all these sweet properties, right? Like, if you deploy, you automatically invalidate the cache because your git-rev is in that key. And when you roll back, the cache will still be warm for the old version. These are simple ideas and they seem -- these are not new ideas for web developers. But it’s those kinds of things that get hard to reason about when you don’t have high levels of abstraction like cache write, cache read, and some global configuration of a cache key prefix: “Hey, why don’t we put the git-rev in there?” That’s what allows you to have a caching strategy that you can actually wrap your head around. It is that abstraction power. And guess what? Caching is still the way to make a website fast. It’s not actually about making your application answer requests real, real fast. It’s about minimizing the number of requests that go through the pipeline, right? Another great example: wouldn’t it be sweet if we had a tool chain that fingerprinted our assets so that it knew whether to add them to a new bundle, and every time we deploy, everything got bundled up in one file? And we do this. We do all our requires in application.js and stylesheets/application.css. So, we just do that. We’re like, “Of course you have one request for CSS and one for JavaScript.” Like, “Why wouldn’t you?” It’s easy with that stack. But that stack, if you think about that code -- it’s a dependency chain, it’s quite complicated. And oh, by the way, we don’t actually want that to happen in development mode. We want everything kersploded so that you can interact in the debugger. And oh, by the way, we want this to be transparent when we deploy, giving the person the ability to compile it on the server or pre-compile it locally without having the deployment fail. This is hard. You just don’t roll up your sleeves and write this in a week in Java or C. But in Ruby, you totally can. Like, whatever. I mean, it’s not that bad.
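[Show notes: the “put the git-rev in the cache key” idea, sketched in plain Ruby. The key layout and the example output are made up; in a Rails app you would wire the same prefix into your cache store’s namespace or key helper.]

    # Captured once at boot/deploy time; could also be read from a REVISION file
    # written by your deploy tooling.
    REVISION = `git rev-parse --short HEAD`.strip.freeze

    def cache_key(*parts)
      ([REVISION] + parts).join(':')
    end

    cache_key('users', 42, 'profile')   #=> something like "a1b2c3d:users:42:profile"

    # Deploying a new revision changes every key, so the old cache is effectively
    # invalidated; rolling back brings the old keys (and their still-warm entries) back.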
CHUCK:
I have this question that I want to ask. And I’m going to preface it by saying that I talk to people that basically come to things -- you used the word design and that’s what kind of triggered this. And the reason is that a lot of people say, “Well, we do Agile.” And so, they kind of shy away from doing any design before they start working. And it sounds like what you’re saying is you have to think about these problems to at least some degree, and I happen to agree with that, before you really start digging in. And so, I’m wondering, where is that balance? Where do you find yourself with the thinking about and kind of mapping out the problem before you get started versus kind of, I don’t want to use the word agile, but kind of dynamically just building it up on the fly and kind of exploring the problem that way?
ARA:
Right. So, that’s a very good question. We just wrote some code that we put on a machine, a big server -- Dell, whatever, I don’t even remember, some meaty thing -- that’s getting shipped to a remote location. They’re going to be testing it in a lab and they’re going to fire it up, configure it on their network, and the thing’s just supposed to work. So, I can’t deploy code to it again. So, I just wrote a script that actually synchronizes -- so, here’s an example, right? Well, backing up a step. I was an XP guy back in the day. I was writing test driven C code, running everything in the debugger, 15 years ago. So, I’m familiar with test driven development. But I’m quite disappointed in today’s crop of TDD adherents because they think it’s a panacea. They think that it actually eliminates the need to think -- that code coverage and CI and their tests eliminate their responsibility as developers.
DAVID:
But I have a green bar. [Laughter]
ARA:
Exactly. It still doesn’t -- just because the user put this story in and you implemented it precisely. I actually just blogged about this the other day. It’s like, say you’ve got to write a system that models triangles. And the Agile -- the current crop of Agile developers, which are not adhering to the Agile manifesto at all, that’s my belief. [Laughter]
DAVID:
They're Agile, but…
ARA:
Yeah, exactly. And they’re implementing the system to build the triangles, the two kinds the user asks for. And they’re not stopping to truly understand the problem and, say, derive the Pythagorean Theorem. They’re not actually stopping to do that, because if they had, other solutions may suggest themselves. So, they’re focused on the trees and not the forest. And I guess the reason I don’t like doing that is I actually don’t like writing code. I hate programming. I mean, I hate computers. I don’t use them for anything personal and I hate programming. It’s like I’m sitting pressing plastic buttons like a rat to get paid. I don’t like programming computers. I like solving problems, however. And the computer is a tool that I use to do that, and that’s fine. But because I don’t actually enjoy computer systems -- I don’t play video games, I don’t surf the net, I don’t hang out on Facebook, really. I don’t enjoy computers, fundamentally. I want to solve a problem once. I want to understand it, abstract it, solve it. And then, I’m done with that; the enjoyment is out of it. If I have to do it again, I don’t really enjoy it. And so, when I’m looking at a system, I’m not looking at: what do I need to do to get the test to pass, to commit it? I’m looking at: what is the problem here? What’s the abstraction? I want to solve it once and then move on with my life and never come back to this again -- which is impossible, but that’s the goal. And so, with this system that we’ve just deployed in this airport, I’m writing a synchronization script. It’s basically synchronizing files. It’s not super sophisticated, but some of the things that I decided architecturally were -- so, these files of course reside on the server or in the cloud and you can access them. There are APIs and all this. But I was like, “You know what? I don’t want to know that the server’s up. I don’t want to handle that.” So, what I did instead is I decided that I’d decouple the location of the files from the API. In other words, instead of the agent asking the API what files it needs -- which the server unambiguously knows; it’s the master, the nodes are stateless. That was a design decision: the nodes are stateless. It will ask, “Master, what should I do?” And it will just blindly do it. So, I wanted that property, stateless. But then, I didn’t want to introduce an HA requirement at the server level. I’m like, “I don’t really care if the server’s up.” The remote node can contain the served files and function just fine even if the server’s down. And so, I decoupled that by -- the server will actually create a manifest and put it on S3. So, every agent has its own bucket. The file is a YAML file; I could have chosen JSON, but because I’m looking at it as a human being, I prefer YAML in that situation. But anyway, the server actually creates the instruction set of what to do and puts it up on S3 which, granted, could be down, but that basically never happens. It’s like I’ve got nine nines with no infrastructure. So, the agent actually polls S3 for its instruction set and then brings those files down. Now, when it’s bringing them down, of course, those are network operations. And so, I have some things built in, because I’m not going to be able to see a bug and update this. The network -- it’s not if it’s going to fail, it’s when. Their local network is going to fail; not S3, that will basically never be down. So, what should I do then?
And so, an exponential retry with back off is what occurs, where it will try a certain number of times and then eventually it will log the error. And that comes back to another idea: how will I know what this agent is doing in the field? I might naively say, “Oh, I’ll have an API and it will report its status back.” But then, I’ll have to debug the API being down versus the agent being down. I have two points of failure instead of one. And so, what I did is I actually made this script, which is running in a cron, ship its logs back up to S3 under the same bucket with a dated naming strategy. It only ships logs if they are not empty; the logging is pretty quiet. So now, just literally in transit, I’ll be able to tell when that node comes up, and it will automatically ship its logs back to S3. So, that system is totally decoupled from the server and has an independent uptime profile. So, that was an example of just thinking through, like, “Yeah, the network’s going to be down.” And yes, I need for it to report back and I don’t know what’s going to happen. And so, for example, there’s an at_exit handler that says, “If I’m exiting and the error is not a SystemExit, be sure to catch that thing in the log.” Because if your script is logging and it goes boom, that exception doesn’t go into the log, right? [Laughs] That’s a very common mistake. “I’m logging everything, but it died.” And you’re like, “Well, you didn’t log that.” [Laughter]
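[Show notes: a stripped-down sketch of the stateless-agent idea -- ask a manifest on S3 what to do, then blindly do it. The bucket URL and the manifest schema are made up; the real script also does retries, the actual file syncing, and ships its logs back up.]

    require 'net/http'
    require 'yaml'

    # Hypothetical per-agent manifest location; the server writes it, the agent only reads it.
    MANIFEST_URL = URI('https://example-bucket.s3.amazonaws.com/agents/agent-42/manifest.yml')

    def fetch_manifest
      YAML.safe_load(Net::HTTP.get(MANIFEST_URL))
    end

    manifest = fetch_manifest
    (manifest['files'] || []).each do |entry|
      # each entry tells the agent where a file lives and where it should land locally
      puts "sync #{entry['url']} -> #{entry['path']}"
    end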
DAVID:
There is exactly one case when I will not hunt you down and stab you for rescuing Exception, and that is around the outside of main. Alright, it’s basically…
JAMES:
You don’t have to actually rescue it, though. Ara actually gave the way to do it there. It’s kind of hard to follow, but you set an at_exit handler in Ruby, and if Ruby is going down because of an exception, then the global $! variable is non-nil. It holds the exception that you’re dying with. So, you put in an at_exit handler and check that variable. We use it in tons of places. I learned that trick from Ara, of course. And we use it in tons of places, like when I did a lot of work on TextMate and you would do things like run some script by pressing Apple-R in TextMate; then we would figure out if you exited normally or if you died by error by injecting that at_exit handler. And that way, if you died by error, we could grab the stack trace and hyperlink it back to your code.
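[Show notes: the at_exit trick James describes, in a few lines. If the process is exiting because of an unhandled exception, `$!` holds that exception inside the at_exit block; the log file name is just an example.]

    LOGFILE = 'run.log'

    at_exit do
      error = $!                                     # the exception we're dying with, if any
      if error && !error.is_a?(SystemExit)
        File.open(LOGFILE, 'a') do |log|
          log.puts "fatal: #{error.class}: #{error.message}"
          log.puts(error.backtrace) if error.backtrace
        end
      end
    end

    raise 'boom'   # lands in run.log as well as on stderr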
AVDI:
Shameless self-promotion, that technique is covered in Exceptional Ruby.
ARA:
Oh, yeah.
CHUCK:
So now, David will hunt you down if you rescue exceptions anywhere.
AVDI:
Yes.
JAMES:
That’s right.
DAVID:
Don’t rescue exceptions, kids.
ARA:
You know what? I’ll probably upset some people here, but I’m a huge fan of rescuing Object. I do it all over the place. This code I’m looking at right now has it in four places. And so, that’s a very different philosophy. Now, having said that…
DAVID:
Rescuing Object?
JAMES:
It’s the same thing as rescuing Exception.
DAVID:
Okay. Alright.
ARA:
Right. Yeah, rescue Exception is ugly. I just like rescue Object for some reason; it’s prettier to read. And I’m a big fan of the shape of code -- I really feel like code has a shape to it, that it actually has an aesthetic value. That’s one of the reasons that I wrote Main. In my main scripts, I’m still a big fan of using BEGIN blocks at the end of the code to do the mundane setup. If I open up a command line script and I’m not immediately reading what it does, it’s just a categorical fail. I’m just going to refactor it right then. If there’s a bunch of option parsing shit at the beginning of your script, it’s like, “Oh, my God!” Like, “Come on.” I want to know what this does. I want the shape of it to indicate what it does. I want a high level description, then I want to get into the top level run loop, which is like: get index of files from server, or synchronize files locally -- big, long method names. I hate comments, I absolutely hate comments. I like the code to have a beautiful shape and for it to look like what it does.
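[Show notes: a toy illustration of the “shape” Ara describes -- the top of the script reads like a description of what it does, and the mundane setup lives in a BEGIN block at the bottom, which Ruby runs before the main body. The method bodies are stubs, not his actual script.]

    #! /usr/bin/env ruby
    # sync_files: pull the manifest, sync the files, ship the logs.

    fetch_manifest_from_server
    synchronize_files_locally
    ship_logs_to_s3

    BEGIN {
      require 'yaml'
      require 'fileutils'

      def fetch_manifest_from_server
        # ... stub ...
      end

      def synchronize_files_locally
        # ... stub ...
      end

      def ship_logs_to_s3
        # ... stub ...
      end
    }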
DAVID:
If it’s a 12-line method and you spend 10 lines doing housekeeping and the last two lines are the important stuff, get rid of the ten lines.
ARA:
Right, exactly. The main reason I factor things out is just to make it readable. Big methods are actually good, especially in a language like Ruby. It’s much, much, much faster to have huge methods. Well-factored Ruby code makes a deep stack, and a deep stack is slow, unfortunately.
Having said that, of course…
DAVID:
Can somebody hear sacred hamburger? [Laughter]
DAVID:
I just heard a sacred cow go on the barbeque grill.
ARA:
I’m not the only one, right? I mean, Aaron’s been talking about this. But I do like small methods. It is a reality, though, that small methods are -- they are slow. However, that’s Matz’ problem. That’s not my problem. I’m not using Ruby for doing high performance tasks. But a big method, it can be justified. I guess that’s all I’m really saying. So ideally, the compiler would be unrolling stuff, but whatever. But when it comes to rescuing Object: I was just reviewing somebody’s code the other day and they’re building up an API response. So, let’s just say the API response has three keys. And it’s like: step one, run this query, populate key A; step two, populate key B; step three, populate key C. And I’m like, no. First, initialize the API response to have the keys A, B, and C -- set them to nil or a blank array -- then populate, override those, so that on the client side, you don’t have to handle partial API responses. Now, you could just say the thing should 500. But in this case, it was okay for it not to have everything. So, it’s like: think about the exceptional cases first. And that’s why I end up with rescue Object a lot. I’m like, “Look, I really don’t care.” So for example, I’m submitting a background job to email somebody on sign up. And I actually have a rescue Object thing, and I’m like: at this point, I don’t care what went wrong, but I don’t want to not tell the user that I didn’t email him -- he just signed up. I don’t want the generic 500 page or whatever. I want to let him know, “Dude, I did not get your email out. I don’t know -- the network was down, the SMTP library blew up, the database was down when I tried to write the object. I don’t really know. But I really want to tell you that I didn’t do what you asked me to.” So, those are the cases where I’m rescuing Object, where it’s like, “Do I want to carry on if this failed?” And from an engineering perspective, you don’t. Normally, in most cases, you want an exception to propagate up the stack and your code to die so somebody knows about it. But once it comes time to the place where users are interacting with it? So, here’s another example. You have a command line script, you give it to normal users -- say scientists, non-developers -- and it prints a stack trace when it fails? You’re fired. Come on!
[Laughter]
ARA:
Tell them what went wrong. A stack trace? Come on, that’s just weak sauce. So, the cases where you should rescue exceptions are where it’s like: what does your user want to happen here? There are huge swaths of code where the user doesn’t give a crap what went wrong, but you still want to give them feedback. Or, it’s possible -- say, because it’s a daemon-y, long-running process -- that this is transient. We all know as developers, all the bugs are transient. You can never reproduce them. Do you want to get paged on Sunday? I don’t. So, it’s like: yes, this code should go boom, this should never happen. But if it does, is it reasonable for me to wait a while and reset some things and retry and log it? Big caveat emptor: log it loudly somewhere, like get good notifications. But is it reasonable for the code to keep trying? What would I do if I was debugging this? If you can answer that question, if you know what you would do -- well, that’s what computer programs are. I mean, that’s the whole point of writing computer programs: to do what you would do. But most people only think of that one level deep. In other words, what’s the program supposed to do for me? But then, what would I do if I had to log in to debug it on Sunday? If you know what that is and you don’t write that code, you’re not a software developer. You’re not developing the software to be soft, like for human beings. You’re just writing the first tier. In other words, you could program support staff too. [Laughter]
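[Show notes: the sign-up email example as a runnable sketch. The mailer is a stub that fails on purpose; the point is the broad `rescue Object` that logs loudly and still tells the user the truth, instead of showing a generic 500.]

    require 'logger'

    LOG = Logger.new($stderr)

    # Stand-in for a real mailer or background job; fails to simulate SMTP/DB trouble.
    def deliver_welcome_email(address)
      raise 'SMTP connection refused'
    end

    def sign_up(address)
      deliver_welcome_email(address)
      { ok: true, notice: 'Check your inbox for a confirmation email.' }
    rescue Object => e   # same net effect as rescuing Exception: catch everything
      LOG.error("welcome email failed for #{address}: #{e.class}: #{e.message}")
      { ok: true, notice: 'Your account was created, but we could not send your confirmation email.' }
    end

    p sign_up('someone@example.com')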
JAMES:
Nice!
CHUCK:
Been there, done that.
ARA:
Yeah. And that’s not always a legitimate strategy, of course. But I think it is more often than people think -- like, you know, they one, know the code might fail here, and two, don’t do anything about it.
AVDI:
I have a question about your testing/documentation. Your gems have kind of a unique style of testing and documentation. Can you talk about that a little bit?
ARA:
Yeah. I make the TDD people angry sometimes by ranting about test suites. But anybody that looks at my work can see I write a lot of unit tests. I’m a big fan of it, for sure. My style of testing -- I guess I’m a big fan of unit testing and I’m a big fan of functional testing. So for example, in libraries where there’s a lot of complexity, I will have a lot of unit tests. And the tool chain that I’m using -- very often I use my own command line script called Rego. There’s a very similar one called AK47, which is basically a little script that watches for changes in a directory and runs an arbitrary command, right? These are not fancy. But my style is I’ll have my tests running repeatedly in iTerm and be writing my code inside a screen session in Terminal. That’s just so I can click back and forth between them. But I have them both up. And so, anytime the code changes, the tests are running. I’m a big fan of always having that -- I insist that developers don’t start script/server or start running their tests without having the logs visually in front of them. I really like people to be seeing the logs as they’re going by, with database statements, just so that they notice things. So, I have Rego running my unit tests, just so they’re running continuously. But then, the actual testing methodology that I use: I have a little testing gem that I usually just drop into code, and it’s a few small hacks on test/unit. It just gives you a slightly better declarative DSL and also numbers the test names as you create them. So, I’m not a big fan of RSpec. It’s quite good now. But over the years, I definitely have debugged test suites that have bugs in them because of hacks on core. So, I do believe that a testing suite should not hack core. It should not add any methods to any Ruby object. And so, test/unit is fine for that, the API is great; that’s why I have a tiny shim to just add, like, two methods to it -- ‘test do’ and ‘context do’ -- they’re just declaring classes and methods. I’m also a big fan of not using a DSL for assertions. So, I basically have just one teeny helper. I just wrap the test/unit assert so that it takes a block, so that all tests both assert that nothing’s raised (because they take a block) and assert that the value is true. This is old C style, right? That’s when assertions were macros. But the reason I like it is when it comes to porting a test suite, which you occasionally do, like when you’re merging something into a bigger project, it’s literally a gsub. Like, blammo! This is our assertion method. I don’t like unwinding -- I don’t like having to think about what the assertion means in a test suite, and I find should, equal, match, whatever: “What does that really do? Does that call === or == on that method?” Oh, God! I don’t want to think about that in my tests. I want to think about the domain, not the test suite. So, I want zero ceremony. Brinckerhoff and I talked about this a lot. And another legitimate reason is for tool chains that are trying to interact with your test suite. So for example, he was working on some code at the time that was trying to parallelize your testing. So imagine that: you want to require a library that will suddenly make your test suite run in parallel. And this is not news, right? A good API is a minimal one. I think everybody would agree. Ruby is super expressive; it’s got match operators, it’s got include. We do not lack expressiveness in Ruby.
And so, when you have a minimal test interface, it just makes it easy to do something like, “Yeah, I just require this one thing and blam! All my tests run in parallel, or it shuffles them, or it runs them in a random order, or ships them off to various nodes to run in parallel.” Who cares? But that kind of tooling requires a minimal interface. And so, I do tend to lean towards a very simple testing framework.
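[Show notes: a tiny shim in the spirit of what Ara describes -- numbered, declarative test names on top of test/unit, plus one block-taking assertion helper. The method names here are illustrative; they are not his testing gem’s actual API.]

    require 'test/unit'   # the test-unit gem

    class MapishTest < Test::Unit::TestCase
      @count = 0

      # Declarative, numbered test definitions: testing 'does x' do ... end
      def self.testing(description, &block)
        @count += 1
        define_method('test_%03d_%s' % [@count, description.gsub(/\W+/, '_')], &block)
      end

      # One assertion helper: the block must not raise, and must return something truthy.
      def assert!(label = 'assertion failed', &block)
        result = nil
        assert_nothing_raised { result = block.call }
        assert(result, label)
      end

      testing 'string keys come back out' do
        h = { 'a' => 1 }
        assert!('lookup by string') { h['a'] == 1 }
      end

      testing 'missing keys are nil, not an error' do
        h = {}
        assert! { h['nope'].nil? }
      end
    end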
AVDI:
I’m thinking particularly of the A, B, C, D directories that are found in a lot of your gems and the [crosstalk] that get generated out of that.
ARA:
Yeah. I wrote a testing gem a while ago. I’m a big fan of readme-driven development. If a text file doesn’t cut it, there’s probably an architectural problem with your code. I’m not really generating docs anymore. I’m not really commenting code very heavily anymore. It’s like, a README should suffice -- README, test suite, code -- that should be more than enough. But I’m a big fan of example code. In fact, you know, I was sort of famous for posting a.rb examples. But I actually stole that from Guy Decoux. He was the one that introduced me to that. He was this old French guy that gave a lot of examples, and his English was really poor. So, he’d always give his examples in running code. And I started doing that on newsgroups because I found it was amazing how many times I thought I knew an answer, like I could explain it. But then you try to write the code and you’re like, “Oh, actually not. There’s an edge case.” And we’re computer programmers, and English is not a context free grammar and programming languages are. And so, electronic communication in text is inherently difficult. So, if you can explain something in a minimal piece of code, that’s a better answer than documentation, English, anything in English. And so, I like checking in examples of common things for the people who are going to be learning how to use the library. But honestly, I usually start there. In other words, how do I want the API to work? I want it to be very obvious. So, I literally start with, “What do I want the example usage to look like? Now, how will I make that happen in the cleanest way?” So, interface versus implementation. And an API is a user interface; it’s the user interface that developers have.
AVDI:
And then, you basically roll those into a read me, right?
ARA:
Yeah, exactly. And I’ve tried various strategies -- a bunch of my old gems basically automatically run those. And so, it’s kind of like white box testing. So, I’ll run the code, which will run a little white box test that runs the samples to produce the README showing the output. And I have messed with actually having some assertions around those as well. It’s a little tricky when it’s just arbitrary string output. But just to generate it, just so that I have a little bit of a sanity check, like: does that make sense? It is documentation -- it’s like running RDoc on your code, in that it’s an automatic way of generating some documentation. It’s just, instead of extracting it from comments, it’s actually running programs to generate the documentation. So, yeah.
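[Show notes: a bare-bones version of the “run the samples to build the README” idea. The paths, file names, and output format are invented; the real gems do this with a bit more ceremony and some sanity assertions.]

    #! /usr/bin/env ruby
    # Build README.txt by running every samples/*.rb and capturing its output.

    File.open('README.txt', 'w') do |readme|
      readme.puts 'SAMPLES', '=' * 7, ''

      Dir.glob('samples/*.rb').sort.each do |sample|
        source = File.read(sample)
        output = `ruby #{sample} 2>&1`                # run the sample, keep stderr too
        abort "#{sample} failed" unless $?.success?   # crude sanity check

        readme.puts "<========< #{sample} >========>", ''
        readme.puts source.gsub(/^/, '  '), ''
        readme.puts '~ output', ''
        readme.puts output.gsub(/^/, '  '), ''
      end
    end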
CHUCK:
Alright. Well, we’re getting pretty close to our time. Are there any other things that we would be remiss in not talking about before we wrap up?
AVDI:
I have so many questions left. [Laughter]
DAVID:
Same here. [Crosstalk]
JAMES:
I could sit here and listen to him talk the whole day. I’ve been sitting over here the whole time, nodding my head. Tell me some more bedtime stories, Uncle Ara. [Laughter]
DAVID:
I just have one question left which is, can I hang out with you? [Laughter]
ARA:
Well, the whole shop’s full of you guys. So, you just have to make it down to Boulder sometime. Dojo4 is open every Wednesday morning for Rubyists who are in town and anybody that’s in Boulder on that day is welcome to stop by.
CHUCK:
Alright. Well, it’s been awesome. I mean, you’ve really answered some things for me that I’ve been thinking about for a while. Really appreciate you coming on the show.
ARA:
Yeah. Thanks a ton, guys.
CHUCK:
Alright. Let’s get to the picks. Maybe, we’ll have to do this again. And then, Avdi and James can ask the rest of their questions. James, why don’t you start us off with picks this week?
JAMES:
Okay. In the spirit of having Ara on the show, there is this thread that sprung up on the Celluloid repository on GitHub, around one of the commits about concurrency. And this thread is just pure gold. I mean, if you’re at all interested in concurrency, it’s all about people arguing, in a very polite, spirited, programmer way, over “Is this thread safe?” “Is this not thread safe?” “What’s the right way to implement this safely?” It features Tony Arcieri, who we had on the show recently, and Ara T. Howard is also one of the big contributors in this thread. They go on down to the point where Ara’s linking to code he wrote that’s like pure Ruby concurrent hashes, which is just epic. You can’t read this thread and not learn something about how computers work, especially in a multiprocessing environment.
AVDI:
Challenge accepted. [Laughter]
JAMES:
Yeah. It’s really good stuff. Go check it out. And then, just for a fun pick, I think I said I was going to have a good game pick like four episodes ago and then I never got around to it. But I do have it.
I’ve been playing this game called Sentinels of the Multiverse which is a total blast. It’s a card game where you basically pick a deck of cards that represents a superhero. So, everybody playing with you picks a different superhero. And then, you pick a deck of cards that represents a villain and then you pick a deck of cards that represents a location and then those cards all get played. And so, that changes basically the game that you’re playing and everybody works together to defeat that villain in that location. It’s just a blast. So, great cooperative card game, if you enjoy that kind of thing. So, those are my picks.
CHUCK:
That looks like fun. Avdi, what are your picks?
AVDI:
Well, I think there’s really only one programming related pick I could make this week, and that is to go look up Ara’s repos, look up his projects, and go down the list until you find something that makes you go, “Oh, my God! Why haven’t I been using this for years?” Because I guarantee, you will. Can people find all of your stuff on GitHub now? Or is there still some stuff that’s just in the codeforpeople directories?
ARA:
There’s definitely a few gems that are still just on RubyForge or RubyGems.org. Their back store is there, of course. But there are some gems that I have not imported into GitHub. Most of the stuff, obviously most of the stuff that I’m actively maintaining, is there. There are some gems that I never imported just because they’ve been stable and I haven’t changed them for years and years. I’ve got, I don’t know, more than 100 of them, I would say, imported on GitHub. So yeah, GitHub.com/ahoward.
AVDI:
Cool. For a less programmy pick, Tom Bihn bags. I’ve had a Tom Bihn computer bag for many years now. And the thing looks like the day I bought it. They’re really, really high quality stuff. I think, it’s Tom Bihn with a B-I-H-N. It might be Bine, I don’t know. Anyway, TomBihn.com. And I’ve been incredibly impressed with their quality. I just got a new bag from them, really actually a new insert. They’ve got a cool system where the laptop bags are actually inserts for the bigger bags. And so, when I switched laptops, I just got a new insert and they have these really sturdy protective inserts. And I really like them.
CHUCK:
Awesome. David, what are your picks?
DAVID:
So, my first one is to do what Ara said at the beginning of the episode, and that’s go to Google and type in ‘Katrina DMSP Ara T. Howard’. And the first links you’ll get back are mailing list posts from Ara which include the code that he wrote to process the images. They’re small enough to fit in an email post, which is freaking awesome. My second pick is that that will lead you to CodeForPeople.com/Katrina, which is where you can actually see the output of those images in a little Java app. My other pick for today is -- usually, we pick technical things, but I’m actually going to pick Ashe Dryden. I know objectification of women is a bad thing, but she is actually my pick today. [Laughter]
DAVID:
She is a fantastic, fantastic human being and you need to be listening to her. You can read her blog
at AsheDryden.com and you can follow her on Twitter as @ashedryden, there’s an E at the end of Ash. The reason I pick her is because I live in a red state where we have two political parties. We have conservatives and really crazy conservatives. And she’s very much on the left side of the political spectrum. And every time I have talked to her about anything, her approach has not been, “Roar! Blue versus red.” But rather, “Well, tell me what you’re seeing so that I can see what you’re seeing. Well, let’s talk about that.” And every single thing I have ever talked with her about, she has been very friendly, very warm, very helpful. And her approach, for example, her approach to feminism is to say, “Guys, if you want help fixing any parts of anything in your sexism thinking that you have identified as a problem, I would love to help.” And so, she’s not confrontational. She’s not angry and I’m not trying to characterize feminists that way. I don’t want to cast aspersions there. What I want to do is point out that Ashe is absolutely delightful to follow and to read. She will make you think and she will make you a better human being. So, my last pick is Ashe Dryden. Go read her blog or follow her on Twitter.
JAMES:
I want to just like…
AVDI:
And check out her appearance on the Dev/Hell podcast.
DAVID:
Yes.
JAMES:
Yeah. Just the other day, I was asking about Twitter clients, my Twitter client basically just up and died. And I was asking questions and she gave me a recommendation. And I was like, “You know, I tried that a long time ago and I had a lot of trouble driving it from the keyboard.” “Can you drive it from the keyboard now?” And she came back with a lot of detailed information, “Yeah, it looks like they’ve fixed all that,” and stuff. And I was like, “Oh, thanks! I’m trying it out,” or whatever. And then, she’s all, “And I sent an Email to them telling them they should put that accessibility information on their website.” She’s just an incredibly thoughtful person.
CHUCK:
Nice. Alright, Katrina, what are your picks?
KATRINA:
I have two picks today. My first -- so, Avdi picked all of Ara’s gems. I’m going to pick Main, just because I like writing command line programs and Main is awesome. The other pick today is more light-hearted. It’s CGP Grey, a YouTube channel with little four-minute, sort-of-factual videos, and they’re a lot of fun. It’s everything from what is a leap year, to is Pluto a planet, to can Texas secede from the union, historical misconceptions, Holland and the Netherlands. It goes on and on and it’s great fun. If you like Vi Hart, you may enjoy Grey.
CHUCK:
Nice!
JAMES:
There goes my afternoon. [Laughter]
CHUCK:
Yeah. You’re going to be playing with Main for hours, right? Alright. So, my first pick is something that I got for my wife for her birthday, which is on Saturday. So by the time you get this, it will have been last Saturday. Anyway, it is a Designer Habitat floating adjustable shelf wall mount bracket. I’m reading this off of Amazon. But basically, it’s a wall mount for DVD players. You can also put like an Xbox or a PlayStation in it. It’s really, really awesome looking. I’m getting this because we have a TV in the bathroom so my wife can soak in the tub and watch the TV. And it looks like, once you have the DVD or Blu-ray player in it, you can fold it up against the wall. And then, you can fold it back down to put another disc in and then push it back up so that it’s out of the way. So, I’m super excited to put that in, and that way, she can watch her movies while she’s soaking in the tub. The other pick that I have, this is something that we do every year. I don’t know if I’m going to pick it every year, but we’re going to the Parade of Homes in St. George, Utah. St. George is one of those places where it’s warmer than the Salt Lake and Utah County area where Dave and I live. And so, a lot of people kind of retire down there, and they retire down there with lots of money. And so, the Parade of Homes is a lot of fun to go through because a good portion of the homes are million dollar plus homes and you get a lot of great ideas for things you can do in your house. We always go with my father-in-law, who is a general contractor. So, he’s always taking pictures and stuff and getting excited over some things and being disgusted over shortcuts that they took that nobody else sees. But anyway, it’s a lot of fun. I know that they do them in other areas as well. So, if you are in an area that does a Parade of Homes, go check it out.
Ara, what are your picks?
ARA:
I have two serious ones and one small, fun one. So, both my serious picks surround ideas of subjectivity and objectivity. I think developers consider themselves to be extremely rational and objective human beings. But in fact, almost everything we do is based on subjectivity and therefore, it’s faith-based. And because programming is inherently a solitary, subjective, ultimately creative pursuit, I think it’s important that developers work to understand the basic mechanics and limitations of their own thought processes and world views, which I do believe most developers think are quite objective. So, the first one will just be a link to the Wikipedia page about Gödel’s incompleteness theorems, which -- for those of you computer scientists that don’t know this, he’s the Father of Computer Science -- his work has deep, deep implications about the limits of rationalism and mathematics: specifically, the proof that pure rationalism, pure mathematics, cannot prove everything that is true and cannot even prove its own consistency. So, that’s a very interesting body of work to start reading about and to consider what the implications are on the limits of objective thought. The other one, sort of on the other end of the spectrum, is actually a very simple, almost like a book for idiots on Buddhism, by the Dalai Lama. And the reason that I’m linking to it is, for me -- and I’m a very objective person -- it has given the best objective description of subjective reality that I’ve found. It’s just a very clear, easy to understand, for an objective person, description of subjective reality, which is something that we live in all day being developers. And I don’t think we think about it at all. And the last one is my favorite. I said I don’t browse the Internet much, and I don’t. But I’ve been hanging out on Artsy.net quite a bit. And it’s just a link to a picture that I’ve had open in my browser for about two weeks. I’m not sure why I’m fascinated by it, but I am.
JAMES:
What was the name of the book, Ara?
ARA:
The name of the book is ‘How to Practice’ and it’ll just be a link to the book on Amazon.
CHUCK:
Awesome. Well, thanks again for coming, Ara. It has been awesome. This is another one of those episodes. We’ve had a ton of them this year, the episodes where I’m just like, I got to go back and listen to this one like two or three times. I really appreciate you coming. It’s really been terrific.
ARA:
Thanks. I really enjoyed it. And I’m glad you guys are doing this for the community.
JAMES:
Thanks, Ara. It was awesome to have you on.
AVDI:
Yeah, thanks a lot.