010 iPhreaks Show – Audio and Video in Apps
Show Notes
Panel
Ben Scheirman (twitter github blog NSSreencast) Rod Schmidt (twitter github infiniteNIL) Pete Hodgson (twitter github blog) Charles Max Wood (twitter github Teach Me To Code Rails Ramp Up)
Discussion
01:22 - Launching a UIWebView and pointing it to a remote URL
Autoplay Streaming over 3G or LTE
03:01 - HTTP Live Streaming
AVPlayer MPMoviePlayerController MPMoviePlayerViewController Microsoft Silverlight AV Foundation
11:24 - AVPlayer
Asynchronous Key Loading Protocol AVURLAsset Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS by Chris Adamson Key-Value Observing (KVO) Deli Radio AVAudioPlayer
19:42 - Use Cases
System Sound Audio Categories Playback Control AVQueuePlayer
32:21 - Core Audio
Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS by Chris Adamson Adding effects to audio and video AV Audio Mix Echo
38:51 - Interruption
42:04 - Network Connections
Network Link Conditioner in Lion - Matt Gemmell
44:07 - .MP3, .CAF, .AIFF, .AAC
45:32 - Transcoding
Zencoder M3U
Picks
Audacity (Rod) Customers (Rod) The Little Redis Book by Karl Seguin (Ben) MMDrawerController (Ben) MacBuildServer (Ben) OpenEmu (Ben) Reveal App (Pete) Snap CI (Pete) Buildozer (Pete) ThinkGeek (Pete) Commit (Chuck) Candy Crush Saga (Chuck) Mini Golf MatchUp (Chuck) Portal (Chuck)
Next Week
Web Apps & HTML5 vs Native Apps
Transcript
ROD:
I'd get my Dad a Darth Vader helmet...because he's my father.
BEN:
Yeah, I got it.
[Laughter]
[This show is sponsored by The Pragmatic Studio. The Pragmatic Studio has been teaching iOS development since November of 2008. They have a 4-day hands-on course where you'll learn all the tools, APIs, and techniques to build iOS Apps with confidence and understand how all the pieces work together. They have two courses coming up: the first one is in July, from the 22nd - 25th, in Western Virginia, and you can get early registration up through June 21st; you can also sign up for their August course, and that's August 26th - 29th in Denver, Colorado, and you can get early registration through July 26th. If you want a private course for teams of 5 developers or more, you can also sign up on their website at pragmaticstudio.com.]
CHUCK:
Hey everybody and welcome to Episode 10 of iPhreaks! That's right, we're on the double digits now! This week on our panel, we have Ben Scheirman.
BEN:
Hello from NSScreencast.com!
CHUCK:
Rod Schmidt.
ROD:
Hello from Salt Lake!
CHUCK:
Pete Hodgson.
PETE:
Hello from thepete.net! [Ben laughs]
CHUCK:
And I'm Charles Max Wood from DevChat.tv! This week we are going to be talking about "Audio and Video" in your apps.
BEN:
So this is where you just launch a UIWebView and point it to remote URL and then you're done?
PETE:
I did that once.
CHUCK:
All the games that I play, I have to turn the sound off on them.
PETE:
I actually did do that once, Ben.
BEN:
Yes, it's the quick and easy way to do it.
PETE:
Yup, it was surprisingly good. I discovered -- and we're going to jump straight into an arcane pit of noise here -- that it didn't let you do "Autoplay" on video; Apple doesn't want you to do that. Can you still not do that if you're using native video?
BEN:
You can do whatever you want with the native stuff.
PETE:
Okay. So for the web one, you can't. But this --
BEN:
I think it's just kind of the Safari limitation...
PETE:
Yeah [chuckles].
CHUCK:
Every browser should do that. That drives me nuts, too.
PETE:
I think they say it's a battery issue more than anything else, like they don't want you firing up the radio to download like 50 megs; it's sort of a battery conservation thing.
BEN:
Yeah, they have gotten a little bit more strict on the rules for that, and I don't remember the exact numbers off the top of my head. But if you're going to do streaming audio or video over 3G, then either you have to use HTTP Live Streaming, which is their recommendation, or it has to fall under a certain data-per-minute ratio. I think it's like 5 megabytes over 5 minutes. So for audio, you're pretty okay usually. But if you're going to stream video, it's a lot more difficult, so a lot of apps will get rejected if you try and stream a large file over 3G or LTE. So you have to use a reachability check to see if you're on WiFi before continuing. But if you're using HTTP Live Streaming, then that's a non-issue because you're just streaming the content you're actually watching; you don't have to download the whole video.
PETE:
HTTP Live Streaming! What's that? [Laughter]
BEN:
That's an open standard developed by some folks at Apple. HTTP Live Streaming is basically an end-point where -- I've never implemented this so I only know what it is at a high level, so I may get some of the details wrong -- but basically, you hit an end-point that gives you an HLS Index; and the Index tells you some metadata about the feed and the format it's in, things like that, and then some chunks of video URLs that you can go fetch. The idea is that, on a Live Feed, that Index gets updated periodically. So then, all of the clients that access it just say, "Okay, where are the video files and what timecodes are they? Let me go and request them..." and then it will refresh that same Index Feed, looking for the new video. So if you had one of these Live Streaming hardware devices, it would be putting chunks of video in a directory somewhere and your Index would pick up those new files and serve them up. It's not necessarily live, it's probably 10 seconds delayed or more, but it's pretty close. However, that's also useful for breaking up large static video files that you have. You can run those through some command-line tools that Apple provides on their website to chop up -- they call it "segmenting and multiplexing" -- the video files into HLS-compatible files, and then you just point to that URL. So if you have an MPMoviePlayerController or an AVAudioPlayer -- sorry, not AVAudioPlayer -- AVPlayer or MPMoviePlayerController, you can point it to a remote URL as if it were just a file and it would work.
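For reference, here is a minimal Objective-C sketch of handing an HLS index to the stock player, as Ben describes; the URL is a placeholder, not from the show.

```objc
#import <MediaPlayer/MediaPlayer.h>

- (void)playStream {
    // Placeholder HLS playlist URL; any .m3u8 index works the same way.
    NSURL *indexURL = [NSURL URLWithString:@"https://example.com/stream/index.m3u8"];
    MPMoviePlayerViewController *playerVC =
        [[MPMoviePlayerViewController alloc] initWithContentURL:indexURL];
    // Present it modally; the player treats the playlist like any other movie URL.
    [self presentMoviePlayerViewControllerAnimated:playerVC];
}
```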
PETE:
What's really cool about this, if I understand it correctly, is you don't need any kind of special server technology, like you don't need a media server or one of those like old Real --
BEN:
Right. RTMP, or whatever.
PETE:
Yeah. You can just throw these things up on S3 or put them on a CDN and point it at the static files or whatever, and then it will just all work; you don't have to run any server-side infrastructure at all, which is really cool actually.
BEN:
I think that's a really cool aspect of it as well; you just need a file server. It's just running on HTTP, which is where the name comes from [laughs]. Another interesting piece of it is it will do adaptive -- I don't know what the term is exactly...
PETE:
Adaptive bitrate, right?
BEN:
Yeah, so that you'll have multiple qualities of the exact same video feed, and you put a low-quality version first. Basically, whatever order they're displayed in the Index is what order the clients will attempt to play them in. So, you can put a lower quality version first at a lower bitrate so that it downloads faster; you can start getting video sooner. And then once it gets through that segment, it will examine its performance, the bandwidth you received playing that little segment, which might be only 2 or 3 seconds long. And then, it may switch to the higher bitrate version of the video for the next segment. Previous incarnations of this on iOS would have a noticeable, maybe 5-millisecond, hiccup between the two. But now apparently, it's completely smooth when it transitions from low-quality to high-quality. This was first, or at least, I first saw this with Silverlight actually, where they had adaptive streaming and they used that for the Olympics. But now, this is part of HTTP Live Streaming, so basically when you create your segment files, you can just put all your qualities in there and then, again, you just hand it to one of the player components on iOS, and it will just figure it out for you, which is pretty nice.
PETE:
This is for HTTP Live Streaming for audio as well? Or, is it just video?
BEN:
Yup, for audio as well.
PETE:
Okay.
BEN:
And you can hand this stream to any player component. Again, they're all built on top of AV Foundation, and AV Foundation - AVPlayer is sort of the core element there, and that deals with any kind of AVAsset, which can be a local or remote URL and that can contain Video or Audio Assets. If you have an Audio Asset, it's just probably left and right channel audio in a specific audio codec or encoding. And then for the video files, you have audio but you also have a video stream so you'll have to hand that off to an AVPlayerLayer, and then take your AVPlayerLayer and you can add that to any CALayer in your UI. There's kind of a lot of moving pieces in there, but that would allow you to build something where you have -- say you wanted to build like iMovie -- where you have like a scrubber and you can display the video content in a little window inside of your UI. And then you scrub back and forth through the timeline and you can grab a thumbnail at a specific timecode using AVPlayer. But if all you wanted to do is play a movie back, then you'd probably look at MPMoviePlayerController and MPMoviePlayerViewController.
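A minimal Objective-C sketch of the AVPlayer and AVPlayerLayer wiring Ben mentions; the URL and the containerView parameter are illustrative placeholders.

```objc
#import <AVFoundation/AVFoundation.h>

- (void)embedPlayerInView:(UIView *)containerView {
    // Placeholder asset URL; this can be a local file, remote file, or HLS index.
    NSURL *assetURL = [NSURL URLWithString:@"https://example.com/video.m3u8"];
    AVPlayer *player = [AVPlayer playerWithURL:assetURL];

    // AVPlayerLayer renders the video track; add it to any CALayer in your UI.
    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
    playerLayer.frame = containerView.bounds;
    [containerView.layer addSublayer:playerLayer];

    [player play];
}
```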
PETE:
There's like high-level --
BEN:
Yeah, you don't get much control over playback other than the rate, so you can go like 2x playback if you wanted. But you don't get any control over the scrubber style; you get the volume slider and what AirPlay looks like. There are also these Fast Forward and Previous buttons, which don't actually do anything. I can't remember the exact reason for those buttons, but I think if you hand it a playlist, it will skip ahead through multiple tracks. But if you just hand it one video, those things don't do anything. And you can't really customize the chrome at all, so the player, controls, and stuff, those are out of your control if you're using MPMoviePlayerViewController. So if you needed something custom, then you would drop down to AVPlayer and introduce a lot more concepts into your code, but ultimately have more control.
PETE:
Can you kind of usually start off with the high-level kind of stock component and then customize it down the road? Or, is it going to be like rewriting a bunch of stuff if you want to take --
BEN:
There's so much you get for free out of the higher level components. MPMoviePlayerViewController is the most recent, I guess; it came out in 3.2, I think. It's not necessarily recent, but it's more recent than MPMoviePlayerController, which only provides the player portion of it and you have to add that to your own ViewController. But if you just want to present something like a modal view controller with a video URL, MPMoviePlayerViewController is what you'd want to use. One of the things that it does is, the scrubber, when you hold on the playhead and drag your finger across it, it scrubs at like a 1:1 ratio with the point of your finger on the scrubber. If you tap and hold and drag your finger farther away from the scrubber head, you'll see that it goes from high-speed scrubbing to half-speed scrubbing, and then fine-tune scrubbing, the farther you go.
PETE:
I remember being so excited the first time I figured out you could do that.
BEN:
Yeah! It's kind of hard to explain on a podcast, but try it in the iTunes app or any app that plays video and you'll see this; if you really want to fine-tune exactly where you're seeking to in a file, this is a nice thing you can use. But you don't get access to any of those components if you're not using MPMoviePlayerViewController. So, dropping down to AVPlayer means you have to reinvent all that stuff; you have to create your own slider. And if that feature was important to you, you'd have to write that yourself.
PETE:
That kind of sucks. Well, I guess that doesn't suck, but it's kind of like, "Oh!"
BEN:
If they had componentized it just a slight bit more, it would be really handy, so that you could get that functionality without having to drop down and basically lose all the UI stuff you got before. AVPlayer is not an easy framework to understand because you're working at a different level of abstraction. So whereas before I've talked about presenting a modal view controller and handing it a movie URL, usually that's the level that we want to work at. I don't care about the underpinnings; I just want to play some movie. AVPlayer, however, is modeled more on, if you can imagine, a mixing board in a TV studio, where you'd have audio and video inputs and you can mix them together. There are a lot more pieces involved and you can control these things independently. So at the lowest level, you've got an AVAsset, which can be an AVURLAsset if it's a file URL or a local URL -- sorry, remote URL. But if there were some other way that you could get content other than playing off disc or off the network, then you could hand in a byte array, for instance. So this AVAsset -- typically you'll use AVURLAsset -- you hand in an asset, but this could be like a 20 megabyte audio file or video file. So in order for you to determine how long the asset is, for instance, you have to issue an asynchronous call. So basically, the entire API is asynchronous; you can't really instantiate one of these things and expect it to give you any useful information because, otherwise, it might have to download the whole file.
PETE:
Right.
BEN:
One of the things you would do is take a look at the Asynchronous Key-Value Loading Protocol; this is basically the core of how you will interact with AVAsset. Once you have an AVAsset with the URL, you will say "loadValuesAsynchronouslyForKeys" and pass it an array of keys that you're interested in - one of those is the Tracks key. The Tracks key, like for a video file, you have a video track and an audio track. But for an audio file, you should just have one. And then, that will give you a callback when the key status changes. So, it will change from unknown to loading to loaded or failed, and you have to handle all those cases. So like, "What happens if I try to load the Tracks key and it fails?" It kind of gets to callback hell, and it's a tough mental model to wrap your head around while trying to build a higher level component. Basically, once it's changed, you've got to request the status of that key again. And once it has been loaded, then you can ask it, "How long is my audio file?"
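A small Objective-C sketch of the asynchronous key loading Ben is describing; the method and its remoteURL parameter are illustrative names.

```objc
#import <AVFoundation/AVFoundation.h>

- (void)loadAssetAtURL:(NSURL *)remoteURL {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:remoteURL options:nil];
    [asset loadValuesAsynchronouslyForKeys:@[@"tracks", @"duration"] completionHandler:^{
        NSError *error = nil;
        AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
        switch (status) {
            case AVKeyValueStatusLoaded:
                // Safe to read asset.tracks and asset.duration now.
                break;
            case AVKeyValueStatusFailed:
                // Handle the failure: bad URL, unsupported format, network error, etc.
                break;
            default:
                break;
        }
    }];
}
```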
PETE:
And you have to do all of that housekeeping yourself?
BEN:
Yes.
PETE:
Fun times.
BEN:
Yeah. So this is one of the reasons why I found -- I mentioned in previous episode -- that WWDC was so helpful for the labs because then I got to sit down and show them the code and be like, "Is this how you're supposed to do it?" [Laughter]
PETE:
Is this supposed to help this much when I do it this way?
BEN:
That's only if you want ultimate control over exactly how things are loaded. If you want to take one step back and say, "I'd rather just hand an asset directly to a player," there's a different initializer you can use with an AVPlayer, where you give it a URL and it will internally create the AVURLAsset and do the handling for you. But you might want to have a hook for when that fails, so it's just up to you whether or not you need that control. And then, your AVPlayer is not yet ready to play because it still has to figure out what the format of this audio file or video file is, like what encoding is it in? What bitrate? If you look at the structure of an audio file, it's divided into packets; and those packets have frames. A frame is basically one slice of data in the audio file. Now I'm reaching the point where my understanding of this stuff is thin as dust, but I'll try to regurgitate some of the stuff I've learned from the Core Audio book, which is by Chris Adamson. I think it's called "Learning Core Audio"; I'll post the link in the show notes. Anyway, so the player has to figure out how many bytes per packet, how many packets per frame, things like that. What is the bitrate, is it big endian or little endian; this is all low-level stuff that it's going to do for you. Basically, you can't do anything; you can't even play the audio until it has figured all these things out. So what they've talked about in the WWDC videos is, basically, anything that you do with this framework takes time, so everything is asynchronous. So once I hand a URLAsset to a player, either by handing it directly or just giving it a URL, then I can tell it to prepare to play, and it will go figure out what it needs to do asynchronously. And then you need to observe the player using KVO - Key-Value Observing - in order to observe the player item status and the player status.
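A short Objective-C sketch of that last step, assuming an already-loaded asset; the property names are illustrative, not from the show.

```objc
- (void)setUpPlayerWithAsset:(AVAsset *)asset {
    self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
    self.player = [AVPlayer playerWithPlayerItem:self.playerItem];

    // There is no delegate; readiness is reported only through KVO on "status".
    [self.playerItem addObserver:self forKeyPath:@"status"
                         options:NSKeyValueObservingOptionNew context:NULL];
    [self.player addObserver:self forKeyPath:@"status"
                     options:NSKeyValueObservingOptionNew context:NULL];
}
```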
CHUCK:
When you hand it off to a player, does it give you all of the interface that we're talking about, the scrubbing and volume and everything? Or, when you hand it off to a player, will it just play it and not give the --
BEN:
Yeah, you have to invent all the stuff yourself. You can tell a player to seek to a specific point and you can tell it your tolerance for how long that's going to take. Let me circle back, I'll get to that in just one second.
CHUCK:
Okay.
BEN:
The one thing you would expect is some sort of protocol that you would implement, like a delegate protocol like a TableView has. Like, "Hey, I'm ready to play," or "Hey, this thing failed," but you get none of that. So all of the information you want to receive from an AVPlayer is done through Key-Value Observing, which is a really powerful concept in Objective-C. But unfortunately, the implementation of this is always disgusting because you get one callback for some key of some object that changed and you have to disambiguate that yourself. And you also can't just intercept all calls to that method because your superclass might depend on that for some functionality. So you basically have to intercept this method call and decide if you care about that change event. And if you don't, you pass it on to the superclass.
PETE:
So you just have like a big old switch statement?
BEN:
Yeah. And I end up breaking that out into -- so it's a big switch statement and each line of that is exactly one method call to some other method; it would be much nicer if we just had blocks for this. Anyway, so you have that, and you basically have one of those diagrams -- I forgot what you call it -- where you're waiting on a number of conditions to be true, and only when they are do you continue.
PETE:
A gate?
BEN:
Yes. So I have a gate to where it says, "Is my player item ready to play? Is my player ready to play?" and if so, I'm going to actually play the audio.
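A hedged Objective-C sketch of the KVO funnel and the "gate" Ben describes; playerItem, player, and attemptToPlay are illustrative names, not from the show.

```objc
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if ((object == self.playerItem || object == self.player) &&
        [keyPath isEqualToString:@"status"]) {
        [self attemptToPlay];
    } else {
        // Not one of ours: hand it to the superclass, which may depend on it.
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

- (void)attemptToPlay {
    // The "gate": only start playback once both the item and the player are ready.
    if (self.playerItem.status == AVPlayerItemStatusReadyToPlay &&
        self.player.status == AVPlayerStatusReadyToPlay) {
        [self.player play];
    }
}
```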
PETE:
It sounds like the kind of thing where you'd want like a state machine or something to help --
BEN:
Yeah. I've written this twice now. I wrote the iPhone app for Deli Radio; Deli Radio is a streaming radio player, and in it I ran into these problems. I created an initial version of this with AVAudioPlayer -- this is kind of a [laughs] long side story -- but AVAudioPlayer has an interface that I thought was really handy; like, you had a volume property on it. So that gives you, "Oh, if I want to scale the volume myself, I can just set the volume to .5 or animate the volume over a period of time, from 0 to 1." And so I handed AVAudioPlayer a URL, and it would fail most of the time; sometimes, it'd work. Later on, I found out by reading forums that AVAudioPlayer is only for local files; it doesn't work with remote files.
ROD:
It does work.
PETE:
I think, some of the time.
BEN:
Yeah, and I think even the docs say this now. But at the time, there was no mention of this in the docs. So then I needed to drop down to AVPlayer, which introduced a lot more complexity in my code. One of the ways I was able to hack around this to get it to work was by creating a mutable NSData and downloading the file myself - the remote file data. And then once I had sufficiently buffered enough bytes, which was some value that I guessed and put into a constant, once I had buffered a certain amount, I decided to hand the mutable NSData over to an AVAudioPlayer and say, "Hey, go ahead and play this. I hope I finished downloading it before you're done."
PETE:
Awesome.
[Chuck laughs]
BEN:
It worked for a little while. But obviously, it wasn't a long-term solution. So, AVPlayer was really the way to go that'll properly stream it for you.
CHUCK:
I have a couple of use cases that I want to throw out and just kind of see what you say.
BEN:
Okay.
CHUCK:
The first one is, I have this idea where -- I'm not going to explain the whole app -- but basically the idea is this, when you tap a button it says a particular word or something. And so I'm assuming I would just put the MP3 here or whatever into my app bundle and then when they download the app, they get all the audio with it. How would I hook that up to a button to make it to playback?
BEN:
So you would want to use AVAudioPlayer in that case because you have a local file URL and it's got a much simpler interface. You would create an AVAudioURL -- sorry, AVAudioPlayer -- and pass it the URL of your MP3 file. And then at some point, you would just call play on that instance and it would play the audio. I forget if there are notifications or if it's a delegate callback, but you can tell when the play started or finished.
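A minimal Objective-C sketch of Chuck's use case as Ben describes it; "word.mp3" and the audioPlayer property are placeholders, not from the show.

```objc
#import <AVFoundation/AVFoundation.h>

- (IBAction)wordButtonTapped:(id)sender {
    if (!self.audioPlayer) {
        // A bundled MP3; keep a strong reference so it isn't deallocated mid-playback.
        NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"word" withExtension:@"mp3"];
        NSError *error = nil;
        self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error];
    }
    [self.audioPlayer play];
}
```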
CHUCK:
Okay, that's good to know. The other use case is kind of the same; if I'm building this totally awesome game and I want certain elements in the game to make a noise when they appear or when they do something, it's the same thing; I just hook into AVAudioPlayer and kick it off?
BEN:
There are differences there. Usually for sound effects, if you think of like the Tapbots apps, they all have little click events and switches and things when you interact with their apps; those are sound effects and those are done via system sounds. I cannot remember the API off the top of my head, but it's something like create system sound with ID and you hand it a file to play. That is much more stringent on the file types that it will accept, so I think I had to create a CAF file.
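For reference, a rough Objective-C sketch of the System Sound Services call Ben is thinking of; "click.caf" is a placeholder file name.

```objc
#import <AudioToolbox/AudioToolbox.h>

- (void)playClickSound {
    NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"click" withExtension:@"caf"];
    SystemSoundID clickSound = 0;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &clickSound);
    AudioServicesPlaySystemSound(clickSound);
    // In a real app you'd keep the SystemSoundID around and eventually release it
    // with AudioServicesDisposeSystemSoundID().
}
```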
PETE:
I think that makes sense because you'd want it to react straight away, you don't want it to be decoded like figuring out [inaudible].
BEN:
Right, exactly. And it also has some bearing on the Audio Category that your app is in. So the Audio Category, there's a bunch of different categories. One of them is --
PETE:
Annoying.
BEN:
Yeah. [Laughter]
PETE:
You have to put in the "annoying" category first.
BEN:
One of them is "Playback". The app that I work on is a music player so the category is Playback. The reason they do it that way is that, if you open a music player and you hit play, it should play out of the speakers no matter what your mute switch is set to. At least, that's their definition of that category. And it makes sense to me, like if you're explicitly requesting some audio to be played, then it should play. However, game with sound effects should never play music if that audio switch is turned off. So I think those are some of the things that you just have to play nice with the audio category; you can set the audio category of your app when your app starts or when you begin first playing audio.
PETE:
Does that relate to -- as a user of an iPhone, some of the apps that I use, when they're playing, I can control the playback from the little double-tap home button menu thing, where you bring the menu bar up and then you go to the left --
BEN:
Yup!
PETE:
Is that related to that? Or, is that different?
BEN:
I think it is related, honestly. Some of them have two different volume settings: one for playback, and one for like regular system stuff.
PETE:
Yeah.
BEN:
So sometimes, like you launched Netflix and the volume will be all the way down, and then you wonder. Or, if you're playing a game and while the game is loading, you're turning the volume down.
PETE:
Yeah.
BEN:
But then the volume thing switches to another mode and then the volume is up again. I've had this happen, and you can kind of tell when they're switching the audio category, which again, switches that volume setting. I don't know off the top of my head all the various modes, but there's only 3 or 4 of them and you should just go look at the table that Apple provides and figure out which category your app should be in.
PETE:
I'm assuming you did this for Deli Radio. Is it tricky to kind of plug into the little remote control thing that's outside of the application? I'm doing really bad over the explaining --
BEN:
[laughs] Double-tap and swipe to the left and you get the volume controls and it shows the app icon of the currently playing app, or the app that last had the playback audio category, so sometimes you'll see our app, Deli Radio, or sometimes you'll see iTunes, depending on which one I last hit play on. So then when you hit Next Track or Previous Track, those actually get delivered to your app via, I think it's the UIWindow, and you will receive remote control events as if somebody were actually using the Apple TV remote against your app.
PETE:
Ahhh...
BEN:
That's exact same API.
PETE:
Okay. Because I've noticed that some apps do a good job of interfacing with that remote control API, I guess, and some of them do a really terrible job. Like some podcasting apps I've tried, it seems like that never worked, so I'd go over to that bit and try to play and it won't play, and then it will play and then die, or --
BEN:
Play/Pause should always work because the audio subsystem you're interacting with is a system-level component, so they can pause your audio for you. But Next Track, the OS has no idea what to do with your app if you want to go to the next track, so you have to implement Next Track and Previous Track yourself.
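A hedged Objective-C sketch of wiring up those remote control events in a view controller; togglePlayback and the skip helpers are illustrative names, not from the show.

```objc
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    [self becomeFirstResponder];
}

- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    if (event.type != UIEventTypeRemoteControl) return;
    switch (event.subtype) {
        case UIEventSubtypeRemoteControlTogglePlayPause:
        case UIEventSubtypeRemoteControlPlay:
        case UIEventSubtypeRemoteControlPause:
            [self togglePlayback];
            break;
        case UIEventSubtypeRemoteControlNextTrack:
            [self skipToNextTrack];
            break;
        case UIEventSubtypeRemoteControlPreviousTrack:
            [self skipToPreviousTrack];
            break;
        default:
            break;
    }
}
```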
PETE:
I think the time that I've seen it [inaudible] is when I'm streaming video from the interwebs, and then I'll leave my house so it goes from WiFi to 3G or something, and then some apps will -- I can't remember what app it was that did this -- but it would stop playing. And then when I tried to play again, it wouldn't play again. I'm guessing it was just some bug in one of the 55 different transitions that you were talking about going through.
BEN:
That's what I'm saying. I mentioned that I've rewritten this thing; we're on the third incarnation now. The first one was a botched AVAudioPlayer hack; the second one was a hot mess of code, it was 700 or 800 lines of code for a component with no UI.
[Pete chuckles]
I was not proud of it at all and eventually rewrote it to take advantage of AVQueuePlayer, which is a thin abstraction over AVPlayer, which you would think would manage a queue of audio like a playlist, but it doesn't - it doesn't. [Laughter]
BEN:
AVQueuePlayer, its only use is to queue up a track right after the current one, and it can manage when to buffer that next track. For what it is, that is handy. But what that means is if you want to hit Previous Track and either have that play the same track again starting at timecode 0 or go to the actual previous track, it means inserting the correct audio file, maybe even the same one, at the N+1 position and then calling next track on the queue player, if that makes sense.
PETE:
You can only go forward in time.
BEN:
Exactly. And so the queue doesn't represent our playlist; it only represents the current track and the next one, and it may get rewritten as you interact with it. Like if you hit previous track and you're halfway through the song, then I take the URL you're on and, actually, I think I just seek to 0 at that point. But if you're under 5 seconds in, then I want you to go to the actual previous track, so I will insert that track at the next position in the queue player. So now, that means that we have the current track and we have the previous track. So if you think of this as an index, we've got 0 as the current one, -1, the one before it, at array index 1, and track N+1 is at index 2 of the array [chuckles].
PETE:
Oh, man! I can just see...I can see in my mind the code and the comments in the code like, "The reason for this is because..." [Laughter]
BEN:
We actually only keep a queue of the current song and the next one, that's it. And, we wipe out the queue at any change.
PETE:
Yeah.
BEN:
After that's done, the track that we did have queued up at the next position is no longer valid because we're at the previous track. So, we just insert the track at the right position, we call "Next" on the queue player to start playing it, and then we remove that item and queue up the actual next item. Anyway, that is simpler now than it was before, but I'm still not super happy with the way the code turned out. I feel like there should be a better way. I think a state machine would probably make this a lot simpler.
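A rough Objective-C sketch of the queue juggling Ben describes for "previous track"; self.queuePlayer and the two URL parameters are illustrative names, not from the show.

```objc
- (void)goToPreviousTrack:(NSURL *)previousTrackURL thenQueue:(NSURL *)nextTrackURL {
    // Insert the previous track right after the current item, then advance to it.
    AVPlayerItem *previousItem = [AVPlayerItem playerItemWithURL:previousTrackURL];
    [self.queuePlayer insertItem:previousItem afterItem:self.queuePlayer.currentItem];
    [self.queuePlayer advanceToNextItem];

    // Anything still queued behind the new current item is stale now; drop it
    // and queue up the track that should actually come next.
    for (AVPlayerItem *item in [self.queuePlayer.items copy]) {
        if (item != self.queuePlayer.currentItem) {
            [self.queuePlayer removeItem:item];
        }
    }
    AVPlayerItem *nextItem = [AVPlayerItem playerItemWithURL:nextTrackURL];
    [self.queuePlayer insertItem:nextItem afterItem:self.queuePlayer.currentItem];
}
```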
PETE:
Yeah. I wouldn't be surprised if there's some kind of open source something around it. It seems like, like you said, it doesn't have a UI component, so it's kind of something that could be reusable by many different projects in theory. But it also sounds like sweating your way through all of this stuff is a non-trivial thing. And once you've kind of slogged your way through to the end, maybe the last thing you want to do is clean it up and open source it; you just [inaudible] and you move on.
BEN:
I have kind of mixed feelings on this. Like, our app is a music player, so I'd better damn well know how that works.
PETE:
Right.
BEN:
That's kind of my --
PETE:
Kind of your Core Competency.
BEN:
Yeah. And open sourcing it, I don't know. Like right now, it's not useful to anybody but us, even though it is decoupled from our application sufficiently that I could use it with a different UI if I wanted to. But it still kind of works the way we work. Like for instance, we need to know when somebody skips a song or when somebody plays a song through a certain percentage, so we can count it as a Play or a Skip. And we surface that information to the artist later on, so you can be like, "Oh, 95% of people listening to your song skip it right at this drum solo." [Laughter]
BEN:
So we need to have some sort of threshold of like how far along you are on the audio and those are all decoupled through notifications, but we still baked that notification into the player because that really didn't belong anywhere else. I didn't want to couple that to the ViewController, which may not be playing because we might be in the background.
PETE:
Right.
BEN:
So there's lots of that type of stuff that you have to deal with. Also, the audio system can crash; not your app crashing, but the audio system itself can fail and go haywire. So, you have a callback that you can -- I forget what it's called -- but you will listen for a notification that the audio system has failed. [Chuck laughs]
BEN:
I'll look it up here because I can't --
PETE:
Audio system did suck. [Laughter]
BEN:
And at that point, you can't trust any of your AVPlayers or Assets. Again, you've got to recreate them. So, we have a reset method in our player that will ditch the current set of things; like, it will save the timecode, ditch the current set of things, and then rebuild it and start playing from there.
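The notification Ben can't recall is likely the media services reset one; here's a hedged Objective-C sketch, with tearDownPlayer and rebuildPlayerAndSeekTo: as illustrative helper names.

```objc
- (void)registerForMediaServicesReset {
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(mediaServicesWereReset:)
               name:AVAudioSessionMediaServicesWereResetNotification
             object:nil];
}

- (void)mediaServicesWereReset:(NSNotification *)note {
    // Everything (players, items, session config) is invalid after a reset.
    CMTime resumeTime = self.player.currentItem.currentTime;  // save the timecode
    [self tearDownPlayer];
    [self rebuildPlayerAndSeekTo:resumeTime];
}
```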
PETE:
This episode is almost more depressing than the Mac Development episode. [laughter]
CHUCK:
I have to ask about the Deli Radio you're talking about. Basically, you go and get a music file that you're going to play, and then you get the next music file that you're going to play when you play the next song? So you're actually streaming it then.
BEN:
Correct. We just hand the URLs off to the AVQueuePlayer. And when it gets to a certain point in the song you're playing, it will begin buffering content for the song you're going to play next.
The idea is that, if I just listen to it continuously, I shouldn't ever hear a buffering gap.
CHUCK:
The thing that I'm wondering about there is, on the backend, I'm assuming there's some kind of security that you don't necessarily have to confirm or deny to keep people from just going and downloading the songs themselves.
BEN:
Right. You can just do the S3 thing, like Signed URLs, for a temporary period. But it is just HTTP, so if somebody gets access to the URL, they can grab the audio. If you wanted, there's something like Pandora where -- not even Pandora does this anymore, because I remember one time I was using an app that would sneakily grab the audio that Pandora was playing and save it as MP3s -- but you can encrypt the streams, and there's some support for encrypting HTTP Live Streaming feeds. However, you have to embed the keys somewhere, so you either need to hard code it in your app, in which case any savvy individual can just go grab it. Or, have a key exchange, like an HTTP call, where you say, "Okay, I am authenticated here, I'm running, give me my key please," and then when you turn off the app, it throws it away, maybe. I don't know --
CHUCK:
So it does support HTTP Basic auth and all that other stuff?
BEN:
There's a specific encryption scheme where the actual files themselves are encrypted. So if you were to pull them out of a cache somewhere, they wouldn't be useful to you without the key.
PETE:
Is there like DRM Support at all in any of these frameworks?
BEN:
I think that's the extent of what they provide. I know that Pandora has rolled their own stuff; like, they don't use AVPlayer at all, they use Core Audio. And Core Audio is -- what I mentioned with AVPlayer, they consider that a high-level framework. With Core Audio, you have to figure out the exact shape of the files yourself using C; you get a bunch of C calls to say, "What is the type of this file? How many bytes per packet? Let me do some math to figure out how many bytes per whatever. Is it big endian or little endian? And now let me create a buffer and let me start filling that buffer with audio from the network, and then I drain it on the other side with something that plays the audio." It's definitely low-level. If this stuff interests you at all, I really, really recommend either going to CocoaConf and seeing one of Chris Adamson's Core Audio workshops, or just buying his book, which is honestly hard to get through. I mean, you're reading walls of C code; it's really, really tough. [Laughter]
BEN:
You have to love the stuff.
PETE:
I remember doing stuff like that with DirectX back in the day, trying to figure out how big your buffer had to be for the frames --
BEN:
Yeah, exactly. And if you choose too large of a buffer, it can cause problems; if you choose too small of a buffer, it can cause problems. Of course, you don't want to waste people's memory on an iOS device because you may not have a lot of it. There's a lot of that type of stuff in Core Audio that you have to deal with. But at that point, you can shuffle bytes over the network, however fast or slow you want; you can do whatever you want with DRM.
PETE:
Is there support in any of these frameworks for kind of filtering or adding effects to audio, or adding effects to video, that kind of stuff?
BEN:
Yes. You can get frame-by-frame info, like pixel data, from an AVPlayerLayer, I think, so you could easily apply a filter to that. As far as audio goes, you can create an AVAudioMix using the higher level AVPlayer stuff; one of the things we've toyed around with doing is cross-fading songs. Actually, I do this -- if you push Next Track, instead of abruptly ending the track, I actually fade it out over a period of, I think, a quarter-second or half-second. The way I do that is with an AVAudioMix; picture a mixing board where, over time, you're ramping the volume down or up from a number. I have a method that ramps the volume down from 1 to 0 over a period of half a second, and then calls a block when it's finished. So you can do that basically by hardcoding the timecodes of when you want something to happen, so you can mix two audio streams together this way. If you wanted to do audio effects, I'm not sure of a way to do that with AVAudioPlayer, but I know you can do it with Core Audio. Core Audio's got a design similar to DirectX where you have components, and the components have inputs and outputs.
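A hedged Objective-C sketch of that fade-out using an AVAudioMix volume ramp; the half-second figure comes from Ben's description, the property names are illustrative, and this approach applies to file-based assets.

```objc
- (void)fadeOutCurrentItem {
    AVPlayerItem *item = self.player.currentItem;
    AVAssetTrack *audioTrack =
        [[item.asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    // Ramp from full volume to silence over half a second, starting now.
    CMTimeRange fadeRange = CMTimeRangeMake(item.currentTime, CMTimeMakeWithSeconds(0.5, 600));
    [params setVolumeRampFromStartVolume:1.0 toEndVolume:0.0 timeRange:fadeRange];

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    item.audioMix = mix;
}
```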
PETE:
Yeah.
BEN:
So then you can add like an Echo Component or a Noise Cancelling Component or whatever and you basically just wire them up together and make sure you wire one to the speaker. It's pretty interesting how that all works.
PETE:
Can you do that low-level stuff with Core Audio? Let's say I wanted to take an MP3...no, something that I'm streaming over live HTTP, kind of grab that, add some echo to it, but then pass that stream to the high-level API, to the AV stuff. Can I do that? Or, am I stuck doing everything --
BEN:
Yeah, I think so. I think you can. At some point, you're just taking bytes and shuffling them around and transforming them, so the format of the data matters. But they're just bytes, so instead of sending them to the speaker, you send it to an NSMutableData, which is kind of your own buffer, and then you could pass that to an AVAsset.
PETE:
Oh, okay.
BEN:
But if you're already dealing with Core Audio, you've already done most of the heavy lifting already; you might just want to play it with Core Audio.
CHUCK:
Pete wants to write an app that detects when you're in the bathroom and then adds echo. [Ben laughs]
PETE:
[Laughs] No, it's echo cancellation. I actually had a really cool idea once. I thought that I was going to be a millionaire.
BEN:
But don't say it on a podcast, man! [laughs]
PETE:
No, I researched it and it's not possible. I thought that I had this amazing idea. You know how the Jawbone does noise cancellation, well, Bose headphones do noise cancellation where they listen for the sound outside of the thing and cancel that out?
BEN:
Mm-hmm.
PETE:
I was walking along one day talking into the little mic next to my earphones and I was like, "Hey! You could record the audio from the mic, and that would be the background noise, and then you play it back in the earphones, and you've got like Bose-quality noise cancellation, but without any extra hardware." And then I looked into it, and because the mic is too far away from the source of the input and the output -- for noise cancellation they have to be millimeters away from each other or something. So, I gave that up. Or I just gave away my multimillion-dollar idea. [Laughter]
BEN:
It would be possible. I just think in practice, it might be a little bit not as good as the $200,000 --
PETE:
It's kind of the physics of it: there's a relationship between the microphone and the speaker and the wavelength of the thing you're trying to cancel out, and if one is not inside the other kind of thing, then you can't do it in any reasonable way, apparently.
CHUCK:
Oh, don't let Science stop you. Come on! [Laughter]
BEN:
Skype is doing that right now. If I didn't have headphones on, then the audio coming out of my speaker would also hit my microphone, and Skype is filtering that out with software.
PETE:
Oh, okay.
BEN:
"I know what Pete said just now, and if I see that waveform again, I'm going to cancel it out." It's probably more sophisticated than the way I described it. But otherwise, if you've tried to have an audio call, like a phone call, just with your Mac laptop speakers and mic with something other than Skype -- I've found that Skype gets the echo cancellation better than any other app so far. Google Hangouts is pretty good, but when I tried to do this with the old iChat video interface, it just never was great for me. I also use HipChat, which has its own video integration, and they don't do a good job of that either, so I just don't use it anymore.
PETE:
Interesting.
BEN:
There's one other curve ball that the audio system will throw you, and that's if a phone call comes in, you'll get an interruption, which is, again, a notification that you receive when the interruption starts and when it finishes. What would happen is, we'd get a phone call, my app would get a notification that, "Hey, you're about to be interrupted by some other audio," so I will pause the audio and make sure I save the position that the user is at. And then when the interruption finishes, I ask, "Was I playing before?" and if so, I'll go ahead and start playing. Without the begin notification of the interruption, you just receive the end notification and you don't know whether you should start playing or not. The reason why I noticed this is because in iOS 6, I think it was 6.0, they introduced a bug where that begin notification would never fire. So my app wouldn't know whether or not it was playing before the interruption. What would happen is, anytime any audio played whatsoever while my app was active, as soon as I stopped that or hung up the phone call, my app would start playing music, which is really annoying; like, if your alarm clock goes off at 5 in the morning and you turn it off and then we start playing music, you're like, "Wait a minute!" So we turned that feature off, to where an interruption just ends up cancelling the audio: unless we got that begin notification ahead of time, I'm not going to start playing audio automatically. I use Instacast for podcast listening, and they got that wrong for a really long time, where it would start playing the podcast that I was last listening to at any point in time when I got any interruption, even if I wasn't listening to it at the time. So it was a little frustrating. I think they've since fixed that bug, but yes, it's definitely a complex world.
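A hedged Objective-C sketch of that interruption handling; self.player and wasPlayingBeforeInterruption are illustrative names, not from the show.

```objc
- (void)registerForInterruptions {
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(handleInterruption:)
               name:AVAudioSessionInterruptionNotification
             object:nil];
}

- (void)handleInterruption:(NSNotification *)note {
    NSUInteger type = [note.userInfo[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        // Remember whether we were actually playing, then pause.
        self.wasPlayingBeforeInterruption = (self.player.rate > 0);
        [self.player pause];
    } else if (type == AVAudioSessionInterruptionTypeEnded && self.wasPlayingBeforeInterruption) {
        [self.player play];
    }
}
```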
PETE:
Lots of little gotchas, it sounds like.
CHUCK:
Mm-hmm.
BEN:
So have you guys done anything? Rod, have you done anything with audio or video before?
ROD:
No, just very basic stuff; just plain little system sound effects that you talked about.
PETE:
Actually, probably one of the biggest iOS apps I built was a media-browser-type thing that did slide shows of images, but it also had slide shows of video. That was a weird one because we ended up using -- I guess I mentioned this earlier -- we ended up using HTML for the layout of the slide shows. So all of the video playback we did was using the video tag, or whatever it is, in HTML5 with live streaming. We ran into a whole collection of edge cases pretty similar to the ones that Ben was going over, except I can't remember any of them now. I remember loads of stuff around states, like unexpected states, where we'd think we'd covered all of the cases and then there's this kind of case where you're in this state, and then this thing happens just before this other thing happens, and it's like, "Oh, we didn't think of that..." and debugging that kind of stuff is so hard.
BEN:
Oh god, yeah. I think one of those is high-latency or low-bandwidth network connections.
PETE:
Yup!
BEN:
And if I'm sitting in my office in Houston, I get great LTE reception, I get great WiFi, so things are pretty nice and in basically the best conditions they could be in. But the reality is, these devices, they go into elevators and into parking garages and tunnels and subways and whatever else, so some people just have really terrible connections. One of the things that really helps out is the Network Link Conditioner that's now present on iOS. So if you have --
ROD:
Hey, that's one of my picks!
BEN:
Oh, sorry! [Laughter]
BEN:
I think you have to enable developer mode, like use your phone for development inside of Xcode, before it shows up. But you've got this Network Link Conditioner, which will, you can say, simulate a really terrible 3G or Edge connection, and you can even customize them to say, "I want this percent packet loss and this total speed," and then you can just see how your app behaves in those conditions. But, remember to turn it off because otherwise, you're going to hate your phone. [laughter]
ROD:
Yup.
PETE:
We've had teams that are using that in CI; it's actually just a UI over the firewall tools that are built into OS X. IPFW, I think, is the underlying thing that does a bunch of stuff around firewalling, but it also allows you to simulate these kinds of lossy conditions, and you can drop down to the command-line interface and actually set that up during CI. So we used to launch our application programmatically in the simulator and then kind of programmatically simulate the network going down and check to see that it handled those kinds of conditions. It's a little bit of a hassle to get it set up, but once you've got it set up, it's actually pretty easy to do that kind of stuff.
CHUCK:
One thing that you guys talked about a minute ago was MP3s versus some other format, was that the AIFF format, A-I-F-F format?
ROD:
CAF.
CHUCK:
CAF?
BEN:
CAF is actually a container format so you can actually put whatever kind of encoded file you want inside of a CAF container. I think it stands for Core Audio File, but I'm not positive on that.
ROD:
I think so, too.
BEN:
But on iOS, the hardware and software are definitely geared towards working best with Apple's formats. AIFF is one of those; I think that's Apple Interchange File Format, I think [laughs]. And then there's AAC; a lot of people think that AAC is an Apple standard, but it's not; I think that's Advanced Audio Codec. AAC is probably the preferred one for music, and you can use 64 kilobit high-efficiency AAC for music. If you're streaming, that's going to be low bitrate, but pretty high quality in comparison. If you're using MP3, you're going to hinder your buffering capability or streaming capability on iOS. However, we have an Android app as well for Deli Radio. It doesn't stream the AACs very well at all; it streams MP3s much better, so we just have to provide both to our clients.
PETE:
Do you guys do that transcoding? Or, I guess maybe that's a business question that you don't have to answer, but do you do that yourselves? Or, do you use one those kind of cloud services to do the transcode?
BEN:
No, I wish we used the cloud services [chuckles]. [Pete laughs]
BEN:
But we do that ourselves, so we have an encoder that's a separate service that does all of this, and we have a bunch of output formats. If you're going to purchase the music, then you get the highest quality; there are low and high quality versions of AAC and MP3. When you're streaming on like 3G or LTE, then you get the lower quality version; if you're on WiFi, then you get the higher quality one.
PETE:
It's surprisingly -- well, it's not that surprising, I guess -- but it can be a lot of work if you've got some raw video and then you want to convert it to formats that are compatible with Web, Android, and iOS in like 4 different bitrates; that's a lot of --
BEN:
I did this for NSScreencast. For a long time, I just figured out the FFmpeg settings for encoding my videos into the right formats. It's funny because I thought that they worked, but merely working is not enough. Some of the settings can produce either quality-loss problems or file sizes that are just too big. I just had a hard time making sure that the settings were exactly perfect. Eventually, I just decided to let Zencoder handle it. What I like about that is, they do it probably 4x as fast because they do it in parallel. I have a Core i7 machine, but I'm still encoding one file at a time and it saturates the CPU for that time period. So it would take 20 minutes or so, maybe even longer, to encode all of the formats that I need.
BEN:
When I use the Zencoder, I upload one source format for them, that takes a long time, but then I can just pack up and go to work or do whatever else I need to do; I'm not tied to my computer and my internet connection for that entire time. And then they do it and they just send me an email when they're done and they put them exactly in the bucket on S3 that I need them to be in.
PETE:
That's the cool thing about it, I think. So you just kind of throw it up to a bucket on S3 and then they'll pick it up automatically, transcode it into the formats you want, dump it in another bucket, and then, if you're using CloudFront or something for a CDN, it's immediately available for people to kind of download or stream or whatever. And they support HTTP Live Streaming as one of the output formats, I believe.
BEN:
Oh, that's it! I forgot about that. I need to look into that as well because for the clients that support HTTP Live Streaming, that's so much better than progressive download of files.
PETE:
Yeah.
BEN:
For static videos like these, there's no reason not to use it especially if it's just a checkbox setting in the Zencoder.
PETE:
I think that's true, but we never actually used Zencoder. We looked into using it, but we didn't get that far.
BEN:
It's harder if you need to do -- like, we have dynamic playlists. Basically, Deli Radio is all about playing music near you that's playing live. So, Pete, if you ran it, you'd get a bunch of music playing in Berkeley, and there's a whole bunch of music in Berkeley. That's where our client is, actually. So, there's just tons of live music happening in San Francisco and in the East Bay, so you'll say, "Okay, what can I listen to this weekend?" and then you get a radio station that does just that. And since everybody's station is different, you can't create one big giant playlist in HTTP Live Streaming for that, so we'd have to have one "playlist" for each song.
PETE:
But you could do some kind of crazy stuff and have all of the actual audio, one copy of all the audio, and then write all of the metadata files kind of dynamically.
BEN:
Yeah. So I was thinking like a Rails end-point would dynamically give you the HTTP Live Streaming index.
PETE:
Yeah.
BEN:
Some of those things have been bounced around, but it's hard to beat the simplicity of pointing it at a file.
PETE:
Sure.
BEN:
And having an array of URLs, which represents a playlist. It just makes a lot of sense and it's easy.
PETE:
Fun fact for those of you who like arcane formats or trips down memory lane, the Live HTTP Streaming implementation is actually this huge hack on top of M3U, the M3U format. I don't know if you guys remember listening to like --
BEN:
SHOUTcast.
PETE:
Yeah, SHOUTcast back in the day, where you kind of get this list of MP3 files in this M3U playlist. So if you read the spec for HTTP Live Streaming, it's actually this horrific or awesome, depending on --
BEN:
Awesome? [Laughs]
PETE:
Yeah. [Laughter]
PETE:
Like a hack on top of those M3U [inaudible]. It's actually incredibly simple; the format of the metadata is literally like plain text space separated or something ridiculous. It's actually pretty reasonable to write those things yourself because it's pretty straightforward. Once you understand the hackery involved, it's actually pretty elegant and I like it.
CHUCK:
Cool!
ROD:
I like hacks!
CHUCK:
Well, I think we're about out of time. Are there any other things that we should have covered that we didn't, to hook people up?
BEN:
I think we got it. We may have scared everybody away from writing a music player. [laughter]
ROD:
Just so people are aware of other audio features that iOS provides, there's Core MIDI and there's also OpenAL, which is for game audio. And we didn't talk about recording, so I think that's another episode.
BEN:
Yeah. Just take a stroll down the AV Foundation Programming Guide and take a look at that. There's definitely recording audio and video using the AV asset recorder stuff; I've done a little bit of that.
PETE:
I'd kind of be more interested in all of the augmented reality kind of stuff, where you stick a mustache on someone in real time or whatever; I've always wanted to play around with that stuff, it looks really interesting. Maybe we should do another episode on --
CHUCK:
On recording audio and video?
PETE:
Yeah.
CHUCK:
I think it'd be worth doing.
PETE:
Yeah.
BEN:
Well, you can probably just smash it together: if you grab an image out of the live video feed, you take that image and you hand it to Core Image to do face detection, and you use face detection to position the mustache and then lay that on top of your video. It's going to be a little laggy, but it will work.
PETE:
I'm assuming! When RubyMotion first came out, the Ruby iOS framework, one of the first apps that they had in the app store was a mustachification technology.
BEN:
But that was just a picture.
PETE:
Oh, it was? I assumed it was live.
BEN:
I don't think it was live video.
PETE:
Oh, man! RubyMotion sucks, obviously. [Laughter]
PETE:
You know why? Because Ruby is not performant enough, that's how --
BEN:
I'm sure that's a pretty [inaudible] [Chuck laughs]
PETE:
That was me trolling; don't hate me, Ruby! I love you, Ruby!
CHUCK:
If you're wondering, the app is Mustachio.
BEN:
Yeah. That's actually by Laurent Sansonetti, the creator of RubyMotion. I guess that was his attempt to say, "Can an app get approved with this toolset?" and it did.
PETE:
That was his smoke test.
BEN:
Yup!
CHUCK:
Yup, works pretty good. Alright, well, let's get to the picks then. Rod, what are your picks?
ROD:
First, I'm going to pick an app that, whenever I want to record my own sounds and edit them or whatever, I use: an open source app called "Audacity", which works pretty well. My second pick is just going to be "Customers". I love them; I wish I had more! [Laughter]
BEN:
Amen!
CHUCK:
Yup. Alright, Ben, what are your picks?
BEN:
I have 4 that are completely unrelated to audio. I've been really interested in Redis lately and there is an older book by Karl Seguin called -- or Seguin, I don't know how to pronounce his last name -- it's called "The Little Redis Book". I think it's only 30 or 35 pages long. It's free, so you can download the EPUB, plop it on iBooks, and I read it in a couple of hours - I read really slow. Also, Redis is a complicated topic, but it is pretty interesting. Also, I looked at MMDrawerController, which is one of the many open source side-swiping Drawer NavigationController things. I've reviewed a bunch of these on NSScreencast a while back, and there's a lot of horrible ones and there's only a few that are really well done, and this is one of the ones that is really well done. And on the GitHub page for MMDrawerController, there's a link to install it on your phone. So if you go straight to that page on your iPhone, you click the button and it will create an enterprise build for your device and send it to you, which seems like they're bending some rules somewhere.
PETE:
They're going to get shut down, I'm sure --
BEN:
The thing that they linked to is called "MacBuildServer.com", and it's pretty awesome. You just hand it a GitHub repository and they will build an app for you and link it so you can install it on your phone. That's pretty crazy.
PETE:
Yeah.
BEN:
I'm certain that will get shut down eventually, but I think it's an interesting way to try out. Like MMDrawerController has a demo app you can install on your phone right now and see what it looks like or how it feels. And then lastly, I mentioned on the precall that I've been playing a lot of emulator games recently. So I'll pick "OpenEmu", which is an open source emulator sort of browser. It lets you have like a library of games and you can get Super Nintendo, Nintendo, Sega Genesis, Game Boy, tons of different formats and so you can enjoy all of the old games of your childhood.
CHUCK:
Awesome.
BEN:
And those are my picks!
CHUCK:
Alright, Pete, what are your picks?
PETE:
Every time we do the picks, as people are doing their picks, I change my mind about my picks and then I have too many picks. [Laughter]
PETE:
It's quite frustrating. First pick is a recently released application called "Reveal". I think last episode, or a couple of episodes ago, Ben picked an app called Spark Inspector; Reveal is very similar. I encourage folks listening to go to revealapp.com and check out the video; it's really really cool. Basically, it's kind of a visual inspector and debugger, and editor, for a live running application. So you link it up with your application and you fire up this Reveal app on your desktop and you can see all of the Layers and Views in your application, and you can not just see them, but you can edit them and kind of inspect the state of all of the properties of the Views. So, it's really cool technology. And as a disclaimer, I know the guys that wrote it in Australia; I met them at WWDC, I think a couple of years ago or a year ago, and they're good guys and it's a really really awesome app. I know they've been working on it for a while, so I'm really pleased that it's available for people to check out. In the spirit of self-promotion, my next pick is a tool called "Snap CI". This is actually something that was recently announced by ThoughtWorks Studios. ThoughtWorks, mainly we do consulting, but we also have a products division, and Snap CI is basically a stupidly easy way for you to get your Rails app deployed to Heroku. So you point it at your GitHub repo and it does all of the boring continuous integration stuff and just deploys the thing to Heroku. So, if you're doing your backend in Rails for your iOS app, then you might want to check it out. And then, inspired by Ben's MacBuildServer pick, I'm going to pick a tool called "Buildozer". I haven't actually used this, but again, I met the guys that built it at a conference and it's kind of cool, too. It basically builds your app in the cloud, so you give it your application source code and it will compile it and distribute it to your users. It's a similar kind of idea to MacBuildServer, but it's not open to everyone, so it's hopefully not going to get shut down by Apple, and I'm pretty confident that MacBuildServer will. And then my last pick is "ThinkGeek.com" because it's Father's Day. In fact, maybe this will come out after Father's Day, but Father's Day is coming up and ThinkGeek is a great source for geeky things to buy the father in your life.
CHUCK:
[laughs] Father's Day is the third Sunday?
ROD:
It's the 16th.
CHUCK:
Okay. So this will come out before Father's Day, just before Father's Day.
PETE:
So if you're panicking, go to thinkgeek.com.
CHUCK:
[laughs] Awesome. So my picks are, first off, I've been trying to build some habits, some good habits, and kick some bad habits. For example, I quit caffeine a couple of weeks ago; I was a Dr. Pepper addict, I still am a Dr. Pepper addict, I'm just recovering. I've also been trying to do some other things to get in the habit of doing them, and so I found this app, it's called "Commit", and you put it on your iPhone and then you just mark off every day that you've done it; you kind of build a streak of how long you've done it. Some of the stuff on there, I've done; I have 3 or 4 days in a row that I've done. And then some of the other ones, I'm just adding new things to it all the time. I try not to add too many things at once, so I add one habit at a time, every week or so. Anyway, it's really helped me be better about certain things like taking my medication or stuff like that. So, it's pretty good. And then the other two picks I have are for games that I've been playing on my iPhone lately. One of them is "Candy Crush Saga" and it's a fun game. It's kind of like Bejeweled, except there's kind of a move-along-to-the-next-levels sort of thing and it's kind of a mind-bending game. "Mini Golf MatchUp" is another one that I've been playing, with my brother primarily. It's a lot of fun, too; I take my turn and play through a hole and then he'll play the same hole, and whoever gets the most points wins. And finally, I'm going to pick one we were talking about before the show and I'll add it to my list here, and that is "Portal" - way fun game. If you haven't played it, go check it out. It's available on Steam, which is a game distribution network. You can get the Steam app for free, and then from there you can buy games, but Portal is free.
Anyway, I really enjoy that and Ben convinced me that I need to go and play Portal, too.
BEN:
Yes, go play it.
CHUCK:
Anyway, those are my picks. And we'll wrap this up, we'll catch you all next week!