073 iPhreaks Show - AV Foundation with Bob McCune

The panelists talk to Bob McCune about AV Foundation.

Show Notes

The panelists talk to Bob McCune about AV Foundation.

Transcript

 

[Work and learn from designers at Amazon and Quora, developers at SoundCloud and Heroku, and entrepreneurs like Patrick Ambron from BrandYourself. You can level up your design, dev and promotion skills at Level Up Con, taking place on October 8th and 9th in downtown Saratoga Springs, New York. Only two hours by train from New York City, this is the perfect place to enjoy early fall at Oktoberfest, while you mingle with industry pioneers, in a resort town in upstate New York. Get your tickets today at levelupcon.com. The space is extremely limited for this premium conference experience. Don’t delay! Check out levelupcon.com now]

[This episode of iPhreaks is brought to you, in part, by Postcards. Postcards is the simplest way to collect feedback from right inside your application. With just a simple gesture, anyone testing your app can send you a Postcard containing a screenshot of the app and some notes. It’s a great way to handle bug reports and feature requests from your clients. It takes 5 minutes to set up, and the first five postcards each month are free. Get started today by visiting www.postcard.es]

CHUCK:

Hey everybody and welcome to episode 73 of the iPhreaks Show. This week on our panel we have Jaim Zuber.

JAIM:

Hello, from Minneapolis.

CHUCK:

Alondo Brewington.

ALONDO:

Hello, from North Carolina.

CHUCK:

Pete Hodgson.

PETE:

Hello, from Yerba Buena.

CHUCK:

I’m Charles Max Wood from DevChat.tv and this week we have a special guest, Bob McCune.

BOB:

Hello.

CHUCK:

You want to introduce yourself really quickly, Bob?

BOB:

Sure, my name’s Bob McCune. I’m a software developer and an instructor from Minnesota. I started developing for Apple platforms at the end of 2007 and they’ve really been my primary focus ever since.

I also have a small consulting and training company called TapHarmonic, and I’m also the founder and group leader of the Minnesota CocoaHeads chapter, which I started about 6 years ago.

JAIM:

Yay. CocoaHeads!

BOB:

Yes.

CHUCK:

I know this guy from Minnesota. You might have heard of him.

BOB:

Yes. And most recently, I finished writing a new book for Addison-Wesley called Learning AV Foundation, which is all about how to use the AV Foundation framework for building media apps.

CHUCK:

Cool. To get started, do you want to give us a quick overview of what AV Foundation allows you to do?

BOB:

Sure. AV Foundation is Apple’s advanced Objective-C framework for working with time-based media on both iOS and Mac OS X, so it’s got a really broad and powerful feature set that lets you create a pretty wide variety of different applications for Apple platforms.

Now Apple of course has had a very long history with digital media and at the heart of their media strategy for the past twenty years has been this platform called QuickTime, which was certainly a revolutionary platform in its time. But when the iPhone was being developed, it clearly really wasn’t the right strategy going forward. So instead, they created this entirely new framework called AV Foundation to meet the needs of the iPhone as well as all future generation devices.

It’s a framework that’s deeply multithreaded and takes full advantage of multi-core hardware when available; it’s hardware-accelerated, and it’s additionally a 64-bit native framework as well. It provides all these great underpinnings for building audio and video applications – we can get more into the details of each of those capabilities, if you like.

CHUCK:

Yeah, I think that’d be very interesting. So it does audio and video?

BOB:

Yeah, it does – actually, some of the earliest classes in AV Foundation were audio-only features: AVAudioPlayer and AVAudioRecorder are some of the earlier classes. Interestingly, in iOS 8 and Yosemite, they pulled up some functionality you would previously have had to go to Core Audio to perform, and moved it into a whole new set of classes for doing more advanced audio processing.
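
A minimal sketch of those newer classes – presumably the AVAudioEngine family introduced in iOS 8 and Yosemite – playing a file through the engine; the resource name is a placeholder:

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: play a file through AVAudioEngine (iOS 8 / Yosemite).
// "loop.m4a" is a placeholder; keep the engine and player node alive
// (e.g., in properties) for as long as playback runs.
NSError *error = nil;
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"loop" withExtension:@"m4a"];
AVAudioFile *file = [[AVAudioFile alloc] initForReading:fileURL error:&error];

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
[engine attachNode:playerNode];

// Route the player node through the engine's main mixer to the hardware.
[engine connect:playerNode to:engine.mainMixerNode format:file.processingFormat];

if ([engine startAndReturnError:&error]) {
    [playerNode scheduleFile:file atTime:nil completionHandler:nil];
    [playerNode play];
}
```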

In addition to the audio capabilities, we’ve got basic media inspection and metadata capabilities for reading embedded metadata, as well as interesting things you might take advantage of during playback, such as duration or timing information. They’ve got very broad video playback capabilities, media capture for working with the cameras and built-in audio devices, a full suite of media editing capabilities, and some lower-level media processing features as well.

JAIM:

So you talked about this being a 64-bit library or framework. Does that limit which devices and which iOS versions this can run on?

BOB:

No, it’s really worked since the very first iPhone; it will just take advantage of 64-bit hardware when it’s available.

JAIM:

Okay.

CHUCK:

So has this been in iOS since the very beginning?

BOB:

Well, some early classes, like I mentioned – AVAudioPlayer I think came out in iOS 2.2, and AVAudioRecorder came out in iOS 3 – but it really wasn’t until iOS 4 that the framework exploded into what we know it to be today. That’s when they started adding video playback, capture, editing, and all of these capabilities. So it’s been around since iOS 4, and it was then brought to the Mac in OS X 10.7; now, as of OS X 10.9, they’ve officially deprecated the QuickTime framework, so AV Foundation is the default media framework on the Mac as well.

I think this is actually a really important point: now Apple’s investing all of their media engineering resources into a single framework. We’ve seen over the last couple of releases that some big changes have happened to the framework because of it, and I expect really good things going forward as well.

PETE:

If I was doing game development, is this the framework I’d use for that as well if I wanted to do sounds for games, or is there something else in the game development world?

BOB:

You certainly could. You’ve always had the option of going to lower-level frameworks like Core Audio or OpenGL, but AV Foundation – even just its basic audio playback functionality – has been around for a long time. Given the new capabilities they introduced in iOS 8, I think it’s certainly where you’d want to look now, because there’s both standard audio playback as well as things that would previously have been done using OpenAL for positional sound capabilities. Those things are all nicely within the framework now, in some fairly high-level and productive classes that you can use.

PETE:

What does “positional sound capabilities” mean?

BOB:

Say, you have 3-dimensional sound, so you can position a sound source behind the user’s head or in front or pan around as you want.

PETE:

How does that work?

BOB:

Using things that are way beyond me. [Laughter] OpenAL was really what was used for that previously. That way you can have – if you wanted to have a dragon flying by, you could hear it coming from the right and kind of swoop around to follow its movement. It certainly adds to the realism of certain types of games and other media applications.

PETE:

I’m assuming if I wanted to just add some sound effects to an app, like I wanted to make a mail client where it makes a whooshing noise every time I send an email, then this would be what I would use for that too, right?

BOB:

Yeah, you could certainly use AVAudioPlayer for that. For really short tones, you might even be able to use the system sound APIs, which are technically part of Core Audio. But if you prefer to stay in the Objective-C space, AVAudioPlayer will work very well for that.
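
A minimal sketch of the AVAudioPlayer approach; the sound file name is hypothetical:

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: play a short sound effect with AVAudioPlayer.
// "whoosh.caf" is a hypothetical resource name.
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"whoosh" withExtension:@"caf"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
[player prepareToPlay]; // preloads buffers so playback starts with low latency
[player play];
// Keep a strong reference to the player (e.g., in a property); otherwise
// ARC may deallocate it before the sound finishes.
```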

ALONDO:

So will I be able to access, say for instance, I have an app that lists song lyrics, and I’ve thought about adding a feature where you could actually listen to the song being played and maybe even move the lyrics up and down through the song as the song is playing at an appropriate rhythm or pace. Would that be something that I could accomplish with AV Foundation?

BOB:

Well, yeah, you certainly can access that data. Any embedded metadata that might be stored as part of the song is available – for lyrics in particular, there’s actually a lyrics property on one of the key classes. There’s a whole suite of metadata APIs through which you can access any of that embedded metadata.
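
The lyrics property Bob mentions lives on AVAsset. A minimal sketch, assuming songURL points at a local file with embedded metadata:

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: asynchronously load an asset's embedded lyrics.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:songURL options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"lyrics"] completionHandler:^{
    NSError *error = nil;
    if ([asset statusOfValueForKey:@"lyrics" error:&error] == AVKeyValueStatusLoaded) {
        NSLog(@"Lyrics: %@", asset.lyrics); // nil if the file has no lyrics metadata
    }
}];
```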

As far as sequencing that, and timing it with your playback, you’d need to do a little more work on your own to make that happen.

ALONDO:

OK.

PETE:

So, no vocoding [crosstalk 08:11]?

BOB:

Not yet. That’s actually one of the limitations of these new audio features: they have some basic Core Audio units built in, but it’s a fairly small set, so if you need to do more advanced audio processing, you may still need to go to Core Audio for that kind of stuff.

JAIM:

So do you see AV Foundation working alongside Core Audio, just handling different types of problems? Or do you see it replacing Core Audio?

BOB:

You know, it certainly can work well with Core Audio. Actually, maybe one thing I should explain: there are all these lower-level frameworks on both Mac and iOS – things like Core Audio, Core Video, Core Media, even Core Animation – and AV Foundation builds on top of them. Most of those lower-level frameworks are standard C frameworks, whereas AV Foundation is a higher-level Objective-C framework. That being said, you very often find yourself working with those low-level C frameworks, so if you wanted to work with Core Audio or Core Media directly, you certainly could; I think each one is probably suited to handling different problems.

But interestingly, with the introduction of these new audio classes, they were pulled up from capabilities you would have had to go to Core Audio for before. Apple’s also taken some functionality that was in one of the higher-level frameworks – the Media Player framework – and brought that into the AV Foundation realm for doing all kinds of video playback. It’s starting to lead me to believe that maybe Apple’s trying to make AV Foundation the central interface for all things media on the platform. I guess time will tell if that pans out, but given some of the moves they’ve made, I think that seems likely.

PETE:

OK. How deep down the audio stack can you go in AV Foundation before you have to [inaudible 09:59] into Core Audio? If you’re doing DSP, things like that – does AV Foundation give you that functionality?

BOB:

No, when you get into that level of stuff, you’re really going to [inaudible 10:09] deep down into the Core Audio framework. What they have brought is real-time scheduling of media – real-time scheduling of audio, I should say – so you do get real-time operations, and you do get some of these Core Audio unit wrappers they’ve brought up into the framework. So for fairly advanced things you can use these new features, but if you really need to get down to the DSP level, you’re probably going to be operating at the Core Audio level itself.

CHUCK:

So I guess my question is, does this include things like audio players or video players as well?

BOB:

Yeah, the framework certainly has those capabilities.

One of the new things that was introduced – it came to the Mac first, in OS X 10.9 – is this new framework called AVKit, and it’s now been brought to iOS as of iOS 8. What AVKit does for you, as far as video playback’s concerned, is provide a player – at least a player interface – that looks identical to the native player: either QuickTime Player on OS X or the built-in Videos app on iOS. What it does that MPMoviePlayerController – the video player in the Media Player framework – didn’t do is finally expose all of the AV Foundation underpinnings. So if you want to use some of the more advanced playback capabilities, you can do that while still very quickly building your user interface.
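
A minimal sketch of the AVKit route on iOS 8, assuming videoURL and a presenting view controller:

```objc
#import <AVKit/AVKit.h>
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: system player UI with full access to the underlying AVPlayer.
AVPlayerViewController *playerVC = [[AVPlayerViewController alloc] init];
playerVC.player = [AVPlayer playerWithURL:videoURL];
[self presentViewController:playerVC animated:YES completion:^{
    [playerVC.player play];
}];
```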

On the other hand, if you wanted to build an entirely custom UI for your player, the framework certainly provides you all of those capabilities. Actually, that’s specifically what we’re doing on the project I’m working on right now. That way, it gives you full control over the user interface and interaction, as well as the whole playback stack. I’m sorry, you were also asking about one other framework – what was that?

CHUCK:

No, I was just curious. So it gives you all of those capabilities like speeding it up or slowing it down or skipping 15 seconds or 30 seconds or some arbitrary number of seconds instead of just the basic player functions?

BOB:

Yes. There are actually two different classes that you work with frequently when you’re building a player. There’s AVPlayer, which is kind of the controller object for controlling the timing and playback of a media asset, and there’s also a thing called an AVPlayerItem, which is really the dynamic version of that media – it carries the presentation state, things like its timing.

So yeah, you're free to scrub through at any rate you want; you can skip to various points using these classes. And interestingly, the thing that they added in, I believe, iOS 7, is – as far as playing back at different rates – you can even set some different audio algorithms on the player so that you can either change the pitch as the timing speeds up or slows down, or you can keep it at the same pace.

It’s interesting – one common use of that might be for building some sort of transcription app, for recording a conference or a lecture or something like that. You want to slow the video playback down, but you don’t want the audio to also go into very Barry White territory. You can do that: slow the playback down, listen to things normally, and then transcribe from there. So you really have all the capabilities you could possibly need in these playback APIs.
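
A minimal sketch of the rate and pitch-algorithm APIs described above; lectureURL is hypothetical:

```objc
// Minimal sketch: half-speed playback that preserves the speakers' pitch.
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:lectureURL];
// Spectral keeps pitch constant as the rate changes; Varispeed would
// shift pitch along with the rate (the "Barry White" effect).
item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;

AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
player.rate = 0.5f; // setting a non-zero rate also starts playback
```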

JAIM:

That’s pretty cool. I mean, if you’ve ever tried to do any of those types of algorithms where you’re speeding things up and slowing things down, the math involved is non-trivial, so having that stuff available for us is pretty cool.

BOB:

Yeah, for sure.

JAIM:

[Crosstalk 13:40] in the back channel, we’re talking about auto-tuning. I was wondering if we could build an app to auto-tune this podcast, play it along with some music. How do we do that?

PETE:

We don’t need to auto-tune it; we’re pitch perfect.

JAIM:

Well we can play it along with different songs, you know.

PETE:

Okay.

BOB:

That is actually [inaudible 13:58] – personally, one of my favorite areas of the framework is the editing capabilities. If you wanted to build a podcast editing application, you certainly could do that; it allows you to assemble clips together in various tempos or arrangements, and you can easily trim, edit, position, and shift things around. You can add additional tracks for audio or sound effects – things like that. I think that’s a really fun and interesting capability found in the framework, and actually the last four chapters of the book deal specifically with that topic – how you make use of all of these editing capabilities.
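
A minimal sketch of assembling two clips with AVMutableComposition; clipA and clipB are assumed to be AVAssets loaded elsewhere:

```objc
// Minimal sketch: splice two clips end-to-end, non-destructively.
AVMutableComposition *composition = [AVMutableComposition composition];
NSError *error = nil;

[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, clipA.duration)
                     ofAsset:clipA
                      atTime:kCMTimeZero
                       error:&error];
[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, clipB.duration)
                     ofAsset:clipB
                      atTime:clipA.duration
                       error:&error];

// The composition is itself an AVAsset: play it with AVPlayer, or
// write it out with AVAssetExportSession.
AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetHighestQuality];
```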

PETE:

It sounds like you can almost build all of the capabilities that are in the Apple-supplied apps, like the video camera and iTunes or whatever; you can almost build all of that stuff out of AV Foundation if you wanted to.

BOB:

Yeah, you really can.

PETE:

It’s all in there.

BOB:

Yeah, you really can, and most of Apple’s apps are built on top of it. The built-in Videos app – that’s all AV Foundation. Even editing tools like iMovie for iOS – that’s built on top of AV Foundation. Even going up to some of Apple’s more advanced tools like Final Cut, big portions of that are built around AV Foundation.

It is nice that Apple provides us third-party developers the same capabilities that they have.

CHUCK:

Yeah, that is really nice. Being a podcaster, I talk to a lot of podcasters too, and there are definitely some things you’re describing that speak to me – features I’d eventually like to build into my own app for my own shows, and that I know several other podcasters are looking at for their shows.

I'm wondering, does it do audio or video recording as well or is that a different framework?

BOB:

You certainly can do that. They have a full suite of capture APIs, and that’s what all of these video recording applications – I guess I shouldn’t lump in all audio recording, but certainly video recording applications – are built on. This enables you to take advantage of the built-in hardware on your iOS devices, or, if you’re working on a Mac, potentially an external audio interface or an external camera attached to it. It provides you full control over the hardware, and you can capture things either directly to a movie file or an audio file, or you can additionally capture the individual audio samples and raw video frames as they’re coming off the hardware and do more interesting and advanced processing of that kind of stuff.

As a matter of fact, this is actually one area in particular where Apple has spent a lot of time investing over the years. Certainly at the majority of WWDCs over the past few years, they’ve always had a “What’s new in camera capture” session, and that also includes audio capture. Because these APIs in particular are so closely tied to the hardware, each time Apple introduces a new device there are new and interesting things in that area. As a matter of fact, they just released a big, long forum post on all the new capture APIs that were added for the iPhone 6 and iPhone 6 Plus, so it’s a very rapidly evolving part of the framework.

JAIM:

So what are some of the new capture formats?

BOB:

In terms of capture formats, you mean in terms of pixel sizes and things like that?

JAIM:

Yeah, what's coming up new?



BOB:

I can’t really speak to the iPhone 6 and iPhone 6 Plus specifics, but they’ve certainly added some really interesting new things as it relates to camera capture generally.

A few things that people have been asking for for a few years are manual controls over things like exposure and white balance, and they’ve finally brought those capabilities. So now, if you wanted to build a very advanced video or camera capture type of application, you’ve got those kinds of controls at your disposal. That was a big, welcome addition.

JAIM:

Yeah, very cool. I’ve worked on a number of apps that needed to capture video for one reason or another, like taking pictures of a check or something. This type of functionality is really helpful versus having to do it at a lower level.

CHUCK:

Does it capture – there's audio, video and image. Does it capture at all as .m4v or .aac? What kind of formats does it support?

BOB:

There are a few different output types – I guess there are three components to a capture scenario. You’ve got the central object, called a capture session – the AVCaptureSession – and this kind of acts as the central hub to which inputs and outputs are attached. You’ve got inputs – things like your camera devices or audio devices – and then you have a variety of different output types that you can send that data to.

You can use an AVCaptureMovieFileOutput, and this will record directly to a QuickTime movie using H.264 encoding and AAC for the audio. Or, if you want more control over the encoding yourself, you can actually grab the raw video frames and audio samples and then use an object called an AVAssetWriter. Using an AVAssetWriter provides you really complete control over how you go about encoding that audio and video and writing it out to disk. So you have quite a bit of control in this area.
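
A minimal sketch of that session/input/output graph, recording straight to a movie file; outputURL and the recording delegate are assumptions:

```objc
// Minimal sketch: camera and microphone in, QuickTime movie file out.
// `self` is assumed to conform to AVCaptureFileOutputRecordingDelegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *mic    = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
if ([session canAddInput:videoIn]) [session addInput:videoIn];
if ([session canAddInput:audioIn]) [session addInput:audioIn];

AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput]) [session addOutput:movieOutput];

[session startRunning];
[movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];
```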

PETE:

You mentioned the ability to do real-time filtering on an audio stream as it’s, I guess, moving from the sources through to the outputs – can you do the same thing with video? Could I apply a sepia effect in real-time to the video as it’s going through the stream and then show it to the user, or is that something I’d have to drop down to something lower level for?

BOB:

No, you definitely can do that, and that’s actually a very common use of the AV Foundation framework. Most of the advanced camera applications you find on the App Store make use of those things, so you can apply real-time effects using either Core Image or, probably more commonly, OpenGL ES, to do really any kind of advanced processing you want.

In the application, I've shown an example of mapping the images coming in off a camera onto the faces of a 3D rotating cube. I also have another one where I'm using Core Image to apply some real-time effects – sepia being one of those, if I recall.

PETE:

That’s cool. So I could do augmented reality-type things where I'm combining images coming from the camera with generated, rendered images and display that like augmented reality? I can do all that inside of AV Foundation?

BOB:

Yeah, definitely. I mean, certainly the heavy lifting in that equation is the stuff you’re doing in OpenGL, but it provides you access to that data, and it also provides nice bridging interfaces that very quickly get the data into OpenGL. It’s very fast and memory-efficient.

You’ve seen a number of interesting applications – I think it’s Sphero, the one that has the balls you can remotely control, if I recall?

CHUCK:

Yup.

PETE:

Yeah.

BOB:

The AR one, I believe, projects a character into whatever environment you’re in, and you can control that and move that thing around – but that’s most likely using a combination of AV Foundation and OpenGL.

JAIM:

So Bob, walk us through a little bit how an app like that would be set up. Let’s say we’re getting video capture and we’ve got some facial recognition algorithm so we can determine that someone’s face is on there. It wouldn’t have to be a specific person, but we can say, “Oh, it’s a face” so we can draw a moustache on the face or give him a weird hat. How would the different parts of this app fit together with AV Foundation?

BOB:

Since you bring up that specific use case – there’s a nice feature that’s been available for the last couple of releases of AV Foundation called AVCaptureMetadataOutput. One of the kinds of metadata it can capture is actually faces, so it’ll do real-time, hardware-accelerated face detection, and it’s just another output destination that you attach to your AVCaptureSession. You attach an AVCaptureMetadataOutput, and what it outputs as the data’s being captured are these things called AV metadata objects. These metadata objects contain things like the bounding rectangle for the face; they can additionally contain things like the roll – kind of the side-to-side rotation – as well as what’s called, I believe, the yaw angle – if I’m using these terms right – showing the user moving their head towards their shoulders. You get some very advanced capabilities when it comes to that.

The nice thing about AVCaptureMetadataOutput is that it’s not strictly limited to face detection; it really could be used for any kind of metadata Apple may want to enable in the future. One of the things they added in iOS 7 is the ability to do machine code detection – bar codes. There’s broad support for a bunch of different 1D symbologies – things like UPC codes and Code 39 – as well as three or four different 2D symbologies – things like Aztec and QR. You use that same metadata output interface for both face detection and barcode scanning, and certainly in the future Apple could add any more metadata types they may want.
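
A minimal sketch of wiring up AVCaptureMetadataOutput for faces and QR codes; `session` and the delegate class are assumed:

```objc
// Setup (e.g., inside a session-configuration method):
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([session canAddOutput:metadataOutput]) {
    [session addOutput:metadataOutput];
    // Types may only be set after the output joins the session, and must
    // appear in availableMetadataObjectTypes.
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace,
                                           AVMetadataObjectTypeQRCode];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
}

// AVCaptureMetadataOutputObjectsDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *object in metadataObjects) {
        if ([object isKindOfClass:[AVMetadataFaceObject class]]) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            NSLog(@"Face %ld, roll %f", (long)face.faceID, face.rollAngle);
        }
    }
}
```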

JAIM:

Okay, so Apple provides stock options like facial recognition or barcode scanning; it’ll recognize if there's a barcode. What if you want to have your own custom algorithm for detecting some random thing – a coffee mug or something?

BOB:

You certainly could do that, and I’ve seen people use open source detection software for that. In that case –.

JAIM:

OpenCV.

BOB:

Yeah, OpenCV, thank you. If you wanted to go that route, you certainly could. The same basic process would still be in place, but instead of using that metadata output, you would use what’s called an AVCaptureVideoDataOutput, which provides you the actual video frames as they’re being captured by the camera; then you can hand those off to OpenCV or any other custom algorithm that would analyze that data and [inaudible 24:47] you might need.

It’s nice that they provide some of these higher level interfaces and make it really simple if you want to do something basic, like simply writing a movie file out to disk, but if you need to do some of the lower-level processing, you’ve got full access to all those audio samples and video frames as well.
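
A minimal sketch of the AVCaptureVideoDataOutput route, assuming the same kind of running `session` as above:

```objc
// Setup: receive raw BGRA frames for custom analysis (OpenCV, etc.).
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.alwaysDiscardsLateVideoFrames = YES; // drop frames rather than lag
dispatch_queue_t frameQueue = dispatch_queue_create("video.frames", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:frameQueue];
if ([session canAddOutput:videoDataOutput]) [session addOutput:videoDataOutput];

// AVCaptureVideoDataOutputSampleBufferDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Hand pixelBuffer off to OpenCV or any custom analysis algorithm here.
}
```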

JAIM:

Oh, very cool.

PETE:

That’s actually something I was interested in. You kinda touched on the ability to plug in alternative stuff that doesn’t come in the box and get it to play nicely. Do you know of any other interesting examples of third-party libraries that plug into this – like open source libraries I could maybe download from CocoaPods to power up my AV Foundation?

BOB:

Well, one project I certainly recommend checking out in relation to camera capture is a framework by a guy named Brad Larson called GPUImage. What’s cool with GPUImage is that he’s doing really all of the heavy lifting for you, including having literally, I think, over a hundred different OpenGL ES shaders available to apply a variety of different effects. I think that framework’s really useful just taking it as is, but I also highly recommend that anyone who’s interested in how these APIs work dig into how he’s done this. He’s made it very simple to build a very advanced camera application, but like I said, that’s because he’s wrapping all of the hard details. That’s a great one to check out, for sure.

JAIM:

So another chapter in your book that I find interesting is mixing audio. Can you tell us a little bit about that functionality?

BOB:

Yeah, there are some basic audio mixing capabilities that have been built in really I think ever since iOS 4, and these are part of the larger editing capabilities that the framework provides.

Now, where this is really most useful is when you’re building multi-track compositions. There’s an object called an AVComposition which enables you to assemble video clips and audio clips and slice and dice those as you will. If you’re dealing with multiple audio tracks, you’re going to have certain audio signals competing for attention, so you might want to do things like fade audio in or out on a track. Or you might want to use a technique called ducking: you might have some background audio playing, but as a voice-over comes in, you want to drop that audio level down and hold it steady while the voice-over is going on, then ramp it back up when it’s done.



[Inaudible 27:25] on a given audio track, and it provides you some basic capabilities for doing basic audio mixing. There are also some more advanced real-time capabilities you can plug into as well, so if you wanted to do something more advanced with the audio, there are audio interfaces available for getting access to the actual underlying audio samples themselves.

You really do have quite a few different options at your disposal when performing audio mixing.
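
A minimal sketch of the ducking technique described above, using AVMutableAudioMix; musicTrack and voRange are assumed to exist elsewhere:

```objc
// Minimal sketch: duck background music under a voice-over.
// musicTrack is a composition audio track; voRange is the voice-over's range.
AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:musicTrack];

// Ramp down over one second as the voice-over starts...
[params setVolumeRampFromStartVolume:1.0 toEndVolume:0.3
                           timeRange:CMTimeRangeMake(voRange.start, CMTimeMake(1, 1))];
// ...the level holds at 0.3, then ramps back up when the voice-over ends.
[params setVolumeRampFromStartVolume:0.3 toEndVolume:1.0
                           timeRange:CMTimeRangeMake(CMTimeRangeGetEnd(voRange), CMTimeMake(1, 1))];

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[params];
// Apply via playerItem.audioMix for playback, or exportSession.audioMix for export.
```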

CHUCK:

Now related to that, I'm curious – can you create original compositions? For example, you want to make a theme for your game or something – theme music. Can you compose it and then have it play it?

BOB:

You certainly could. I mean, you can think of an AVComposition as very much like opening up a project in GarageBand or Logic, where you’ve got the ability to add multiple tracks of both audio and video. You can use that, and there are APIs to control the placement of those clips – to trim them, slice them, and dice them as need be. Like those tools, it’s all non-destructive, meaning you’re not actually altering the underlying media; you’re really just providing instructions for how those things should be presented and processed as a group.

CHUCK:

I'm a little more interested in, say – say I'm a musician. I want to kind of build a piano app or something so that as I play it, it records it; as I tap keys on the phone, it plays that out and records it. Can it do those kinds of things too? I'm pretty sure GarageBand does something like that, but I don’t remember exactly.

BOB:

Yeah, it certainly could now. That was something that really wouldn’t have been within its capabilities previously; it wasn’t until iOS 8 and Yosemite that these new, more advanced audio capabilities enabling real-time processing were made available. That’s really important, especially if you’re building an instrument or something like that – you can’t have much latency at all; with more than a few milliseconds of latency, the performance would be all wrong. But now that they’ve got these new real-time audio capabilities, you certainly could take advantage of AV Foundation to build that kind of an application.

CHUCK:

I see chapter 7 is using advanced capture features. Capturing seems pretty straightforward – you turn on your microphone, turn on your camera or whatever and you capture media. What are some of the advanced features that are available in AV Foundation?

BOB:

Well, the seventh chapter gets into some of the things we discussed, such as AVCaptureMetadataOutput for doing real-time face detection and barcode scanning; it also gets into some of those more advanced capture output types, such as AVCaptureVideoDataOutput, and how you go about processing the audio and video frames.

Another interesting thing that’s discussed in that particular chapter is how to take advantage of high frame rate video. That was introduced in iOS 7: at least on the iPhone 5 and 5s, you can do up to 60fps capture, or on the 5s maybe even 120fps. Why this is really nice is that it enables you to slow the video way down and do interesting slow-motion effects while still having very smooth, fluid motion. I think with the iPhone 6 and 6 Plus, didn’t they announce that you can do up to 240fps, I believe?

ALONDO:

Yes.

BOB:

So you can do some crazy, slow-motion effects as well.

But enabling that capability is a little awkward at this stage, which is why I wanted to cover in that chapter how you go about doing it.
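
A minimal sketch of that somewhat awkward setup: find a device format supporting the desired rate, then pin the frame duration to it:

```objc
// Minimal sketch: enable high frame rate capture on a device (iOS 7+).
static void ConfigureHighFrameRate(AVCaptureDevice *camera, Float64 desiredFPS) {
    for (AVCaptureDeviceFormat *format in camera.formats) {
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate >= desiredFPS) {
                NSError *error = nil;
                if ([camera lockForConfiguration:&error]) {
                    camera.activeFormat = format;
                    CMTime frameDuration = CMTimeMake(1, (int32_t)desiredFPS);
                    camera.activeVideoMinFrameDuration = frameDuration;
                    camera.activeVideoMaxFrameDuration = frameDuration;
                    [camera unlockForConfiguration];
                }
                return;
            }
        }
    }
}
```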

JAIM:

So what kinds of apps have you been building with AV Foundation?

BOB:

Well, most recently I’ve been working on an advanced streaming application, and I think it’s going to be kind of interesting. We’re definitely pushing the boundaries of what can be done with streaming, so I’m actually kind of excited to have this released out into the world, but we’ve still got a number of challenges to work through.

One of the things you’ll find with AV Foundation is that when you start pushing into some of the more esoteric areas of the framework, you’ll probably run into issues – we found a critical bug this summer that totally shut down one potential approach we were going to take. Like I said, when you get into those outlying capabilities, that’s where a whole lot of people haven’t typically gone – apparently including Apple – so you’ll encounter some interesting things in those areas.

JAIM:

Okay. Is this like streaming audio, streaming video?

BOB:

Streaming video. The application’s still in production, so I can’t really talk too much about the details of it, but one of the interesting things that we’re doing is we’re doing some pretty advanced animation and transition things you don’t typically see in a streaming application.

Getting all that to work was kinda challenging; that’s one of the areas in the editing APIs. They have some nice capabilities for doing transitions between video clips. You can do those using the built-in video compositing capabilities, which enable you to say, “I want to perform a crossfade,” or “I want to perform a push,” on these two video clips to transition seamlessly between them. You can also write your own custom video compositor, so you could pass that data off to OpenGL and do whatever kind of crazy processing you want.

But you can also incorporate Core Animation and have it nicely synchronized with your video playback, so that as you stop or rewind playback, all of these Core Animation effects stay nicely in sync with the video.
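
A minimal sketch of playback-synchronized Core Animation via AVSynchronizedLayer; playerItem and playerView are assumed:

```objc
#import <QuartzCore/QuartzCore.h>

// Animations hosted in an AVSynchronizedLayer run on the player item's
// timeline, so they pause, seek, and rewind with the video.
AVSynchronizedLayer *syncLayer =
    [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];

CALayer *titleLayer = [CALayer layer];
titleLayer.frame = CGRectMake(0.0, 0.0, 320.0, 60.0);
[syncLayer addSublayer:titleLayer];

CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @0.0;
fade.toValue   = @1.0;
fade.beginTime = 2.0;  // item time in seconds (use AVCoreAnimationBeginTimeAtZero for 0)
fade.duration  = 1.5;
fade.removedOnCompletion = NO; // keep the animation around for seeks and rewinds
[titleLayer addAnimation:fade forKey:@"fadeIn"];

[playerView.layer addSublayer:syncLayer];
```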

Trying to get some of that stuff working within a streaming application is challenging, which is why I think you don’t typically see those capabilities in most streaming applications. But we’re attempting to do it, and so far so good – though it’s not without its challenges.

JAIM:

Very cool. Now, one thing Apple has been kind of annoying about is real-time communication – live video, video chat, voice calling, that type of thing. They’ve got their FaceTime, but that’s not part of AV Foundation, and they’re not really playing with WebRTC. Do they give us any functionality for that type of function [inaudible 34:20]?

BOB:

Not directly within AV Foundation – well, I should couch that a little bit. One of the things they brought to iOS 8 is the Video Toolbox framework. This is the low-level compression and decompression engine that has always been present, but it’s something developers just didn’t have access to until now. What this enables you to do is actually get at the raw, compressed stream of data as it’s being sent over the wire.

There’s a new Core Animation class – I kinda forgot the name – and you can take that data and pass it off to that class for rendering onscreen. You can use the Video Toolbox’s compression and decompression sessions to both compress and decompress these raw bytes of data as they come over the stream, and I think the primary purpose for that is specifically for doing voice chat and real-time kinds of communication. It’s slightly related to AV Foundation, but it’s really a level lower.

JAIM:

Do you know what protocols are supported by that, or is it general?

BOB:

Yeah, I think it’s really a roll-your-own when you're going to that level.

JAIM:

Okay.

ALONDO:

When will the book be available?

BOB:

The tentative release date, I think, is the end of October, so I would expect late October or early November. I’m actually just finishing up some of the last edits this week, hopefully; at that point it’s in the publisher’s hands, but I think that’s when they have it slated for release.

JAIM:

How long have you been working on it?

BOB:

Too long.

CHUCK:

[Laughs] That’s what all the authors say – forever!

BOB:

You know, I went into it with my eyes wide open, knowing, “Yeah, it’s going to be a lot of work and it’s probably going to take me a while.” But as it turned out, it was a lot more work than I anticipated and took me a lot longer. Estimating how long it’s going to take you to write a book seems to be the same as estimating how long it will take to write some software: pick your best-guess estimate and then double it. That’s essentially what it was. I actually started working on it about this time last year, and it’s just a long process.

I’m really glad that I did write the book, because I think it’s a book that’s long overdue. AV Foundation is arguably one of the most important frameworks Apple has. If you look at the amazing number of applications – and really the quality of the applications – found in the Photo & Video section of the App Store, people are using it, but I don’t think it’s a framework that’s really well understood by the community at large. I think it’s important that we finally have a book on this topic, one that gives you a pretty comprehensive overview of the framework.

It also positions you well: I think once you understand all of the topics covered in the book, you’ll be able to explore additional features, and you’ll be well positioned for the changes coming in the next release, because the framework evolves frequently, and you see big new advances with each release of the operating system.

JAIM:

Do you have other books that you’ve written?

BOB:

I have not; this is my first.

JAIM:

Congratulations.

BOB:

Thank you. Yeah, and probably my last as well, so. I don’t know.

I think the one thing that I certainly learned from this is that I think it’s important to love writing. I like to write; I don’t necessarily love to write, so taking on a project of this scope is probably not something I would do again. I think it’s entirely likely I’ll maybe be a co-author or do something smaller that maybe I’d publish independently or something like that. But as far as taking on a big project like this again – probably not going to happen.

JAIM:

I think every author says that after they're done with the book [chuckling] then six months later, they're like, “Well, maybe.”

BOB:

Yeah, well you know, it’s funny because this summer was really hectic because I had a bunch of work to get done by the end of July. Had you asked me, “Would you ever want to do this again?” in July, I would have said, “No chance.” But even now, being a couple of months removed from it I'm thinking, “Maybe I'd do something in the future.”

I really do have a passion for teaching. I mean, that’s been something that I've done a lot throughout my career. I like giving presentations and things like that, but I think there might be some more effective ways for getting the same kind of information out without necessarily writing a book. So, we’ll see.

CHUCK:

Alright, well should we get to the picks?

PETE:

Sure.

JAIM:

Let’s do it!

CHUCK:

Alright. Alondo, what are your picks?

ALONDO:

I have two picks this week. My first pick is a podcast episode of Justin Williams’ Cocoa Radio podcast. He had Jay Graves on, talking about Provisioning Profiles, and those have been sort of the bane of my existence for the past week, so it was very timely to hear.

JAIM:

Only a week [chuckles]?

ALONDO:

Well, I mean I don’t usually have to deal with them until a client needs something done, or somebody needs something done. I was doing a project for somebody and they said, “Hey, can you do me a favor and get this thing updated?” and I'm like, “Sure!” [Chuckles] So there's that.

[Inaudible 39:38] helpfully know things about them so I can help them with homework. So my next pick is a book called Basic Physics. It’s a self-teaching guide by Karl Kuhn, part of a self-teaching series on Amazon, and it’s a great way to go back through and pick up all the things that I missed when I skipped class in college. It’s a good read. Those are my picks.

CHUCK:

Very cool. Pete, what are your picks?

PETE:

My first pick this week is a free conference. If you are in the Bay Area, there's a conference – or I guess it’s called a Code Camp. Silicon Valley Code Camp has been running for, I don’t know, maybe seven or eight years at this point and they get a lot of people because it’s free. It’s over the weekend of October 11th-12th. There are currently 2,470 people registered to attend, and 227 sessions, so that’s a lot. A lot of people, a lot of stuff to learn; I'm going to be speaking there as well, so that’s reason enough to attend I'm sure. So Silicon Valley Code Camp, October 11th and 12th in Los Altos Hills, California.

My next pick is, this really interesting news story broke a few days ago about hacking your phone’s gyroscope and turning it into a microphone. These crazy security researchers at Stanford, I believe, figured out that the gyroscope on our phones – the little, embedded gyroscope – is sensitive enough that it can actually detect vibrations from you speaking and snoop on you. At least for Android phones, you don’t need any permissions in order to access the gyroscope, so kind of creepy spy technology but also a pretty cool, innovative use of the gyroscope. So yeah, that's an interesting read.

My last pick is a beer pick, and I’m going to pick Ballast Point Sculpin IPA. It’s one of my favorite IPAs at the moment, from Ballast Point down in sunny San Diego. So yeah, if you’re a fan of IPAs, check out Ballast Point Sculpin. And those are my picks.

JAIM:

Pete, I'm impressed. You picked a beer I can pick up here.

PETE:

Oh yes, sorry. Normally I’m super – next week I’ll try for something incredibly hipster and artisanal and locally sourced, and make you feel bad. Alright.

JAIM:

But +1. A solid IPA.

CHUCK:

Jaim, what are your picks?

JAIM:

We talked a little bit about real-time streaming and voice chat, and Apple doesn’t really give us a very good tool for doing that, even though there’s a solid standard that people are starting to use – WebRTC. Apple’s not really playing well with that, and they don’t give us access to FaceTime.

I was able to do some work on an app that needed voice chat, and they used the TokBox API. It’s pretty solid and pretty easy to put together, and it handles a lot of things that are a pain, because if you try to compile the WebRTC source for Apple platforms, it’s a nightmare – just a big pile of C and C++ code. There are some GitHub repositories where it’s been done, but putting that into a project is kind of a mess. I enjoyed working with the TokBox API; they handle a lot of the gritty details so I didn’t have to.

My second pick is – it’s about that time of the year. Oktoberfest beers are sweeping the nation, so grab yourself one of the German Munich Oktoberfest beers. It’s a great style of beer, a little heavier than your normal lager – Paulaner, Hacker-Pschorr; Spaten is a good one. Some of the American breweries do a decent one, but the Germans are really where it’s at. Grab yourself an Oktoberfest. Those are my picks.

PETE:

You always out-pick me on the beers. You're like my archnemesis of picking [crosstalk 43:22].

JAIM:

How do I out-pick you? Everyone wins with beer picks.

PETE:

That’s true, yeah. It’s – a candle loses nothing by sharing its flame.

JAIM:

Here we go. Candles and beer picks.

CHUCK:

Alright. Well, I’ve got a couple of picks here. The first one is called Boomerang, and it’s a plugin for Gmail. If you listen to my other shows, I’m sorry – I picked it on them too. I’ve just really been excited about it, because it allows you to have conversation threads or email threads come back to the top of your inbox. So if I’m emailing somebody and I want to check in with them in a week if they haven’t emailed me back, it’s really nice for that, or if I just need to check in with them anyway. People get busy a lot of times, and as a freelancer I really need to do better at following up, and it’s been a super thing for that, because it reminds me to get back with people about stuff. So “get back with me in a month” or whatever is now handled. Anyway, that’s my one pick.

And then I’ve been reading Think and Grow Rich by Napoleon Hill. If you haven’t read that book, it’s awesome. Anyway, those are my picks. Bob, what are your picks?

BOB:

Well, I was trying to find a great media-related application and well, just frankly couldn’t come up with one. Instead, I thought I'd go with the writing side of it.

When I started writing the book, I looked around for a variety of different tools that would best meet my needs, because ultimately the chapters had to be submitted in Word format, and there was zero chance I was going to work in Word. Instead, I wanted to find a nice Markdown editor; lots of tools support that, but they’re really just plain editors, and I didn’t want to simply manage sheets of Markdown.

Instead, I found this tool called Ulysses, and it’s a fantastic Markdown-based writing tool. It’s very lightweight; it provides all of the great things you love about Markdown, but it also provides some essential project management and writing management features. One of the really cool things you can do with it is apply style sheets, so you can output as PDF, Word, or HTML and have full control over how that output is presented. I used that a lot, and it made it really easy to take my Markdown text, move it into Word, and very quickly move on.

And then I guess my one other pick is actually a guitar pick – not an actual guitar pick. For the guitarists out there: I own tons and tons of different guitar plugins for Logic, and there’s a great one that a lot of people don’t know about called Scuffham S-Gear. What’s kind of unique about it is that it models some unique boutique amps, but what it does differently than others is capture a lot of the little subtleties that many of the others don’t. I think it’s really one of the best-sounding ones out there.

The cool thing is, too, it’s only a hundred bucks, which, in comparison to some of its competitors, is actually quite cheap. You can check that out at scuffhamamps.com.

JAIM:

At a hundred bucks compared to a vintage tube amp – a pretty good deal.

BOB:

Yeah, exactly.

CHUCK:

Awesome, very cool. Thanks for coming on the show; we really appreciate you taking the time. This is a topic that I am super interested in, so I'm probably going to pester you with emails and stuff. If anyone else wants to, what's the best way to get a hold of you?

BOB:

You can find me either on Twitter – I'm @bobmccune on Twitter, or at bobmccune.com, my website. You can contact me there. But certainly, I'm happy to help out wherever I can. This is a really fun topic; I think this is a fun framework, and I'm glad I'm the one who finally has a book dedicated to learning how to use AV Foundation and hope it will help a lot of people out.

CHUCK:

Yeah, definitely.

BOB:

One thing I should mention about the book: right now, if you’re interested in preordering, you can use mccune1808 as a preorder discount code to get 35% off. If you go to my website, bobmccune.com, one of the top posts you’ll find there has some details about the book, as well as a link to the publisher’s website if you want to preorder.

CHUCK:

Awesome. Alright, well I don’t think there's anything else, so we’ll wrap this up. Thanks for listening and we’ll catch you all next week!

[This episode is sponsored by MadGlory. You've been building software for a long time and sometimes it gets a little overwhelming. Work piles up, hiring sucks and it's hard to get projects out the door. Check out MadGlory. They're a small shop with experience shipping big products. They're smart, dedicated, will augment your team and work as hard as you do. Find them online at MadGlory.com or on Twitter @MadGlory.]

[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]

[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit cachefly.com to learn more]
