[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York and L.A. bid on iOS developers, providing them with salary and equity upfront. The average iOS developer gets an average of 5-15 introductory offers and an average salary offer of $130,000/year. Users can either accept an offer and go right into interviewing with a company or deny them without any continuing obligations. It’s totally free for users, and when you're hired they also give you a $2,000 signing bonus as a thank you for using them. But if you use the iPhreaks link, you’ll get a $4,000 bonus instead. Finally, if you're not looking for a job but know someone who is, you can refer them on Hired and get a $1,337 bonus as thanks after the job. Go sign up at Hired.com/iphreaks]
CHUCK:
Hey everybody and welcome to episode 129 of the iPhreaks Show. This week on our panel we have Alondo Brewington.
ALONDO:
Hello, from North Carolina.
CHUCK:
Andrew Madsen.
ANDREW:
Hello, from Salt Lake City.
CHUCK:
Jaim Zuber.
JAIM:
Hello, from Minneapolis.
CHUCK:
I’m Charles Max Wood from Devchat.tv and this week we’re going to be talking about WebRTC. Jaim, you seem to be the expert on this. Do you want to explain what WebRTC is and why iOS people should care about it?
JAIM:
Yeah, I don’t know if expert’s quite the right word, but I’ve made it a little bit farther out of the jungle.
So WebRTC is a peer-to-peer video protocol. If you have an application that wants to share video between its users, or just stream something else – the couple of apps I’ve worked on that use it generally communicate video or audio between two different users. Now, WebRTC is something that was started by Google, and right now Google maintains it. It’s in the Chrome browser, so if you’re doing a web app, you’ve got all the stuff in there that you need to do it. If you’re doing it for iOS, it’s a completely different setup; it’s pretty confusing and there aren’t a whole lot of resources out there, at least documentation. There are some scattered things, and some things that are a few years old, but getting started is pretty confusing.
CHUCK:
Yeah. My understanding of WebRTC is that it’s not just video but any data that can be sent between peers. And essentially what happens is, if somebody knows of a server they can talk to to get a list of some of the peers, then they can start talking to those peers, sending them data and receiving data from them as part of the peer-to-peer protocol.
JAIM:
True, yes. You can also do peer to peer data transfer. That’s also part of it as well.
CHUCK:
One of the applications that I see this being used a lot for, though, is video, and that’s what Jaim is talking about. I went to Podcast Movement in July or August – somewhere in the summer – and there were a couple of companies there that were doing basically peer-to-peer calls, like Google Hangouts, and it would record your end of the call on the server. So the server was one of the peers in WebRTC, and then everybody else was also a peer, and so it would send your video to them and to the server, and it would send their video to you and the server. Then you could see and talk to everybody, and it would upload all of the video and audio files when you were done. So that’s just one example of a real world thing that people are actually doing with it. I think Google Hangouts has some – I don’t know to what degree, but I know that some of Google Hangouts actually runs over WebRTC.
ALONDO:
So just to be clear, this is an – is this an alternative to just using some of the existing Bluetooth connection or something like that for transferring data from one peer to another? Is there some distinct advantage with WebRTC?
CHUCK:
So WebRTC is designed to run over HTTP, over the internet. So in that sense, it doesn’t have the location restrictions that Bluetooth is going to have.
JAIM:
Unfortunately, WebRTC is definitely much more at home in the Google/Android world. If you have an Android device, it works out of the box, even in the browser. Apple has not exposed that or allowed it in mobile Safari at all, so we’re kind of out of luck. And Apple hasn’t really given us any access to any FaceTime functionality, which is their own little [inaudible] of similar technology for video and audio sharing, peer to peer. We’re in a weird spot, but there are ways to get through it.
CHUCK:
The other thing is, with FaceTime you’re limited to devices that support FaceTime, which are usually Apple devices; and it’d be nice to be able to talk to the rest of the world, who aren’t Apple people.
JAIM:
There are other people that aren’t Apple people?
CHUCK:
I know. Unfortunate but true.
JAIM:
Weird.
ANDREW:
Well, it reminds me: when Apple announced FaceTime, they said they were going to release it as an open standard that anyone could implement, and then that just silently died – they never did it.
JAIM:
Someday. Someday all these WebRTC apps are going to get sherlocked by that upcoming FaceTime standard, but we’ll see. It wasn’t this year.
CHUCK:
So if I’m going to actually put WebRTC into an application, what are the steps for that?
JAIM:
So there’s a number of approaches you can take. At a basic level, there are browsers you can download and use on iOS that will support this. There’s the Bowser project, which is done by OpenWebRTC – an open source, alternative implementation of the WebRTC spec. They’ve got Bowser; you can download it and you can do WebRTC things on your iPhone. So if there’s a web application that you need WebRTC for, you can just integrate Bowser into what you’re doing and go from there. That’s the most basic level: you just download a specific browser that’s not mobile Safari that actually supports this, and you can go from there.
If you want to do it yourself, Google has their libraries that you can build. They’ve got native iOS libraries that you can use. That’s tricky, and there are blog posts out there about how to avoid wasting ten hours of your life building WebRTC [chuckles]. Building complex multi-platform C++ code bases tends to be a little difficult at times. So if you’re a [inaudible] for punishment, you can actually download the source and run the build yourself.
Luckily, there are people that are also doing things like that. It’s not very obvious, because if you search for WebRTC iOS, the links you actually want to look at if you’re doing a native application are a bit far down in Google. But I would check out the Pristine IO team; they’ve got pre-built libraries, they have them for various versions, and they’re [inaudible] like CocoaPods.
It took a while to find that because it’s not the first thing in the Google search results. The ones that are above it tend to be outdated: they got it building at one time and moved on to something else while the specs changed, and their sample apps no longer work with the main server. Because there’s a main server that you can build an app against, get it running, get into a room and actually verify that it works on your device. But if you go with one of the older builds, they’ve changed the server interface, so that no longer works. So I just saved you about four or five hours right there.
CHUCK:
Thank you! [Laughter]
JAIM:
But go to Pristine IO, the WebRTC build scripts. If you want to get up and running quickly with CocoaPods, you just get in there, put in your pods and go from there. That’s the simple answer.
That’s still a difficult implementation because you handle signaling – which we’ll probably get into a little bit later – with all the back and forth, if you want to set up your own application. And there are services that do this for you. If something’s hard on the internet, people will do it for you and charge you for it, so there are some service-based companies that will do similar implementations on top of WebRTC.
For the first peer-to-peer video app I worked on, sometime last year, we just used the OpenTok library from TokBox. That worked really well; I had peer-to-peer video running in an iOS app in probably 8 to 12 hours, fully working. So that’s a good implementation, at least for developers who want to get things up and running quickly. They do charge, so if this is actually a product and you want to build it out a lot, you might want to go the more hands-on way, but OpenTok is definitely a sound approach if you don’t want to get mired in the weeds of implementing everything for WebRTC.
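For listeners who want to try the CocoaPods route Jaim mentions, the setup is roughly a one-line Podfile entry. The pod name below is the one the Pristine IO build scripts historically published (`libjingle_peerconnection`) and the target name is a placeholder – treat both as assumptions and check the pristineio/webrtc-build-scripts README for the current name and versions.

```ruby
# Minimal Podfile sketch. The pod name is the historical Pristine IO
# build; verify it against the project's README before relying on it.
platform :ios, '8.0'

target 'MyVideoApp' do   # 'MyVideoApp' is a placeholder target name
  pod 'libjingle_peerconnection'
end
```

Then `pod install` and opening the generated workspace works the same as for any other pod.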
CHUCK:
So I’m curious before we get too much further into the weeds, what were you building this for, Jaim?
JAIM:
Not going to get too specific, but these were for previous clients. They were both essentially ways for iOS users to communicate with other people over video and audio. Someone across the world needs some tech support; we would create an app where they could reach a person on their iPad or iPhone, and they could talk to each other and manage the process a little bit more than you would with just FaceTime, so it’s kind of a glorified tech support app.
CHUCK:
Huh, that’s really cool. Yeah, the apps I was talking about in particular are PodClear, which was actually acquired by blab.im, so I don’t know if they’re actually doing podcast stuff anymore. The other one was Zencastr, and that’s ‘zencastr’ with no ‘e’ before the ‘r’ in ‘castr’.
JAIM:
Okay, I’m not familiar with those.
CHUCK:
But yeah, essentially they are. They’re just video systems. I know some of the other video systems out there also use WebRTC for a lot of the stuff that they’re doing. So it is a standard that’s being picked up by more and more people.
JAIM:
Definitely, it’s out there, but you really want to think about having a [inaudible] part of your business be dependent on it before you get into it, because there are a lot of edge cases; there’s not a lot of help out there on Stack Overflow, so there aren’t a whole lot of people doing this. But there are some, so you can definitely find what you need if you dig – but you’re going to do a lot of digging.
CHUCK:
So what are some of the gotchas that you're going to run into just to get it up and running?
JAIM:
So one problem I spent almost a week debugging: we had a version working that actually worked in the iOS simulator, and it worked when we [inaudible] to have a test web application that did everything right – the same JavaScript backend that one of the devices we were going to run against was using. But we had different versions of WebRTC, and they didn’t talk to each other. The connections just sadly failed, so you turn on verbose mode and dig through tons and tons of error messages; something was erroring out in the SSL layer, trying to get a handshake with DTLS.
This is the type of stuff you’re going to have to figure out. To step back a little bit: this was a problem that only happened with some devices – it worked on an iPhone 4s, worked on iOS 8 and iOS 9, did not work on my 5s for whatever reason. It was like, “Oh no, is this a 64-bit thing?” Possibly, but it turns out the WebRTC library changed their SSL library at one point, and the two versions didn’t talk to each other, so we rolled back to a different version. That’s a painful one.
Another set of gotchas is just figuring out how to set up the endpoint. You start by opening a session: you make a request – the libraries will make the request for you – then you’ll start getting back all these ICE candidates. ICE stands for interactive connectivity establishment; you can Google it and look it up. The reason it exists is that we’re dealing with peer-to-peer connections: I’ve got my mobile device at home connected to my WiFi, so it’s behind a NAT – there are a number of devices behind the same IP address – and someone else might be behind some firewall at a corporation.
So the ICE candidate process just spits out a bunch of different ways for two devices to talk to each other; you send those out, and if they find one that matches, they try to pick the best match and go with that. If that’s accepted, then you have a connection. But you have to understand the whole workflow and get it set up with your server infrastructure – you probably can’t [inaudible] ICE candidates directly to the device, because you don’t know how to talk to it yet. You’ve got some server in the middle that knows how to talk to the device; HTTP or web sockets are pretty standard for WebRTC implementations. But you have to walk through the whole process of sending the request, getting the ICE candidates and passing them into your library. At that point, if everything is done correctly and they have all the information, then it works – you have a successful connection and you can actually start streaming video.
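The candidate matching Jaim walks through can be sketched as a toy model. Everything below – the `Candidate` type, the preference numbers, the `bestPair` function – is invented for illustration; the real library emits its own candidate objects and runs the connectivity checks internally. It only shows the idea: each side produces several possible paths (direct host address, server-reflexive address discovered through the NAT, relayed address), and the cheapest workable pairing wins.

```swift
// Toy model of ICE candidate pairing — NOT the real WebRTC API.
struct Candidate {
    let peer: String
    let kind: String      // "host", "srflx" (server-reflexive), or "relay"
    // Lower-cost paths are preferred: direct > reflexive > relayed.
    var preference: Int {
        switch kind {
        case "host":  return 2
        case "srflx": return 1
        default:      return 0   // "relay"
        }
    }
}

// Pair every local candidate with every remote one and keep the pair
// with the highest combined preference, roughly as ICE's checklist does.
func bestPair(local: [Candidate], remote: [Candidate]) -> (Candidate, Candidate)? {
    var best: (Candidate, Candidate)? = nil
    var bestScore = -1
    for l in local {
        for r in remote {
            let score = l.preference + r.preference
            if score > bestScore {
                bestScore = score
                best = (l, r)
            }
        }
    }
    return best
}

let local  = [Candidate(peer: "A", kind: "relay"), Candidate(peer: "A", kind: "host")]
let remote = [Candidate(peer: "B", kind: "srflx")]
let pair = bestPair(local: local, remote: remote)
print(pair!.0.kind, pair!.1.kind)   // host srflx
```

In a real app, the candidates arrive asynchronously over your signaling channel and get fed straight into the library rather than paired by hand.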
CHUCK:
So basically, you send the ICE –.
JAIM:
ICE candidates.
CHUCK:
Candidates, and then it actually establishes a connection, and then you can start sending the data back and forth between the peers within the system that you’re talking to.
JAIM:
Right. You would add a media source – a video source with the camera added to it – and from there, the library just takes it and sends the frames. You don’t have to do much at that point.
ALONDO:
So does it support multiple peers? Is this a situation where I can have more than just bi-directional communication – where I can have multiple peers? And when you’re talking about setting up those ICE candidates, I imagine it can be a more difficult process if you’re trying to get multiple connections established.
JAIM:
Right. The ICE candidates themselves are created by the library; you need to send them off to wherever they need to go – send them to your main server. And yeah, you can establish any number of connections that you need to.
Chuck was talking about an application earlier where you had a bunch of different people you could be connecting to or receiving data from, so you’re not limited in how many connections you can have.
CHUCK:
Sometimes you don’t need a server in the middle, but a lot of times there’s a server in the middle that will broker a lot of that. So when you connect, it’ll first establish the connection to that server, and then the server will tell it, “you also need to broker connections with these other members of the peer-to-peer network.” That way, all the data gets sent to the right people over an established connection.
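The brokering just described – a server that tells each newcomer which existing peers to connect to – can be modeled in a few lines. This is a hypothetical sketch, not any real signaling API; the `Room` class just keeps a roster and hands a joining peer the list of peers it should negotiate with, which is how a full-mesh call grows.

```swift
// Toy signaling roster — names are invented for illustration.
final class Room {
    private(set) var peers: [String] = []

    // On join, the "server" hands the newcomer the list of peers it
    // should negotiate direct connections with (a full mesh).
    func join(_ id: String) -> [String] {
        let existing = peers
        peers.append(id)
        return existing
    }
}

let room = Room()
print(room.join("alice"))  // []
print(room.join("bob"))    // ["alice"]
print(room.join("carol"))  // ["alice", "bob"]
```

Note that a full mesh means every pair holds its own connection, so n participants need n·(n−1)/2 links – one reason larger calls usually route media through a server instead.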
You can also set it up so that it’s one way – in other words, only one person’s camera and microphone are enabled. Or you can set it up so that everybody who hops on gets prompted to enable a camera, a microphone, or both, and you can get things going that way.
One example that I saw – and I’ll put a link to this in the show notes – was a talk from JS Remote Conf just last year. The speaker – I think his name was Thomas – had two computers and his phone or something, and they were all connected via WebRTC, and he had that up on a webpage that he was displaying to the crowd over the screen share he was using to give his talk. So it really is interesting; you can add as many nodes to this peer-to-peer network as you want and do some really interesting things with it.
One other example that I’ve seen with this – we talked with Feross Aboukhadijeh on JavaScript Jabber about WebTorrent. This is another way that WebRTC is being used, where instead of sending actual video or audio packets, it’s sending chunks of files, and it basically mimics the BitTorrent protocol.
JAIM:
That’s interesting.
CHUCK:
So there really are a lot of things you can do with this. One other thing that I see: since it doesn’t need a centralized server – or the centralized server may only be used to establish communication, and then everything else may happen away from it as more peers are added to the group – this could be used for different types of anonymous communication, where there’s no centralized place where you can sniff the traffic and know that people are on Skype or some other chat service, because there’s no centralized server for it.
I’m wondering Jaim, do you know if there’s a way to capture the data that’s coming your way? In other words, it strings it back together as a video but is there a way to record that?
JAIM:
There is. It’s a bit tricky, and there are really no good examples out there – there’s no CocoaPod for this. If you go to Stack Overflow, you might find some half-baked implementations you could look at. But you can add a different renderer to the stream that will just get the frame data. From there, you can actually take the literal frame data, bit by bit – I’m actually going through this; I’m trying to get an image capture of one of the frames, and that’s what I’m currently working on. There’s no real direct way to do it other than getting the bits out, putting them into a UIImage and saving it.
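The “extra renderer” trick Jaim is describing can be sketched abstractly. The `VideoRenderer` protocol and byte-array frame below are stand-ins invented for illustration – the real library hands your renderer its own frame type, and you’d build the UIImage from the pixel data with CoreGraphics – but the shape is the same: attach a second renderer whose only job is to hold on to the latest frame.

```swift
// Hypothetical renderer protocol standing in for the library's own.
protocol VideoRenderer {
    func render(frame: [UInt8], width: Int, height: Int)
}

// A renderer that does no drawing — it just keeps the most recent
// frame's bytes so an image can be built from them on demand,
// instead of converting on every callback.
final class SnapshotRenderer: VideoRenderer {
    private(set) var latestFrame: [UInt8] = []
    private(set) var width = 0
    private(set) var height = 0

    func render(frame: [UInt8], width: Int, height: Int) {
        self.latestFrame = frame
        self.width = width
        self.height = height
    }
}

let snap = SnapshotRenderer()
snap.render(frame: [0, 255, 255, 0], width: 2, height: 2)
print(snap.latestFrame.count)  // 4
```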
CHUCK:
Now if I were going to use WebRTC to provide not only video and audio but also text chat, would I have to establish multiple connections or can I send all of that data over the same RTC connection?
JAIM:
So for the same session, you can have audio, data and video all together. They’re different connections, and you’ll receive a callback from the library saying you’ve opened the data channel, and then you can send things back and forth.
Now, because I had wired up the video layer first, when it got time to start doing data stuff, the connection was already there, already wired up. I could just send stuff over it and it just worked, without having to do anything else. In that case it’s pretty simple.
CHUCK:
So you can have multiple channels on the same connection – one’s video, one’s audio, one’s file transfer, one’s whatever else?
JAIM:
Right. You have complete control. Audio is set up separately from video, just different connections.
CHUCK:
Hm.
JAIM:
But I haven’t gotten into the real, heavy data communication. 90% of what we’re doing is just letting people talk to each other.
CHUCK:
Right.
JAIM:
And send a small set of signals and commands, so I haven’t gotten into real complex data interaction.
ALONDO:
Are there any challenges in handling multiple people sending audio data at the same time? Is there an attempt to manage that, or do you just get all the communication and ready yourself for however the tool works?
JAIM:
I haven’t gotten into that, so I’m not sure. If you’re going to allow one person to quiet another, I haven’t reconciled how that would work for the user – you’d want to be able to turn one up and turn one down.
I hope it’s possible. I haven’t looked closely enough, at least on the iOS side, to see how the audio is added or how you can modify it.
CHUCK:
Yeah, I would guess that you can probably pull in audio streams and then just play them all on top of each other. So if two people are talking at the same time it would just play them both over each other. And then with your renderer you should be able to add filters like a volume filter that allows you to turn the volume up or down.
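Chuck’s guess is plausible as plain signal processing: mixing is just a per-stream gain followed by a sample-wise sum, clamped to the sample range. A minimal sketch – generic audio math, not a WebRTC API:

```swift
// Mix two 16-bit PCM streams with per-stream gains.
func mix(_ a: [Int16], _ b: [Int16], gainA: Double = 1.0, gainB: Double = 1.0) -> [Int16] {
    let n = max(a.count, b.count)
    return (0..<n).map { i in
        // Treat a missing sample (shorter stream) as silence.
        let sa = i < a.count ? Double(a[i]) * gainA : 0
        let sb = i < b.count ? Double(b[i]) * gainB : 0
        let sum = sa + sb
        // Hard clip to avoid integer overflow when both talkers peak at once.
        return Int16(min(max(sum, Double(Int16.min)), Double(Int16.max)))
    }
}

print(mix([1000, 2000], [500, -500]))               // [1500, 1500]
print(mix([30000], [30000]))                        // [32767] (clipped)
print(mix([1000], [1000], gainA: 0.5, gainB: 0.5))  // [1000]
```

The gains are where a per-speaker “volume filter” would plug in: duck one stream’s gain toward zero to quiet that speaker.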
ANDREW:
I’m curious to know how you deal with problems like network connection errors, network connections that are too slow or drop or packets that are lost. How does that get handled?
JAIM:
So for things like packets being lost, that’s all handled by the system. With RTC, they’re probably running on top of TCP/IP under the hood, so that’s all done for you, and if you lose packets for a second, you just lose them – they get dropped – so there’s some failure that can happen on bad connections.
As far as things disconnecting, you just get notice from the system that the session has ended; you can’t do anything and you’re kind of out of luck at that point. As for degrading gracefully, I don’t know how much support you get out of the box with WebRTC. If you use something like OpenTok, they’ve done quite a bit of this: degrade the image quality, degrade the audio if you have a bad connection, and if you can’t get audio through, shut down the video and just do audio, just to get something through. I’m not sure how much support you get out of the box with RTC; I think you have to do a lot of customization to get a graceful fallback.
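The fallback behavior Jaim attributes to OpenTok-style services can be sketched as a simple policy: measure available bandwidth and step down through quality tiers, ending with audio-only. The tier names and kbps thresholds below are invented for illustration; a real service tunes these continuously.

```swift
// Toy degradation ladder — thresholds and tiers are made up.
enum StreamMode: String {
    case hdVideo, sdVideo, lowVideo, audioOnly
}

func mode(forKbps kbps: Int) -> StreamMode {
    switch kbps {
    case 1500...:    return .hdVideo
    case 500..<1500: return .sdVideo
    case 150..<500:  return .lowVideo
    default:         return .audioOnly  // too little for video; keep voice alive
    }
}

print(mode(forKbps: 2000).rawValue)  // hdVideo
print(mode(forKbps: 300).rawValue)   // lowVideo
print(mode(forKbps: 50).rawValue)    // audioOnly
```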
CHUCK:
Does that make it difficult to debug, because you don’t know necessarily if your problem is on the network or on the peer or on your own device? Or is that pretty easy to identify?
JAIM:
Of course, we’re testing on WiFi, and all the devices we’re testing with are also on WiFi, so we’ve never had any of those problems. I’m not sure what you’re talking about. [Laughter]
The same type of thing you get with any connection – you might have something like a flaky connection. And I haven’t tested the case where you’re talking on your WiFi, you walk outside, and you switch to the cellular network. I’m looking forward to seeing how that works and what the best way to handle it is, but it’s open – you just do your best to simulate bad things happening. Most of us don’t have an [inaudible] chamber that blocks all those signals in the house; I don’t either.
CHUCK:
I’m having one installed next week.
JAIM:
That’s right.
CHUCK:
I wish.
JAIM:
You just do the best you can.
CHUCK:
The other question that I have is about encryption. Since this is all peer to peer, the data packets are probably pretty straightforward as far as “this is video, this is audio”. If I don’t want the NSA pulling up outside my house and pointing an antenna at it, sniffing the traffic going across the air – or the wire – what do I do?
JAIM:
I believe all the communication from an RTC is done through secure sockets.
CHUCK:
Uh-huh.
JAIM:
I hope so. I don’t know for sure but there’s quite a bit of – having gone through all the log files, there’s quite a bit of SSL handshaking going on.
CHUCK:
Yeah, that makes sense. So do you have to establish those certificates or is the handshake just part of the protocol and certificates generated on the fly?
JAIM:
I don’t know offhand. I haven’t had to do any [inaudible] certificates. Whatever I had on my device talked to whatever the other devices were, and they figured they were both good, so they were able to establish a connection that the SSL library trusted. But if you’re looking into getting started with WebRTC, the easiest thing to do is go to CocoaPods, get the Pristine IO CocoaPod and download it. Within the WebRTC source code, there’s a sample app that you can just run; it’ll run on your device pretty quickly and give you the option to put in a room number. There’s a main test application out there, and you can go to your browser, create a room, and just enter the number, and that’s how you can see how the signaling works from the application – you can actually connect from your device to your browser.
Most of the things I’m talking about – creating a session, dealing with ICE candidates, connecting to the server – you can see how they work there, and probably most WebRTC code bases started from that sample app and just started modifying things. When I see a code sample on Stack Overflow, it’s always pretty clear where it came from: that sample app.
CHUCK:
Alright, sounds good. Any other questions or thoughts about WebRTC?
ANDREW:
I’m curious to know what some alternatives are or if there are alternatives to WebRTC. What other technologies are out there for accomplishing some of these things?
JAIM:
I’m not sure about [inaudible]. There are a number of libraries that do more server-based communication between two different peers – your peer talks to the server, or talks to something else, so all the traffic goes through there.
CHUCK:
So standard IM or standard VOIP?
JAIM:
Yeah, something like that. There are a number of companies providing libraries for that. I don’t have any names offhand, but peer-to-peer communication with WebRTC is becoming pretty standard. It’s not as common in the iOS world – the support is just not as strong as it is in the Google and Android world – but it’s possible.
One thing to consider, if you’re looking at this and thinking it might be the right fit for your application, is which library to use. You probably want to use the standard Google libraries – either built through Pristine IO, or built on your own. If you want full control over what’s happening, where the data goes and where it’s stored, without having to pay to use it, that’s good.
If you want something simple, there’s OpenTok, which makes it very simple to set up, and they handle a lot of edge cases for you. They’re not cheap, so you have to think about what your company does. Can it just do some video and work with that, or do you want to invent your own custom solution? If you pick your own custom solution, you might be looking at the native libraries – but definitely look at the [inaudible] solutions or the service solutions first if you can get away with them.
CHUCK:
Alright, cool. You could also use the dog barking network from 101 Dalmatians.
JAIM:
That’s right. Fortunately, I can’t mute and talk at the same time.
CHUCK:
Well you can but it’s less effective. [Chuckles]
ANDREW:
You just have to mute the dog. [Chuckles]
JAIM:
Did you get that? We didn’t talk at the same time. Did that work?
CHUCK:
Probably not as good as you hoped.
ANDREW:
No, no, no. I still heard the dog so he can even go through your mute.
JAIM:
Damn.
CHUCK:
[Chuckles] Alright, let’s go ahead and do picks then. Andrew, do you want to start us off?
ANDREW:
Sure. I’ve got – I’m going to call it one and a half picks today. My first pick is a library called Observable-Swift. This library is kind of an implementation – well, it is an implementation – of the same ideas as key-value observing, except that it’s done in pure Swift, so it makes it so one object can observe changes to properties on another object. You can use KVO from Swift, but it’s a little bit gross, partly because it’s really an Objective-C API and partly because KVO’s API has some rough edges anyway.
Along those same lines, I actually gave a presentation at our local CocoaHeads last night about this whole topic of KVO in Swift and related observer techniques in Swift. My slides, and some playgrounds that show really simple techniques for dealing with these issues in Swift, are up on GitHub, so that’ll be the 0.5 part of my picks. Those are my picks.
CHUCK:
Awesome. Alondo, what are your picks?
ALONDO:
I have only one pick this week, and it is for Karma, which is a WiFi hotspot. I actually met some people on the Karma team – I think it was two years ago at AltConf – and I got my hands on the device. I’ve been using it for months now, and I like it because it has a [inaudible] go, or our [inaudible], and you can keep all the bandwidth that you purchased, so I don’t have to worry about losing it month to month. When I’m travelling across the country, I tend to have to use a separate WiFi hotspot.
The nice thing about Karma, though, is that if people jump on your WiFi, you actually get extra bandwidth added to your bank – it’s basically karma. I guess when you share the network with them, that [inaudible] for them. It’s really useful, and it’s allowed me to downsize my carrier data plan, because that doesn’t carry over. It’s been pretty useful; no matter where I’ve been in the country so far, it’s been a really good connection, so that is my pick for the week.
CHUCK:
Alright Jaim, what are your picks?
JAIM:
I want to do one pick today. For the past few months, I’ve pretty much weaned myself off caffeine. I was a daily coffee drinker for years and years, almost since I was fifteen. But I still wanted something to drink in the morning, something hot; a lot of the herbal teas are fine, and decaf coffee’s kind of gross. I’m not against coffee – every once in a while a day with caffeine is fine – but it doesn’t give you energy, it just borrows it from the future, so I’m liking having a little bit more energy later in the day when I actually need it. So I wanted something to drink in the morning that has only a little bit of caffeine but actually has a nice, hearty flavour.
It’s Kukicha tea, which is a twig tea – they boil twigs somehow – and I get it from a company called Eden; it comes as a box of tea bags. It’s a hearty tea that doesn’t have a lot of caffeine, so it doesn’t get me all bounced up. So I’m going to make my pick organic Kukicha tea from Eden.
CHUCK:
Alright, I’ve got a couple of picks. For the first one, I’m actually going to pick something that I’m trying out for the shows. So far it’s been working pretty well, since I’ve passed along the word to a few people who listen to this show: I’ve set up a GitHub repository for people who want to suggest topics for the iPhreaks show. You can get to it at github.com/devchattv – there’s no dot in ‘devchattv’ – /iphreakstopics, all one word, all run together. There’s a link there that says ‘add topic’ or ‘suggest topic’, and you can also click to read topics.
They’re just GitHub issues; there’s nothing fancy going on there. But you can actually put in a plus-one for the topics that you want to hear about and things like that. You can suggest people to talk about a topic, or you can just suggest people we should get on the show – anything like that would be awesome. So I’m going to pick that.
The other thing that I’m going to pick –.
JAIM:
So Chuck, do we have a style guide for a pull request for this?
CHUCK:
It’s not a pull request; it’s just GitHub issues.
JAIM:
C’mon.
ANDREW:
No, there’s no style guide. [Crosstalk]
CHUCK:
Yeah.
ANDREW:
Just do whatever you want.
JAIM:
Just cowboy code it.
CHUCK:
That’s right.
JAIM:
Alright, fine.
CHUCK:
And then, the other day I was chatting with one of my friends – we’re both Doctor Who fans – and he had just gotten his sonic screwdriver. So since I had mine out and was playing with it the other day, I thought I’d pick it. I got it on ThinkGeek. Mine is the Eleventh Doctor’s – it’s Matt Smith’s. You might be able to hear me playing with it here. [Laughter]
ALONDO:
Indeed.
CHUCK:
But yeah, it’s got three buttons that turn it on and make different noises. This one has the little grabber thing that comes out of it – it has a little button that makes it come up, and then you can still use it. It doesn’t turn on when it’s up with one of the buttons, just the other, but yeah, it’s pretty awesome. I’ve really been enjoying it.
Anyway, I’m just going to pick geek toys, and ThinkGeek, and this particular sonic screwdriver. I got a sonic spork from Loot Crate, and it looks like the same sonic screwdriver I have except it has a spork at the end of it, so I thought that was pretty funny. Anyway, there I am geeking out about dumb toys, but there you go.
I don’t think there’s anything else to announce or talk about so we’ll go ahead and wrap this up and we’ll catch everyone next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit cachefly.com to learn more]
[Would you like to join a conversation with the iPhreaks and their guests? Want to support the show? We have a forum that allows you to join the conversation and support the show at the same time. You can sign up at iphreaksshow.com/forum]