Christian Wenz:
Hi, and welcome to the latest episode of Adventures in .NET. My name is Christian Wenz, and fellow panelist and friend Mark Miller is here as well.
Mark Miller:
Welcome everybody.
Christian Wenz:
But of course, we are not alone. It would be less fun, probably, arguably. So we are very, very happy that Ian Griffiths is joining us today. Hi, Ian.
Ian Griffiths:
Hello, thank you for having me.
Christian Wenz:
Of course, Ian is a household name in the .NET community. But would you like to introduce yourself to those who don't know you yet?
Ian Griffiths:
Sure. Yeah, my name is Ian Griffiths. I am a Technical Fellow at endjin, a small UK consulting firm. I'm also the author of O'Reilly's Programming C# book; before that I wrote Programming WPF, and before that, Windows Forms in a Nutshell. I am also a Pluralsight author, a conference speaker, a Microsoft C# MVP, and a general loudmouth.
Christian Wenz:
Perfect. So, do you have a ballpark number of how many people were taught programming, or have a career, just because of you?
Ian Griffiths:
I don't know if anyone has a career just because of me. I did once try to work out how many people I taught, back when people used to actually turn up in a room for training; I think it was a couple of thousand. You don't reach people that quickly when you're teaching face to face, whereas as soon as you do it on something like Pluralsight, you suddenly reach thousands of people. So I have no idea. Pluralsight never told me the numbers.
Christian Wenz:
Yeah, I think you can look it up to some degree, but then on the other hand, you never know, right? And with books it's even harder, because a book is maybe read by several people, or a whole department, or just bought to put on the shelf, right? But I think with IT books it's probably rather the former than the latter. Excellent. So how long have you been an MVP? I mean, nowadays I think everything is "Developer Technologies" if you're kind of related to programming, right?
Ian Griffiths:
Oh yeah. Yeah, it shows you how long I have been one, because I called it C#.
Christian Wenz:
Yeah, I also talk about being an ASP.NET MVP, although there is no real ASP.NET, minus the Core, anymore, right? So it's like, we are old, man.
Ian Griffiths:
Yeah.
Ian Griffiths:
So I first got my MVP award in 2003,
Christian Wenz:
Wow.
Ian Griffiths:
but I had a gap. I've had 15 awards in total, but I had children in the middle of that and stopped traveling and speaking and so on for a few years, so I stopped getting the award. But I'm back on it again now.
Christian Wenz:
Also fantastic that family was a priority, and that they let you back in, right? That's, I think, a case of many try, few achieve.
Ian Griffiths:
Yeah.
Ian Griffiths:
Yeah, well, I think, you know, if you do enough, you get the award again. I think it's easier to renew, but it's definitely possible to get back in if you put the work in.
Christian Wenz:
Yeah, fantastic. Great, great to have you on the show.
Ian Griffiths:
Thank you.
Christian Wenz:
So, any specific topic you've been working on recently?
Ian Griffiths:
Yes. The thing I wanted to talk about is some of the language features that have been appearing in C# over the last few years that are performance oriented. C# has become more and more suitable for things that used to be the preserve of languages like C++ and Rust.
Christian Wenz:
and assembler.
Ian Griffiths:
And assembly language. In fact, you might not know this about me. I used to write device drivers for a living. That was my first job. Actually, it was my first full-time job. I had a student job writing in assembly language. So when I was at university, I was writing assembly language, but the thing I was writing, oddly enough, was a word processor, which tells you how old I am. The fact that people were still writing word processors in assembly language. I mean, it was a little unusual, but that was on an old ARM CPU-based desktop system that had no hard drive and no virtual memory. And so everything was incredibly tight. And so we used assembly language to write stuff. So
Mark Miller:
Wow.
Ian Griffiths:
yeah.
Christian Wenz:
So it was faster than word, I guess.
Ian Griffiths:
Oh, it was unbelievably fast, yeah, but it was hard work to work on, and it had some quite odd code in it. So yeah, I have always had an interest in low-level stuff; I loved electronics as well as development. And like I say, in my first full-time job I was writing kernel-mode network device drivers. So the C# language features that have gradually been creeping in are of particular interest to me, and that's why I thought it might be fun to talk about them, because not everyone knows what they are, or why they're important, or how they work.
Mark Miller:
Yeah, that's me. I want to know what these are. Tell me, because I've got some slow code I've got to make fast right now.
Christian Wenz:
Also, I mean, aren't we told... oh no, let me rephrase. Aren't younger developers basically told, you know, you can throw hardware at it, right? You can throw money at a problem. And we have all been in IT for decades, right? We were told different things, like, oh, it doesn't fit on a floppy, or, oh, the RAM's full, something like that. These are issues that seem to have disappeared from modern development. So you're saying, no, it's still kind of there.
Ian Griffiths:
I don't think they have disappeared. And actually, one of the things I wanted to talk about is the fact that Moore's Law isn't necessarily going to save you from your problems. People talk about Moore's Law, the fact that transistor counts double every two years or whatever it is (the exact time span has varied), this exponential growth in computing power. And it's been there for the entire history of computing. But the thing a lot of people miss is that, although it is true that CPUs continue to get twice as many transistors on them every few years, they haven't got twice as fast for about 20 years now. For the first half of my career, every two years you bought a new machine and it was twice as fast as the one you had before. That kind of stopped around 20 years ago. Instead, you got one that ran at exactly the same speed as before, more or less, but it now had two cores rather than one. And the next one you bought had four cores rather than two.
Christian Wenz:
Sounds like cheating.
Ian Griffiths:
Well, yeah. I mean, technically you've got more computing power, but it's not quite the same thing as it used to be. It really used to be that it got faster and faster and faster. The first computer I learned to program on was a BBC Micro, the home computer, in the 1980s. It had a 6502 processor with a two-megahertz clock rate, and it wouldn't even run one instruction per clock, so you were getting maybe hundreds of thousands of instructions a second. Then I got to work on 8086 machines: four megahertz to start with, I think, then an eight-megahertz one; it would double every so often. These days they're measured in gigahertz, but I bought a four-gigahertz CPU in 2003, and my desktop today will ramp up to a whole five gigahertz. That is not exponential growth. That is not exponential improvement in the speed at which instructions can be executed. So there are certain kinds of tasks which Moore's Law isn't going to help you with. And actually there's another thing here: a thing called Amdahl's Law, which comes up when you're analyzing the performance of concurrent code. Because if you want to take advantage of the 16 cores my current desktop has, you've got to find 16 things to do at once.
Christian Wenz:
Yeah, run 16 browser windows, no problem. No problem.
Ian Griffiths:
Well, yeah, the browser's perfectly happy to keep my CPU busy, that is true. But for work I actually want done, rather than whatever the heck my browser is doing when I'm not watching, it's not easy, because often the amount of compute that can be parallelized is a subset of the whole problem. That's Amdahl's Law: eventually, it doesn't matter how many CPUs you throw at it, you are limited by an inherent lack of concurrency in the problems you're trying to solve, very often. Not always; there are some problems known as embarrassingly parallel problems. Graphics rendering is the classic example. You can chuck almost limitless numbers of transistors at rendering problems, and they can all work on their own little bit of the screen, and it's fine; or maybe they can get working on the next frame. But for many computations, it's not the case. If you're trying to do analytics over data, often the next value is going to take the previous computation as its input, so you can't parallelize; you have to do things in a certain order sometimes. And if the computers aren't getting faster (I mean, they're getting a bit faster, but certainly not doubling every two years like they did up until about 20 years ago), you've got to do something else. And actually it's worse when you start to look at the whole computer architecture. As a device driver writer, you think not just about the CPU, but also about the various places that memory lives. You've got the CPU cache, which is the only thing that can run anything like as fast as the CPU can; you've got outer caches that are less fast than that; and then you've got main memory, which is by comparison pitifully slow. With device drivers, you're thinking, well, I've got to get data off the network card into main memory first and then into the CPU. You're actually limited by the speed at which data can flow over the circuitry on the motherboard, and we're pretty much at the limits of how fast that can go, and have been for quite some time. So the CPUs get faster and faster; maybe they grow more pins so you can plug in more banks of RAM; but there's not a whole lot of scope for making that part of the system run faster. So we do need to pay more attention to efficiency now if we're going to get higher volumes of throughput. And this has a cost implication, so...
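To put rough numbers on that, here is a minimal sketch of Amdahl's Law; the formula is the standard one, and the workload fractions are invented purely for illustration.

```csharp
using System;

class AmdahlSketch
{
    // Amdahl's Law: if a fraction p of the work can be parallelized,
    // the best possible speedup on n cores is 1 / ((1 - p) + p / n).
    static double Speedup(double p, int cores) => 1.0 / ((1.0 - p) + p / cores);

    static void Main()
    {
        // Illustrative fractions only.
        foreach (double p in new[] { 0.5, 0.9, 0.99 })
        {
            Console.WriteLine(
                $"{p:P0} parallelizable: 16 cores -> {Speedup(p, 16):F2}x, " +
                $"ceiling -> {1.0 / (1.0 - p):F0}x no matter how many cores");
        }
        // A 50%-serial workload tops out below 2x even with 16 cores.
    }
}
```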
Christian Wenz:
Oh, absolutely. Sorry for interrupting, but isn't there also Moore's second law, which basically says that the semiconductors get cheaper for the consumer, but the plants to produce them get exponentially more expensive?
Ian Griffiths:
Yes, that's right, yes.
Christian Wenz:
So yeah, you get faster, but the price is also climbing on an exponential scale.
Ian Griffiths:
Yes, yeah, there isn't a free lunch. It costs twice as much to make it go twice as fast. And actually, there was a limit to the market for that, it turned out. Not everyone wanted a machine that was twice as fast every year. I did, but it turns out there weren't enough people like me to carry on driving that. And in any case, the CPU manufacturers ran into some limits a bit earlier than they were expecting, in terms of leakage currents that were just making things run red hot. They all thought they were going to get to 10 gigahertz with no trouble, and then suddenly quantum tunneling turned out to cause higher leakage currents than anyone thought it would, and everything just melted. So yeah, we're stuck at about five gigahertz. So the thing I was driving towards was the fact that, fundamentally, the rate at which data can flow in and out of memory is always going to be a bottleneck. And this was always the focus for device driver writers. The fundamental rule with device drivers is that you do not copy the data more often than you have to. You've got to copy data once: data comes in through a piece of cable or fiber into the port on the back of the computer, goes through the networking hardware, and has to go into the memory of the computer. What you don't then want to do is go, ah, that's arrived, but actually we'd really like that data to be somewhere else now, so let's copy it to where we'd really like it to be. And likewise with transmission: if you're sending data out, you don't want to go, OK, the data is here, but that memory belongs to the application, so I'm going to copy it into some operating-system-owned memory and then send it from there. Some devices do work this way, and it used to be quite common back in the day, but as networks got faster it became impossible to deliver data at the rate networks were capable of without basically saturating the computer. With a 10-megabit Ethernet card (and you never got 10 megabits anyway back in the old days, when it was all actual physical shared bits of networking), you could do that without having to try very hard. But the company I was working at in the early 90s was building 150-megabit network cards, and that was right at the limit of what you could reasonably get in and out of the computer. If you wanted to build a server that could receive and send data at 150 megabits, you were pretty much using all the bandwidth the memory interconnects had. So you really couldn't make extra copies of things. You had to go: right, this data is going to land where it lands, and then we have to process it wherever it is. Maybe it gets pushed into memory, and then you program the disk controller to read it straight back out of that same memory and store it on disk, if you're writing data. And likewise, if your file server is sending data to the customer, it reads data off disk into memory and sends it straight out from there. So you end up with a copy in and a copy out. But it's actually hard to do that, because when data arrives on a server that's serving a hundred different customers, it doesn't know who it's from until it looks at it, right? A packet just comes in off the network; the server has to inspect it and go, OK, where did that come from? What was the source address? What do they want? So it's not totally straightforward to make things work that way.
And there's all sorts of interesting stuff, nothing to do with what I'm talking about today, that I could go into about how operating systems actually manage that. But basically, the fundamental rule is: you do not make more copies of data than you absolutely have to, because every copy tends to use up the budget that you've got for throughput. So this brings me on to the changes that have come into C#. There's a thing called span in C#, Span<T>, and this type is essentially concerned with this idea of avoiding copying data. The idea with a span is that it gives you a way of saying: I've got some information that is somewhere in memory, and you don't need to know where it is. You can just read it from wherever it is; you can process it in place. And that used to be hard to do in .NET. If you had some data that had just arrived, like in IIS, the web server, which had its own buffers and would put data in its own memory, you couldn't really directly access that in .NET unless you wrote unsafe code. You could use the unsafe keyword and get a pointer to the raw memory, but now you're throwing away a lot of the benefit of using C# in the first place, right? The whole reason we use languages like C# is that they are type-safe. You actually can't make certain kinds of programming errors that lower-level languages like C++ will happily let you make. If you put one foot wrong in kernel mode, you just blow the whole machine up. I used to get twitchy whenever I heard the sound of an old-fashioned monitor switching resolutions. I don't know if you remember this, the old hulking great big CRT monitors.
Christian Wenz:
I get shivers, I get shivers.
Ian Griffiths:
Every time they changed, you know, like between graphics mode and text mode, you'd hear a click as they rearranged their internal electronics to deal with the change in scan frequencies. And that used to make me feel physically sick because that was usually the precursor to a blue screen of death. And as a device driver writer, the blue screens of death were usually my fault. So it's like, oh God, I've got a problem to fix again. So.
Christian Wenz:
Wasn't that the presentation of Windows 98 or Windows XP where they wanted to show plug and play?
Ian Griffiths:
Uhhh... Oh, I don't know about this. Mm-hmm.
Christian Wenz:
So they plugged in the thumb drive, and then the BSOD. And then the only person in the room who kept his calm was the person presenting, Bill Gates, who said, "and that's why we don't ship that version of Windows yet."
Ian Griffiths:
Hahaha!
Mark Miller:
Ah, that's right.
Christian Wenz:
But of course, the person next to him probably had a very unpleasant post-mortem after the presentation, I reckon.
Ian Griffiths:
Oh yes.
Ian Griffiths:
Yes, he's not alive anymore. So, yeah, we use C# and languages like it because they have a type system that actually doesn't permit certain kinds of mistake. But for a long time, that was thought to be incompatible with this high-performance notion of: I just need to play the data where it lies; I need to get a pointer to where it is so I can work with it. A few years ago, though, Microsoft ran a research project called Midori, which was an effort to write an operating system entirely in what was essentially managed code. It wasn't quite C#; they did a bunch of customizations to it, and it had a totally different exception handling model. Sadly, you can't see any of this, because it was from before Microsoft embraced open source, so the whole thing stayed internal. However, certain bits of that project have leaked out in the form of new features in the .NET runtime, and my understanding is that Span<T> was one of those features. Span was originally designed for Midori, to enable them to write the sorts of code you have to write to make efficient device drivers and yet still exist within the rules of the type system. So Span was designed to give you a type-safe presentation of some arbitrary piece of memory, so that it didn't matter where the memory was. You might be looking at the garbage-collected heap; you might be looking at a block owned by some other subsystem; you might be looking at the stack; it didn't matter. And they had a whole load of rules around exactly how you could and couldn't use Span<T>, to guarantee that you couldn't get it wrong. So you can't pass a Span<T> up to your caller; you can't return one as a return value of a function, in general, because you might actually be returning a pointer to your own stack frame. You might have done that by accident, and that would be a bug: someone's now got a pointer to a piece of memory that has been repurposed for something else. That's a classic cause of security flaws, and it's really easy to make that mistake in C and C++. So Span<T> had a whole host of really carefully worked out rules that were designed to prevent those problems and yet still give you the performance associated with raw pointers. And it makes a huge difference. One of the projects I worked on a few years ago, we helped a customer go from 45 servers to a single server, partly through this technique. And there were a few other things that we helped them with as well.
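A minimal sketch of the idea, assuming nothing beyond the standard Span<T> APIs: one routine that processes bytes in place wherever they happen to live, with no copying.

```csharp
using System;

class SpanSketch
{
    // One method that can read data wherever it lives. ReadOnlySpan<byte>
    // is a type-safe view; nothing here copies the underlying bytes.
    static int Sum(ReadOnlySpan<byte> data)
    {
        int total = 0;
        foreach (byte b in data) total += b;
        return total;
    }

    static void Main()
    {
        byte[] heapBuffer = { 1, 2, 3, 4, 5, 6, 7, 8 };
        Console.WriteLine(Sum(heapBuffer));              // a whole array
        Console.WriteLine(Sum(heapBuffer.AsSpan(2, 4))); // a slice: no sub-array allocated

        Span<byte> stackBuffer = stackalloc byte[4];     // stack memory, no heap allocation
        stackBuffer.Fill(10);
        Console.WriteLine(Sum(stackBuffer));             // same code, different memory
    }
}
```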
Christian Wenz:
Yeah, I was just about to ask. Going from 45 servers to one is pretty impressive.
Ian Griffiths:
Yeah, I mean, their original thing was written in Python, which didn't help. So maybe C# instantly sped things up, but that alone wouldn't have got us the factor-of-50 speedup. And actually, out of that project we produced an open source library called AIS.Net. AIS is the Automatic Identification System; it's what ships broadcast their location with. If ships have a GPS receiver in them, they transmit their locations so other ships in the same water can see where they are. There's an international standard for this, and most large ocean-going vessels are essentially legally required to operate these things to be able to sail in certain waters. And it's a binary format. We ended up writing a .NET library for parsing these messages that makes extensive use of span, ReadOnlySpan<T>, so that we can read the messages exactly where they are without having to copy data out of them. So if you want to see an example of the kind of code I'm talking about, you can just search for AIS.net; it's on GitHub, under ais-dotnet.
Christian Wenz:
The link will be in the show notes.
Ian Griffiths:
Okay, great, fantastic. So that's one of the projects I worked on that uses these techniques.
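This is not the actual AIS.Net API; it is a hypothetical sketch of the pattern Ian describes, with an invented two-field wire format, showing a ref struct that decodes fields straight out of the received buffer rather than deserializing into a new object.

```csharp
using System;
using System.Buffers.Binary;

// Hypothetical wire format, not the real AIS encoding: a 4-byte big-endian
// vessel id followed by a 2-byte big-endian speed. The point is the shape
// of the API: a ref struct that reads fields out of the buffer on demand.
readonly ref struct VesselMessage
{
    private readonly ReadOnlySpan<byte> _raw;
    public VesselMessage(ReadOnlySpan<byte> raw) => _raw = raw;

    // Each property decodes straight from the original bytes; no allocation.
    public uint VesselId => BinaryPrimitives.ReadUInt32BigEndian(_raw);
    public ushort SpeedTenths => BinaryPrimitives.ReadUInt16BigEndian(_raw.Slice(4));
}

class Demo
{
    static void Main()
    {
        byte[] received = { 0, 0, 0, 42, 0, 123 }; // pretend this came off the network
        var msg = new VesselMessage(received);
        Console.WriteLine($"Vessel {msg.VesselId} doing {msg.SpeedTenths / 10.0} knots");
    }
}
```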
Christian Wenz:
Yeah.
Ian Griffiths:
And it's an interesting example, because it illustrates the evils of serialization. One of the great enabling features of .NET and Java and JavaScript is that they make the problem of converting between objects in memory and data in byte form straightforward, whether it's going to disk, or to a storage account, or being sent over the network as a byte stream. Mapping between objects and byte streams is now pretty easy. We just expect to have a method we can call to say, here's an object, please spit out some JSON, for example; or, here's some JSON, please turn it back into a dictionary or an object or whatever. And that's great, really convenient, and for those of us who had to write that sort of code in languages that didn't have reflection or those sorts of things, you realize just how much work is being done for you there. But there is a fundamental problem with serialization if you really care about performance. What happens when you deserialize a message? Let's say you've got a message that tells you the name of a ship, its dimensions, and, say, the destination port it's heading for; there's a message in AIS that tells you this. The classic approach to deserialization is going to take that data, in whatever binary form it's in, and return you a .NET object that has some string properties, right? It's going to have the name of the vessel as a .NET string, a System.String, because that's how we do text in .NET, and it's going to give you the name of the destination port there as well. Some of the fields will be more efficient; the dimensions of the vessel will probably just use an int, because it's binary, why not? But for the strings, we've allocated data. So the classic serialization techniques we're all used to using are incredibly allocation-heavy. You allocate the object that represents the thing coming back, and usually a whole load of other related pieces of information, each of which needs its own space on the heap. So you've caused garbage collection overhead, a whole bunch of stuff that the garbage collector will eventually have to clear up; but also you've made copies of the data, which, as a device driver writer, I can tell you is not going to help you with performance. So this technique of not copying the data is kind of incompatible with the way we're all used to doing deserialization. If you look at that AIS library, it will look weird. You'll go, why on earth do you have to use it like this? It's nothing like a normal serialization library; it doesn't work the way you'd naturally expect as a .NET developer, where you just go, here's my message, give me back an object that represents it, please. It's all a bit inverted: you have to get it to call you back with a thing that you can then use to ask questions about the data. And the reason we do that is that it enables us to completely avoid any sort of copying, and that's part of what enabled us to get to this far, far more efficient way of working. And you see similar stuff in the System.Text.Json API, right? For years, how did you deal with JSON in .NET? Well, you used...
Christian Wenz:
Newtonsoft.
Ian Griffiths:
Newtonsoft JSON, right? That's a brilliant library. It does everything you could want it to do, it was pretty fast (faster than most of the ones that came before it), and it was fully featured and easy to use. Great library. However, it made a lot of allocations, and it copied everything. If you asked it to deserialize an object, you got a whole copy of the data. So there's a talk I've done at a couple of conferences now where I show the performance difference between deserializing a simple JSON document with Newtonsoft JSON and how fast you can go with the newer libraries. Let me just refer to it; I've got a separate screen here. I have a benchmark in which we search through 10,000 JSON elements in a JSON array in a single byte array, and we do that, I think, a few thousand times over for the benchmark. The Newtonsoft approach took about 40 milliseconds to run the benchmark, and the System.Text.Json one took four milliseconds. So, 10 times faster.
Christian Wenz:
And the stuff is also very impressive, yeah.
Ian Griffiths:
And also, if you look at the memory allocation: the Newtonsoft version allocated about 22 megabytes of data in order to deserialize, and all we're doing is searching for an object that happens to have a particular property with a particular value. That's all: find the first object in this array whose name property is equal to this. And it allocated 22 megabytes of data to do that, because it created an object for every element in the array, and it created strings for all the string-like properties. Whereas the System.Text.Json version allocated 219 bytes.
Mark Miller:
Yeah, that's better.
Christian Wenz:
That is... a tiny, tiny difference. Tiny difference. That is crazy.
Ian Griffiths:
And bearing in mind we're processing 10,000 objects: how many bytes per object is 219, if you've got 10,000 objects in total? It's not allocating anything in the actual inner loop. And it's because it's doing this thing I'm describing: it's playing the data where it lies. It's not making copies; it's just saying, well, the data's right there, we'll just inspect it, we'll interrogate it. Rather than saying, give me a string that tells you the value of this property and then doing a string comparison, you can say, I want to know if that thing has this string value. Or, if you need to do something more complicated, you can get hold of the data in its raw UTF-8 form. But the point is, you don't need to make copies. And it has a phenomenal effect on performance. And actually, it's even bigger than I've described. I've talked about this factor-of-10 performance increase, but that's just in a single-threaded benchmark. Another feature of this is that it does far less collateral damage when it runs, because one of the things about allocation-heavy code and "stringly typed" data, as people sometimes call it, is that it doesn't just slow you down; it slows everyone else down. You're using way more CPU cache, and other threads are going to be contending for those resources. Whereas if you can keep things narrow and efficient, you're a better citizen and the whole thing runs faster. Everybody wins.
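A minimal sketch of the kind of search that benchmark performs, using System.Text.Json's Utf8JsonReader; the payload and the property names are made up for the example.

```csharp
using System;
using System.Text;
using System.Text.Json;

class Utf8ReaderSketch
{
    // Comparison targets held as UTF-8, so no per-element string is created.
    private static readonly byte[] NameProperty = Encoding.UTF8.GetBytes("name");
    private static readonly byte[] Target = Encoding.UTF8.GetBytes("beta");

    static void Main()
    {
        byte[] utf8 = Encoding.UTF8.GetBytes(
            "[{\"name\":\"alpha\"},{\"name\":\"beta\"},{\"name\":\"gamma\"}]");

        var reader = new Utf8JsonReader(utf8);
        bool nextIsNameValue = false;
        while (reader.Read())
        {
            if (reader.TokenType == JsonTokenType.PropertyName)
            {
                nextIsNameValue = reader.ValueTextEquals(NameProperty);
            }
            else if (nextIsNameValue && reader.TokenType == JsonTokenType.String)
            {
                // ValueTextEquals compares the raw UTF-8 bytes in place.
                if (reader.ValueTextEquals(Target))
                {
                    Console.WriteLine("found a match without allocating any strings");
                    break;
                }
                nextIsNameValue = false;
            }
        }
    }
}
```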
Christian Wenz:
I mean, I faintly remember that when System.Text.Json was introduced, it didn't have feature parity with Newtonsoft JSON; some details were missing, or something like that. Was that because it was new, or was it because the approach didn't sit well with some of the more advanced, more specific features?
Ian Griffiths:
It was a bit of both. There are actually several different ways you can use System.Text.Json. If you want, you can use it in a way that's less efficient and easier to use: it will deserialize to an object in the old-fashioned way if you tell it to. And then, I mean, it's faster than Newtonsoft JSON and more memory efficient, but it's only about one and a half times faster, and only about twice as memory efficient. That's still pretty good, right? But it's...
Christian Wenz:
It is.
Ian Griffiths:
And the main reason, by the way, is that it works directly with UTF-8 data, whereas Newtonsoft JSON has to turn everything into UTF-16 before it will even look at it.
Christian Wenz:
So even more copies.
Ian Griffiths:
Yeah, even more copies. Yes, exactly. And this is actually where a lot of the win over Newtonsoft JSON comes from. But if you choose to use the APIs that don't make copies, then you can get that full 10-times speedup I'm talking about. It's not straightforward, though; you have to program against it differently, because if you just ask it, what's the string value of this thing here, well, you've now forced it to give you a string, and it has no choice, right? So you have to think about what you're doing if you need that performance. I mean, you don't have to program in the ultra-high-performance way if you don't need the performance, but if you do, it's there and you can have it. And actually, I'm quickly going to mention another library that's useful if you're doing this. If you search for Corvus (that's C-O-R-V-U-S), Corvus.Net and Corvus.JsonSchema: there's an open source code-gen library written by my employer, endjin, that is free to use, which generates wrappers for using these low-level APIs. They give you strongly typed access to the data, but in a way that you can use without allocation. So again, it looks very different from normal serialization. If you're used to normal serialization, you'll go, this is weird, why is it like this? But you can get phenomenal performance increases if you care about it. So that's a thing that's worth going and finding out about. I think it's really interesting that we've got used to these very easy ways of doing things, but they have sort of painted us into a performance corner.
Mark Miller:
Hey, Ian, I'm interested to know: how do I shift my mindset from a traditional way of developing code, where I'm favoring readability and maybe even speed of getting a prototype out? How do I shift it over to focus a little more on this reusing of memory? Can I get gains if I'm not reading a lot of data in and out? Actually, I guess I am doing that; if I'm using Newtonsoft, I imagine there are gains aplenty I can get, right? You've already described that. So I guess I'm interested: what's the step? What's the mindset shift, and what can I look forward to once I make it, coming from more traditional kind of programming?
Christian Wenz:
And if I can quickly add to that, because it's really a great question: remember back in the day when StringBuilder came about, and suddenly everyone was using StringBuilder even for concatenating two strings? I was always wondering, wait a minute, isn't there eventually MSIL, and can't that be optimized in that step? Or maybe not; I don't know, right? So, should I use Span<T> everywhere now, because I can? Or what's the scenario? As Mark brilliantly put it: the mindset shift, basically, which is required.
Ian Griffiths:
Yes, okay. So, just to quickly answer that last point: no, don't use Span<T> everywhere. With performance, you're never going to get the wins by just saying, do this one thing, use this one type, and everything will magically go faster. I mean, actually, it's a little bit true with System.Text.Json: in most cases, if you plug System.Text.Json in as a like-for-like replacement for Newtonsoft, it will actually go a bit faster, because of the way it can work.
Christian Wenz:
Yeah, so you said 1.x, right? So 1.5.
Ian Griffiths:
One and a half times, maybe, for this particular benchmark. Which is pretty good.
Christian Wenz:
Yeah, yeah, that's still something, right? Plus it's one less external dependency you are responsible for keeping up to date.
Ian Griffiths:
Yes.
Mark Miller:
What if I just use Span<T> everywhere I used StringBuilder before? Can I just do that? Can I replace StringBuilder?
Ian Griffiths:
No. No, no, so...
Mark Miller:
Oh, I thought I had it.
Christian Wenz:
And you have to put async in front of every method. I think these are the two things that just get automatically applied, right, without thinking, and then magically hoping for performance gains that might never come in the given scenario.
Ian Griffiths:
Yep.
Mark Miller:
I got the magical hoping down, Christian!
Ian Griffiths:
So, I am going to answer your question. It's going to take me a while to get there, because it's a really great question. It's really important, but it doesn't have a simple answer.
Christian Wenz:
Fair enough, we've got plenty of time.
Ian Griffiths:
There is no magic bullet. That's the first thing to say. If you want to improve performance, in a way the only answer is: you have to think about what's actually happening down at the level of the computer architecture. So, if you need better performance. That's a huge if, right? Because you said, maybe I want to prioritize readability; maybe I want to prioritize the rate at which I can get code developed. For a lot of projects, those are the most important things. They're way more important than perf for a lot of things. The places where I've applied this, it's tended to be because we were dealing with very large volumes of data. One customer was dealing with years of historical data, mining it for patterns, and the need to process it efficiently really mattered, because the volumes were just huge. So if you're doing big-data-type stuff, and you need to customize it and can't use, for example, a Spark cluster to do it all for you, then it really matters. But if you're just writing a one-time tool that's going to load some data off disk and be done with it, well, maybe it isn't worth doing anything more than upgrading to the latest libraries to get that small performance increase. But suppose you do decide: OK, I do need this to go faster than it is going now. I have some non-functional requirements I'm not meeting; or maybe I am meeting all my requirements, but my cloud compute bill is too large, I would like to pay less money every month, how can I get the same performance at reduced billing? If that's where you are, then the first thing is to work out where you're spending the money. That's actually always the first job. Profile your system. Understand why it's burning CPU time, or why it's running slowly, why you're not getting the performance you want. Because the answer is often not what you think it's going to be. Last year I was addressing a performance issue in one of our internal tools, and it turned out the thing that was actually slow was reading the list of system fonts. That was killing our startup time. On Windows, the list of fonts has always been cached ahead of time, because in order to boot, Windows has to show you fonts. But if you're running in an Azure Function, even if you're running on Windows, it doesn't show a UI, so it never has any use for the fonts. So the first time you ask it for a font, it goes, oh, hang on, give me five seconds, I've got to go and load all the fonts off disk. And that was killing our cold-start performance. The code wasn't really doing anything with fonts; it just happened to hit that code path. So it's often really unexpected where the slowdowns come from, and you need to understand where the problem is. But supposing you have identified it: you've profiled your code, you've understood, OK, this piece here is actually where the bulk of the time is going. Then how do you approach it? Well, again, you've got to understand: what are my targets? How much money am I looking to save? How much time and money do I have to spend on actually fixing this? Because you can always push performance a little bit further, and a little bit further, but there comes a point where you've spent more money developing it than you will ever save operationally, and that's
Christian Wenz:
Absolutely.
Ian Griffiths:
obviously stupid, right? So you need to understand what the potential savings are. But let's imagine you've worked out: OK, I am actually processing a billion records a day, so it is actually worth me doing this. Then, like I say, at that point you need to think about how information flows through the computer. Where is it actually going? How does it arrive? What happens to it next? Does it go through multiple stages of processing? Where are the copies? That's really the crucial question: at what point are you copying data, and could you avoid copying that data? That is the mindset shift. It's looking at it in terms of how information flows through a machine, which is quite a different way of thinking about the world than we're used to with object-oriented design, for example. Now, if you're a functional programmer, it's probably a more natural thing to do, interestingly. In functional programming we often think in terms of relatively abstract processing of information flowing through chains of functions. People often think functional programming is a bit academic and not for real-world stuff, but ironically it can actually perform a lot better than more conventional imperative code for certain kinds of applications. But that mindset of understanding the flow of information is likely to be key to understanding where the opportunities are to optimize things. And then you might need to change the manner in which you approach actually reading information.
Christian Wenz:
So that's the tricky part then, right? So not just replacing classes or something.
Ian Griffiths:
Yeah. You often end up inverting things, doing things a little bit inside out. So rather than calling a method that's going to hand you back a nice, easy-to-use object that has everything fully populated, instead you're going to call a method where you say: look, the data is here (you hand it a span), and here's a callback; please invoke that callback when you've found the thing I'm interested in. And it will invoke your callback and say, OK, I found it, here it is, and then you can go from there. It doesn't always have to work like that, but that is one approach. It's particularly important if you're dealing with memory that might actually be on the stack, because you can't really do it any other way; once you've returned, it's gone, right? So if you're doing that sort of in-place processing, you sort of have to do it that way. It's not actually how the sort of middling System.Text.Json API works, though. System.Text.Json has several different APIs you can use. You can do classic serialization, or you can go right down to the lowest level, a kind of streaming API. But the thing that gave us that 10-times speedup was actually in between: the JsonElement API. And that's a really interesting approach. With that one, you have a big block of data, say a byte array, and you wrap it in a thing called a JsonDocument. That JsonDocument is really just a pointer to the array and a little index, so it can remember what's where in the document. And then you can say, give me the root element, and it hands you back a struct, a JsonElement struct. That's not going to allocate anything. It returns you a value; it copies that, but the struct is tiny. The struct is basically an index into the document. It says: I start here and I end here. That's more or less all that's in it. So it's very cheap to copy around; it's not copying any real data. And then you can ask it, OK, tell me how many properties you've got, or give me the property with this name, and it will return you another struct, another JsonElement, which is just the property you asked for. And then you can say, OK, I believe that property's value is an int, and you ask it for that. So you basically have these value types that can be wrapped around the underlying data, and they're all just pointing at different bits of it. You could have done the same thing with an array of integers, right? You could design an API where you pass an array around (it's the same array every time, so there's no copying), and you say, OK, right now I'm dealing with the bit that starts at byte 907 and goes on for 12 bytes. You could do it that way, but it would be extremely painful. JsonElement presents a friendlier wrapper around that, but the mindset is still there. The basic mindset is: there's a thing that I must not copy; I have to walk up to it and work out which piece I'm interested in looking at next. Rather than: I want to convert that thing into a .NET-style object of some kind. That's the difference. I've got a tool for inspecting a thing that's already there, versus I'd like to build a description of the thing. The second one is the classic approach, but the first one is the mindset that will get you the high performance. It's quite a subtle thing. Critically, the things that you get back are just pointing into something that was already there to start with. That's the fundamental difference with JsonElement and things like it.
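A minimal sketch of the JsonDocument and JsonElement pattern just described; the document contents are invented for the example.

```csharp
using System;
using System.Text;
using System.Text.Json;

class JsonElementSketch
{
    static void Main()
    {
        byte[] utf8 = Encoding.UTF8.GetBytes(
            "{\"vessel\":{\"name\":\"Ever Given\",\"lengthMetres\":400}}");

        // JsonDocument holds the buffer plus an index of what is where.
        using JsonDocument doc = JsonDocument.Parse(utf8);

        // Each JsonElement is a small struct pointing into the document;
        // passing these around copies a few bytes of index, not the data.
        JsonElement root = doc.RootElement;
        JsonElement vessel = root.GetProperty("vessel");

        // Interrogate values in place. ValueEquals compares against the
        // underlying UTF-8 without materializing a System.String first.
        if (vessel.GetProperty("name").ValueEquals("Ever Given"))
        {
            int length = vessel.GetProperty("lengthMetres").GetInt32();
            Console.WriteLine($"length: {length}m");
        }
    }
}
```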
Christian Wenz:
Are there any potential downsides? I mean, if you work with callbacks, as you mentioned before: things like the heap getting full? Or is it just that it's, I don't know if "less intuitive" is the right term, the approach people are less used to? Is that maybe the hardest kind of shift or change you need to make? Or are there common traps or pitfalls?
Ian Griffiths:
There are a couple of things that are hard about it. One is that you keep running into things you just can't do. So you go, oh, I'll just await this. Oh no, I can't await this, because if I await it, I've now disqualified this method from using span. When you use await, C# rewrites your method and says, OK, I'll just put that in an object that can live on the heap, so it'll still be there when we come back. It's like, well, you've now got an object on the heap, and a span cannot survive that kind of asynchronous boundary. So you just don't get to use it in an async method. It's just off limits. So that's a big deal.
Christian Wenz:
Do you get a warning in...
Ian Griffiths:
Oh yeah, it tells you. It just says no. It just refuses to do it, because it's all type-safe. This is the point.
Christian Wenz:
Yeah, yeah, exactly, exactly.
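A sketch of the compiler enforcing those rules; the broken methods are left commented out so the file itself still compiles.

```csharp
using System;

class SpanRules
{
    // Fine: the span never outlives the method it's used in.
    static int Sum(ReadOnlySpan<byte> data)
    {
        int total = 0;
        foreach (byte b in data) total += b;
        return total;
    }

    static void Main()
    {
        Span<byte> data = stackalloc byte[3];
        data.Fill(1);
        Console.WriteLine(Sum(data)); // fine: used and discarded before returning
    }

    // Compile error: a span local can't live across an await, because the
    // method's locals get hoisted into a heap object when it is suspended.
    //
    // static async System.Threading.Tasks.Task<int> BrokenAsync()
    // {
    //     Span<byte> scratch = stackalloc byte[16];
    //     await System.Threading.Tasks.Task.Delay(1);
    //     return scratch[0];
    // }

    // Compile error: can't return a span that refers to this method's own
    // stack frame; that memory is gone the moment the method returns.
    //
    // static Span<byte> BrokenReturn()
    // {
    //     Span<byte> local = stackalloc byte[16];
    //     return local;
    // }
}
```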
Ian Griffiths:
It won't let you break the rules. And as for the callback-based thing: I should clarify that that's kind of the last mile of performance. You wouldn't do that unless you really were pushing it, so I wouldn't go straight for that approach, but if you need to, you can. The thing with that one is that it's really easy to accidentally throw away all the benefits by accidentally introducing an allocation. Because when you start using callbacks, if you use lambdas, it's really easy to accidentally capture some variables and get the compiler to allocate something for you. You've got to be really careful to avoid that. It's easier now that you can stick the static keyword on a lambda to say, I don't want to capture anything, and the compiler will tell you if you accidentally do. So it's a bit easier than it used to be. But that's a classic mistake. Often, though, the hardest thing is that flow control becomes nigh-on impossible. It's actually really hard to understand what this sort of code is doing sometimes. It takes you back to what it was like to write asynchronous code before we had async/await. You often end up having to build things as little state machines, for example.
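A minimal sketch of the lambda-capture trap and the static-lambda guard; the Process helper and its signature are invented for the example.

```csharp
using System;

class CallbackSketch
{
    // Hypothetical callback-style API for this example.
    static void Process(ReadOnlySpan<byte> data, Action<byte> onByte)
    {
        foreach (byte b in data) onByte(b);
    }

    static void Main()
    {
        byte[] buffer = { 1, 2, 3 };
        int threshold = 1;

        // Captures 'threshold', so the compiler allocates a closure object
        // on the heap: a quiet way to undo a zero-allocation design.
        Process(buffer, b => { if (b > threshold) Console.WriteLine(b); });

        // 'static' (C# 9 and later) makes any capture a compile error, so
        // the accidental allocation can't sneak in.
        Process(buffer, static b => Console.WriteLine(b));
    }
}
```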
Ian Griffiths:
And actually there's a great example of this that I was working on just recently. As some people may know, I recently took over as the main maintainer of the Rx.NET project. System.Reactive originated inside Microsoft and was one of the first internal Microsoft libraries to be open sourced, and it moved into the .NET Foundation when that was created. It got sadly neglected for a couple of years, and then my employer asked if we could take it over, because we love the library, and they were prepared to pay for some of my time to actually do the work on it.
Christian Wenz:
That's fantastic.
Ian Griffiths:
We have
basically resurrected it. It last had a release just before .NET 5 came out, and then nothing since; it basically all ground to a total halt. We've just released a new version of it in the last couple of weeks, and we have plans for what we're going to do in the future. But anyway, the reason I'm bringing this up: someone very kindly provided some example code illustrating a feature they would really like. Technically it was actually in the interactive extensions, but it's part of the same family of libraries. They wanted an operator that's in Rx but isn't available for IAsyncEnumerable, and they said, well, could we have this? And they provided an implementation: look, I've written a thing that does this, so maybe we could use this. Very, very kind of them to actually provide code. So I felt a bit bad when I had to point out why we probably wouldn't be able to accept the code in the form it was in. He was using asynchronous methods and had modeled the whole thing in a way that made total sense; you can really easily see how the code he'd written did the thing he needed it to do. But I looked at it and thought, nothing else in here works like this; I'll bet there's a reason for that. So I benchmarked it, and it was 10 times slower than any of our other operators. Then I wrote a version that used the sort of techniques I'm discussing, where we don't allocate things, we just look at the thing that's there and go around in a loop to do the work, and it was ten times faster and allocated a fraction of the memory.
Christian Wenz:
Did it look anything like what that person turned in?
Ian Griffiths:
It looked absolutely nothing like what this guy had written. I basically wrote the whole thing completely from scratch. It wasn't a thing you could have turned the original into; you basically go, I'm going to have to rewrite this with a different paradigm. And if you're used to state machines, it looked like a state machine, because that's what it was. That was how I approached it, because I find state machines much easier to get right than multiple concurrent processes, and it avoided having any multi-threaded locking code in there. His version was concurrent, and he actually had a really subtle threading bug in it. So yeah, you need to change the way you approach the problem. Often it's not just a case of some tweaks, or changing the types you use; it's often a case of, I'm actually going to have to restructure this and make it approach the data in a fundamentally different fashion.
Christian Wenz:
So going back to the question, or kind of adding to it: when should you start doing that? I don't want to say that if you have a hammer, every problem looks like a nail, right? But isn't the risk that when you're slated to build, say, this simple CRUD website, you start thinking, hey, I need this span here, and I send some data there, and I don't want to allocate anything? So is there, I don't know, a rule of thumb, or does it come from experience?
Ian Griffiths:
I think it has to come from experience, because there's a really important point here. I wish I could remember who, but someone posted a blog post years ago listing all the timings developers need to know. How long does it take to do this? And it was just a list. How long does it take to add two numbers together? A fraction of a nanosecond on a modern computer. How long does it take to fetch a value from memory? Well, that was more, but still measured in nanoseconds. And it went on: what if it's not in the cache and I have to go out to the actual memory chips on the motherboard? Whoa, that's a whole load slower. What if it's on disk? Wow! Civilizations will rise and fall in the time it takes to get a response from the disk controller.
Mark Miller:
Ha! Nice!
Ian Griffiths:
And you need to contextualize the thing you're doing. Your example of a CRUD-type website: what do you do? You're dealing with requests that come in over the network, and you're probably going to make a request off to a data store to actually handle them. That's going to take forever, and it's going to be asynchronous, so you're going to have to park everything. So you're going to have to allocate. You just can't avoid it. So there's absolutely no point implementing the sorts of stuff I'm talking about in that example. And how do I know that? Well, I know it's going to allocate anyway because of the async-ness, but also I just know how slow these things are. And just to give you an example from a different domain: back when I used to teach Windows Forms courses, someone asked me about event handlers, saying, oh, is this way of attaching an event handler or this one more efficient? I said, well, that second one you've shown might save you about 10 nanoseconds, but consider this. What are you actually doing here? You're handling a click event. What has to happen for a click to occur? Well, the user has to press the mouse button. That's going to take a good 50 milliseconds or so for the button to move down and actuate the physical mechanism inside the mouse.
Christian Wenz:
And they have to release the mouse button, right? So even more delay.
Ian Griffiths:
Depending on what, yes; if it's a proper click, yeah. And the mouse has to send a message to the computer. Back then, that would have been the PS/2 port, which is a serial connection, so it has to send a certain amount of data before the PC can even begin to process it. That will then sit in a chip somewhere on the motherboard until the CPU handles the interrupt that was generated, which could be another few microseconds. It then has to work out what to do with that: OK, the mouse just got clicked; the kernel works out what application to send that to. So it hands that off to the Windows subsystem, which goes, oh, OK, the focus was here, so we're going to put it on that application's message queue. Now the scheduler has to run the message-pumping thread of that application, and the message has to get pumped up through the message queue. So, you know, you're looking at a nanoseconds difference, or maybe a 10 nanoseconds difference; it's going to be totally lost in the noise. But until you actually think, well, how long do things take? How long does everything I'm doing here take? You don't have the context to answer whether it's worth optimizing. So you have to contextualize before you even think about whether you should optimize.
Christian Wenz:
So not from the start, then. No StringBuilder from the start, right?
Mark Miller:
Yeah. I call this UI time, right? When we're dealing with UI time, user interface time, everything slows down in terms of your rules. You get basically about 130 milliseconds to get your stuff together, finish it all up, and get that display up in response to the click or the button press, that sort of thing.
Ian Griffiths:
Yeah, in the 1960s, IBM said it was 100 milliseconds, but standards have slipped since then.
Mark Miller:
What?
Ian Griffiths:
Computers aren't as fast as they were in the 1960s.
Mark Miller:
Yeah. I'll give you 130 milliseconds.
Christian Wenz:
Exactly.
Mark Miller:
What's interesting about this is that it's kind of a very focused version of what I normally do when I'm worried about performance. In that scenario, the mantra is always "do less," right? No matter what, you want to do less. First of all, it's the same thing you're talking about: you're finding the bottleneck first, and then let's just do less work. What's fascinating to me, Ian, is that you're focused on doing less copying of memory.
Ian Griffiths:
Mm-hmm.
Mark Miller:
Right. And this idea of the mind shift is also really fascinating to me, in terms of: OK, I've got existing code with, you know, lots of objects, each object has lots of data, and that data is being analyzed continuously as it's being changed, right?
Ian Griffiths:
Mm-hmm.
Mark Miller:
And that's using standard .NET. How do I take that? What's my first step? Is it simply learning about Span<T> and trying it out in small projects? Is there a document or book or training I should be focusing on? Where should I go to become the best, like, even better than you, Ian?
Ian Griffiths:
Ah, that can't be done.
Mark Miller:
How do I get better than you? Because I'm incredibly competitive.
Ian Griffiths:
So, well, the first thing you need to do is buy a copy of my book, because the final chapter is all about this. So there's a whole chapter on Span<T>. Or you can read the docs online, which are free, but that's obviously not as good for me. But I would totally experiment and measure: try to write benchmarks that illustrate a performance difference, and if you can't do it, you haven't understood it yet. You should be able to think, well, this way ought to be faster than that way; let me see if that's true. So it's science, really. You hypothesize about what you think your understanding is, build a theory, and then build an experiment to test that theory. And that is how we learn. That is the way, in my opinion, to get better at this.
Mark Miller:
Ian, when you write a benchmark, do you use a standard set of tools? Are you using the Stopwatch class in .NET? Or how are you doing it?
Ian Griffiths:
I mostly use BenchmarkDotNet, because it just does everything I want right away with very little fuss. It's really, really good. I think actually it's one of the most significant contributions to the .NET community out there. It's an amazing tool. I've never used anything else. Occasionally I'll use Stopwatch if I actually want to measure, in context, how long this thing is taking in this program when I'm doing this operation. Then I'll use a Stopwatch. But if I want to benchmark something to get a general idea of what the performance characteristics are, BenchmarkDotNet every time.
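A minimal sketch of the kind of experiment Ian describes, using the BenchmarkDotNet NuGet package; the sample string and offsets here are invented for illustration. The hypothesis under test: slicing with AsSpan should avoid the copy and allocation that Substring makes.

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // report allocations alongside timings
public class SliceVsSubstring
{
    private readonly string _line = "2023-05-12T10:42:17Z,sensor-7,19.84";

    [Benchmark(Baseline = true)]
    public int Substring()
    {
        // Copies the middle field into a newly allocated string.
        string field = _line.Substring(21, 8);
        return field.Length;
    }

    [Benchmark]
    public int Slice()
    {
        // Hypothesis: a span is a no-copy, no-allocation view over the same chars.
        ReadOnlySpan<char> field = _line.AsSpan(21, 8);
        return field.Length;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SliceVsSubstring>();
}
```

Returning a value from each method keeps the JIT from optimizing the work away; if the measured difference doesn't match the hypothesis, that's the cue to revisit your understanding, exactly as Ian suggests.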
Christian Wenz:
Excellent. I really love this discussion, and I could go on forever, but I think the one thing we can hardly optimize is the time we have for an episode of Adventures in .NET. So I'm afraid we have to pivot over to picks now. So Ian, would you like to start with your pick?
Ian Griffiths:
I would, yes. There is a library that almost no one knows about in .NET called, let me try to pronounce this in a way your audience will understand, Z3. Being British, I call that "zed three", but for you it's "zee three". So the final letter of the alphabet and the number three. If you search for Z3.Linq, you will find the source code for a library called Z3.Linq. Z3 is a solver. It solves certain kinds of constraint-driven problems. It was developed by Microsoft Research, and it's a particular kind of solver called a Satisfiability Modulo Theories solver, and I don't have time to explain what that means. But
Christian Wenz:
Actually, I wrote my thesis about satisfiability, 3-SAT, so when you have three of them. So that is super amazing. I wish I had that back then; unfortunately, it's too late now.
Ian Griffiths:
Oh, okay. So it's neat. The thing we found it actually really useful for is operational research type problems, where you have organizational constraints and you're trying to optimize for something: where should I send my oil rig now? What's going to give me the maximum return, bearing in mind these costs, these constraints, and these sorts of things? But you could use it to solve a Sudoku puzzle as well; it's perfectly able to do that too. So you describe a bunch of constraints, and it will tell you whether they can be satisfied, and if so, it will come up with a solution that satisfies the constraints. And you can use it for all sorts of things, timetable planning, although that's actually a surprisingly complicated problem and it might take forever to solve just because it's inherently difficult. But it's great. So that's the Z3 library, a thing Microsoft developed over several years. Z3.Linq is .NET bindings over the Z3 model, so you can use LINQ query syntax to interact with the thing. You don't have to learn its own slightly strange language for expressing constraints; you can express them as lambdas in .NET using expression trees. And it's just so cool.
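A rough sketch of the query style Ian is describing, modeled on the Z3.Linq samples; treat the exact names here (Z3Context, NewTheorem, Solve) as assumptions drawn from that library's documentation rather than a definitive API.

```csharp
using System;
using Z3.Linq; // .NET LINQ bindings over Microsoft Research's Z3 solver

public static class Program
{
    public static void Main()
    {
        using (var ctx = new Z3Context())
        {
            // Constraints expressed as ordinary LINQ; the lambda expression
            // trees are translated into Z3 assertions behind the scenes.
            var theorem = from t in ctx.NewTheorem<(int x, int y)>()
                          where t.x > 0 && t.y > 0
                          where t.x + t.y == 9
                          where t.y == 2 * t.x
                          select t;

            // Returns a satisfying assignment (here x = 3, y = 6),
            // or null if the constraints cannot be satisfied.
            var solution = theorem.Solve();
            Console.WriteLine(solution);
        }
    }
}
```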
Ian Griffiths:
It's a brilliant piece of technology.
Christian Wenz:
You just have to be careful that you don't allocate anything in those lambdas, right? That's the takeaway I got.
Ian Griffiths:
Hahaha
Christian Wenz:
That is actually an awesome pick. So Sean isn't here today, so I thought I'd do what he does: I will have a streaming pick. Actually, since it's Friday late evening in my time zone as we're recording this, I wanted to watch Tetris on Apple TV+. But I was overruled, so we were watching Ghosted instead, which is also on Apple TV+. I would say it's mindless fun: guy falls in love with girl who happens to be a CIA spy, so it's an action comedy. Very, very enjoyable. Unfortunately, I had to walk out of the room to do the podcast, so I have no idea how it ends. The suspense is killing me, but no, I think we're good. It was great mindless fun. After that, I think I'm absolutely going to watch Tetris. I don't know if you've seen it yet, but I've heard great things about it; it's about the inventor of Tetris.
Ian Griffiths:
Sounds good.
Christian Wenz:
And Mark? I'm happy to... I can't hear you.
Ian Griffiths:
I can't either.
Mark Miller:
Sorry. So I'm not sure if I've said this before on the show, but if I have, it's worth saying again.
Christian Wenz:
No, very much so.
Mark Miller:
Here it comes: it's Bullet Train. This movie...
Christian Wenz:
We talked about this, yes. I love it.
Mark Miller:
I did talk about it. I love it.
Christian Wenz:
Yeah, but I mean, I also thoroughly enjoyed it. Again, mindless fun, maybe even on a higher level than Ghosted, but...
Mark Miller:
Yeah. All right. So this was one of the first things I did, because it seems so far away in time. All right, I've got a second backup one, since I've already said Bullet Train; if you haven't seen it, kids, you'd better see it, because I'm going to bring it up again on the show. Number two is: we started watching Breaking Bad again, the series, and I loved it when we first went in. In general, I thought it was really well written, really good acting, and we're just going into it again. So, you know, yeah.
Christian Wenz:
So does it help if you...
Ian Griffiths:
I knew you'd be reminded of something with your current look.
Mark Miller:
Ha ha, yeah.
Christian Wenz:
There is some resemblance, actually. So do you have a different view on the show now that you know how it ended and how everything developed? Do you kind of notice some events being foreshadowed, or... Okay.
Mark Miller:
Well, I think I enjoy it more. Like, we were watching episode two tonight, and Jesse Pinkman and Walt are in an argument, and they both conclude that they're never going to see each other again. And I turned to my daughter...
Christian Wenz:
I sense a spoiler coming there.
Mark Miller:
Yeah, I turned to my daughter and I said: they're in it for about five more seasons together. Easy. But yeah, it's great, because one of the things that was so endearing about that character, Jesse Pinkman, is that he just kept messing up spectacularly. It was part of who he was, right? And he was this weight that Walt both loved but carried as well. And seeing it from the beginning, in that space, as it's developing, is, I find, really delightful as we're going in. So
Christian Wenz:
Excellent.
Mark Miller:
yeah, I liked it.
Christian Wenz:
Great picks from everyone. Thank you, Ian, for being on the show. It's been a fantastic episode; I really enjoyed this. Please come back. Thanks, everyone, for tuning in, and hope to see and hear you again next week here on Adventures in .NET. Bye.
Ian Griffiths:
Bye, thanks very much for having me.
Mark Miller:
Thanks.