Performant Applications using the Actor Pattern & Akka.NET with Aaron Stannard - .NET 206

Aaron Stannard joins the Adventures in .NET team this week to discuss Akka.NET. He digs into the Actor model, the reasons to use it and what gave him the impetus to port Akka to .NET.

Special Guests: Aaron Stannard

Show Notes


Links


Picks

Transcript


Hello, and welcome to another episode of Adventures in .NET. I'm Shawn Clabough, your host. And with me today is your cohost, Caleb Wells. Hey, Shawn. Hey.

Hey. How's it going? Good. Good. And Joel Sjobert.

Hey, Joel. Buddy. This is Joel. Hey. Hey.

That is Joel. Yep. Nice to have you here this week, Joel. I know you've been real busy, so it's nice to have you. And I guess I'll start things off with my tip of the week.

One thing I was working on this week ended up flagging up some things for me. So I don't know if you guys have ever compared things to either an empty string or string.Empty, like this variable equals string.Empty. Do you know that that actually creates more instructions when it gets compiled than using either the string.IsNullOrEmpty function or comparing the length to 0? Did not know.

Yeah. So if you're one of those performance nuts and just wants to make sure everything is as good as it can be, don't compare things to string.Empty or an empty string. Use .Length equals 0 or use the string.IsNullOrEmpty function. Oh, that's cool. Does the, does the length equals 0?

Does that still work with null? It doesn't crash? You do have to check; it's different behavior if it is null. So if you try to use length, you've got to also check for null if that could be a possibility. Of course, with the new null checking, you know, that should be a different experience depending on what setting you have for null reference checks.

Great. Alright. Our guest is all about performance. Right? Is that right?

I think you are right. Okay. Yeah. Our guest today, Aaron Stannard. Welcome, Aaron.

Hi. Thanks for having me. Yeah. So are you all about performance? Oh, yeah.

You know, my sort of background with what I do, I'm the founder and CEO of a company called Petabridge. We help .NET businesses build highly scalable applications. And so my background is largely in performance from a distributed programming space. So the idea of being able to process as many requests per second in sort of a horizontally scalable fashion, but, of course, part of accomplishing that also means performance optimizing our framework so it introduces as little overhead as possible.

We want our customers' application code to basically be where most of the performance issues come from, not anything that we build ourselves. Nice. You know, I've been around so long that I still have that every byte and every little bit of performance thing counts, even though processors are so fast today that it's not such a concern. But it's just been my mindset since I ever got into development to try to be as conscious about that as I can. You know, so before I moved to Texas a few years ago, I was living in Los Angeles and running sort of a large scale analytics and marketing automation SaaS startup down there.

But some of the folks we had in our space, you know, because, like, Activision headquarters was just down the street from me, for instance. We had a lot of video game developers. And there was a story some old timers told about performance, specifically coming in within the memory constraints of old consoles, like the SNES, Atari, all that sort of stuff, where, you know, one of the issues you'd run into is having your game be bigger than could fit into the memory on one of those old cartridges you'd have to install in there. So one of the things the veteran programmers started doing, and this is like your late eighties, early nineties, is day 1 of a brand new project.

Let's say they're building a Final Fantasy game or something like that. The senior developer would go ahead and allocate, let's say, a 1 megabyte block of RAM or maybe a 4 megabyte block of RAM at the very beginning of the application. And so all the performance profiling and everything else that happened across this team of several hundred developers would, you know, try to basically get all the game assets and everything else under that sort of hard memory limit they were subject to. And inevitably, at the very end of the project, they're always racing to try to fit under this limit. There's a couple hundred, let's say, kilobytes you just can't seem to get under.

And that veteran programmer, once that complaint would come up, would go ahead and just comment out that block that allocated 4 or 5 megs of memory at the very beginning, and all of a sudden the game would come in under its memory limit. You know? It kinda reminds me, yeah, it reminds me of one of my first computers, and I've said this on the show a few times. My first computer had 4K of RAM. Oh, wow.

And I wrote a hangman game for it. I could only fit 25 words before it would crash because it was out of memory. So, you know, that's where, you know, my memory constraints first started. So, yeah, I get that so much, but everybody was only gonna need 640K. Right?

Right. Right. You know, I'm kind of on the other side of that whole thing is, I was never into the memory side of things. But as far as CPU speed and power, I've had the fortune or misfortune to work a lot of jobs, even until very recently, where CPU throughput was a huge factor. One of them was, like, a big online stock trading company where you're just getting buried after market open, and there's just never enough CPU to go around.

And then a second one was I did some consulting out in Palo Alto for the airline industry, trying to deal with all the traffic you get from people trying to book flights and looking up what seats are available and what price the ticket is gonna be. So the look to book ratio is about 500 to 1. 500 looks for every actual booking of a flight. And so the amount of load that comes in is just crushing. And so I've actually worked in some segments, even till, like, last year, where CPU and optimizing things and measuring performance, like, over and over is still, like, a reality.

Those two segments you mentioned right there, let's say doing a real time flight search or doing things like algorithmic trading or even running an exchange, are some of the types of things that my customers at Petabridge do. And they use Akka.NET, which is this distributed actor programming framework that I co created. I think, like, gosh, it must be 6 or 7 years ago now, 2013, sort of when we started on it. And, yeah, those are exactly the types of spaces that I deal in on a day to day basis. You know, when I get asked, you know, sort of what does my business do? My technical answer for that is we help customers with distributed applications that require massive amounts of concurrency.

You know, I've helped one customer, an airline, build that sort of inventory search, which has all sorts of fun problems with it, namely that all of your flights, and if you're bundling it with things like hotel rooms and rental cars, all of the inventory is highly perishable, meaning that you have to be able to run these sort of real time searches that can be pretty broad in terms of the criteria. So you might wanna, you know, say, I wanna find the cheapest possible vacation package for a family of 4 between any of these different possible destinations, but under this price range. The combinatoric space, meaning all the different parts of the information graph you have to search to complete that, is huge. And also because you're subject to ecommerce sort of constraints, we have to get all that done in, let's say, 5 milliseconds. That way you can actually begin serving up the response and getting it rendered, because you wanna have the total end to end time for that be in the neighborhood of maybe 250 milliseconds total.

That way you get the best possible conversion rate on the site when people are looking at booking something. There's sort of a direct correlation between response times and conversion rate. But that's the type of thing people have used Akka.NET for in the past. So Akka.NET is an implementation of the actor model, which we can get into. But what they do is they basically segment all the different parts of their domain into these logically independent actors, which all kind of function like their own little individual processes. And they'll go ahead and stick the entire search graph into those actors.

Those actors can receive updates whenever a flight gets booked or a hotel room gets booked, and that'll go ahead and reflect some of that inventory getting diminished. But the searches go ahead and run in memory inside a single process that might have, like, an entire copy of the search graph inside of it. And so they can handle hundreds of thousands of parallel queries, all in one box, and they can also replicate that across many boxes simultaneously. That's some of the types of things that we help our customers design, using Akka.NET to do it.

Yeah. That looked great. Looking through one of your reference materials, it had a really great write up on that. So kind of at a high level, tell me if this is correct. It looked like Akka.NET basically had actors and then probably message queuing between the different actors, and then each actor is protected by a serialized event queue.

So you can control serialization and threading that way. Yep. Every actor has its own. So, you know, the way people tend to think of sort of messaging systems are these really broad based queue centric systems where you dump all these events into NServiceBus or RabbitMQ or whatever. And then you have n number of sort of arbitrary consumers that pull messages off the queue and then put a little read receipt on the queue when that work's finished so that message can be reliably delivered. Exactly. Akka.NET is a decentralized version of that concept.

Where rather than one big queue that all the messages pull from, each actor has its own independent in memory queue. So it's really like hundreds of thousands or even millions of really small queues. And the backing implementation for that in Akka.NET is just the ConcurrentQueue structure from System.Collections.Concurrent. It's tough to beat that in terms of performance. And in .NET Core, I think it was 2.1 and then again in 3.0, they actually made that data structure much more efficient in terms of the internal sort of stuff it uses for managing different segments and growing the queue and that sort of thing. But, yeah, each actor is, a very important word here, serialized. And when we say that, we're not talking about JSON serialization. We're talking about the sort of serial processing nature of those actors. Each actor can only process one message at a time, and the really big impact of that from an application design standpoint is it gives you a much simpler model for managing state concurrently inside your system.

All of the state that is held inside private fields inside an actor is by default always thread safe, because the actor can only do one thing at a time when it's processing a message. It can only process that one message. And when it's done processing that message, it moves on serially to the next message in the queue. So all those state transitions don't need to be protected with locks or synchronization mechanisms or anything like that inside the actor. So rather than having to work on this sort of complicated shared state concurrency model, which even experts screw up on a regular basis because it's difficult, not really being able to figure out what's the right way to protect a critical region or whether or not this read is really thread safe or not.

Or is that other piece of state I'm calling in this external library thread safe, is just a notoriously complicated problem. By fragmenting all that work into many different actors, each one owning a small portion of the domain, you're able to go ahead and essentially guarantee, look, I don't care what's happening inside the other actors; in this one actor that I'm looking at right now while I'm doing development, I wanna make sure all those operations inside that actor are just working sort of on one message at a time. And therefore, all those state transitions, when I'm, let's say, adding items to a collection inside that actor, only one item can be added to that collection at a time because concurrent access is not possible in that design. So that's one of the sort of really important kind of paradigms behind the actor model.

This idea of serial access and private state that is not accessible via method calls or anything else. The only way to get state out of an actor is to send a message, and the actor will copy its state into an immutable response object. It'll go ahead and basically make a copy and send it back. That way you don't get any side effects or anything else like that inside an actor programming paradigm. There's nothing worse than trying to debug a parallel execution problem between objects on multiple machines.
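To make the serialized mailbox idea concrete, here is a minimal sketch in Python. This is not Akka.NET's actual C# API; every name here is made up for illustration. Each actor owns its own queue, drains it one message at a time on a single thread, and only ever hands out copies of its private state in reply messages:

```python
import queue
import threading

class CounterActor:
    """Toy actor: private state plus its own mailbox, one message at a time."""

    def __init__(self):
        self._count = 0                # private state, never touched by senders
        self._mailbox = queue.Queue()  # the actor's own in-memory queue
        threading.Thread(target=self._run, daemon=True).start()

    def tell(self, message, reply_to=None):
        # Senders only enqueue; they never read or write _count directly.
        self._mailbox.put((message, reply_to))

    def _run(self):
        while True:
            message, reply_to = self._mailbox.get()  # strictly serial processing
            if message == "increment":
                self._count += 1                     # no locks needed
            elif message == "get" and reply_to is not None:
                reply_to.put(self._count)            # reply with a value, not a reference

actor = CounterActor()
for _ in range(1000):
    actor.tell("increment")

reply = queue.Queue()
actor.tell("get", reply)
result = reply.get()   # blocks until the actor reaches the "get" message
print(result)          # 1000: every increment was applied serially, no races
```

Because all state transitions happen on the actor's single processing loop, the counter needs no locks even though messages could come from many sender threads.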

That is literally the times that I've been stuck on one problem for up to weeks at a time. And having this alleviate that is fantastic. One question I had from one of your descriptions was, if I've got another actor I wanna talk to, you said it's a decentralized version of queues. Is there a centralized middleware that I put it on to then get over to that actor, or am I actually trying to manage which actor's queue I'm talking to from my actor object? So the answer is you can do both.

Typically, most actor communication, you basically get what's called an actor reference, which is essentially a handle for being able to send a message to another actor. That actor reference could be local, meaning that the actor you wanna talk to is hosted inside the same process that you're in right now. And that just gives you a pointer directly to that actor's queue. That's what that does. But we also have the possibility of remote actor references, where the actor you wanna talk to is actually hosted on another machine, or maybe just a different process on the same machine, and you don't have access to its memory address space.

So the only way you can send messages to it is through some sort of inter process transport. And the default that we use for that is a TCP connection. Using one of the modules in the Akka.NET framework, called Akka.Remote, makes it transparent for actors between multiple processes to communicate together. So there's a couple of different actor messaging paradigms for how you talk to other actors. One approach is you can use what's called an actor selection, which is a way of essentially typing out an actor's address.

Every actor gets its own globally unique URI for the network address space that it's in. So you can kind of look up an actor by its address and communicate with it that way. But there's also a number of different abstractions built into Akka.NET that make it so you don't have to do that. Okay? Actor selections are usually a tool of last resort more often than not.
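The reference-and-address idea can be sketched in a few lines of Python. This is purely illustrative (Akka.NET's real types are things like IActorRef and ActorSelection, and the path format below just imitates its URI style): every spawned actor is registered under a unique URI-style path, a local reference is just a handle to its mailbox, and a selection looks an actor up by typing out its address.

```python
class ActorRef:
    """Handle for sending to one actor; locally it points straight at the mailbox."""

    def __init__(self, path, mailbox):
        self.path = path          # globally unique URI-style address
        self._mailbox = mailbox

    def tell(self, message):
        self._mailbox.append(message)

class ActorSystem:
    def __init__(self, name):
        self._name = name
        self._registry = {}       # path -> actor reference

    def spawn(self, relative_path):
        mailbox = []
        ref = ActorRef(f"akka://{self._name}{relative_path}", mailbox)
        self._registry[ref.path] = ref
        return ref

    def actor_selection(self, relative_path):
        # Look an actor up by its address string (the tool of last resort).
        return self._registry[f"akka://{self._name}{relative_path}"]

system = ActorSystem("factory")
camera = system.spawn("/user/cameras/cam1")
print(camera.path)                           # akka://factory/user/cameras/cam1

found = system.actor_selection("/user/cameras/cam1")
found.tell("adjust-focus")
print(found is camera)                       # True: same actor, found by address
```

In a real remote scenario the reference would wrap a network transport instead of a list, which is why both local and remote sends can share the same tell interface.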

We have what's called routers, which are these actors that are able to find routees based on some criteria, such as where they might live in the actor hierarchy. Actors are organized in sort of a family tree structure using parent child relationships. So you have your sort of topmost actor at the top of the hierarchy. That actor usually owns sort of the top level of a domain. So for instance, if you were building an IoT application, you might have a couple of different sensor families that you work with, or maybe different protocols in the IoT application.

So for instance, if you're using Akka.NET to automate a factory floor, which is actually a pretty common use case, you might have one actor that owns the family of all of the pickers that are used for picking things up off one assembly line and moving them. And you might have another actor that owns the camera protocol, and the camera protocol might get used to figure out where is this object on the conveyor belt, because you only wanna move the picker to go and get it when you know that object's in the right spot. Right? So you might have a parent actor in the actor hierarchy that represents those 2 different protocols, and that actor itself might have a hierarchy of children underneath it that decompose all of that complexity down into smaller parts. So for instance, if you have 4 cameras, that parent actor might have one child actor for each of those cameras.

Excuse me. And then those cameras themselves might need multiple actors, maybe because they have one actor that's responsible for adjusting the focus, another that's responsible for doing an optical scan to figure out where on the Cartesian plane this device is. And then all that information might get communicated to the picker actor via a router of some sort. And that router might say, you know what? You should route all your messages to the actors that are in the picker part of the actor hierarchy.

Now inside Akka.Cluster, which is how you build, like, SaaS applications (really highly available applications that span multiple computers tend to be built using Akka.Cluster), there's a technology in there called distributed publish and subscribe that essentially uses a topic broker, where an actor can say, I publish to this topic, and another actor on a totally different machine can say, I subscribe to this topic, and those subscriptions will get propagated throughout the network. So publishers and subscribers can talk to each other indirectly, using that pub sub mechanism as a bit of a brokering system. Now, the actor pattern has been mentioned a lot here. Let's make sure that our listeners have an understanding of the basics of what the actor pattern is.
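The topic broker idea boils down to something like the Python sketch below. This is a toy, in-process version only; the real distributed pub/sub propagates subscriptions across cluster nodes, which this sketch skips, and all names here are invented:

```python
from collections import defaultdict

class TopicBroker:
    """Toy pub/sub mediator: publishers and subscribers never reference each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver to every current subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(message)

broker = TopicBroker()
received = []

# Subscriber (imagine it living on a different machine in a real cluster).
broker.subscribe("flight-booked", received.append)

# Publisher only knows the topic name, not who is listening.
broker.publish("flight-booked", {"flight": "UA123", "seats_left": 41})
print(received)   # [{'flight': 'UA123', 'seats_left': 41}]
```

The key property is the indirection: the publisher and subscriber are completely decoupled, which is what lets them live in different processes or on different machines.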

The actor pattern is very old, actually, as a computer science concept. It's showing up a lot more now in sort of everyday, you know, let's say enterprise software development, because cloud computing has made the actor model cost effective again. The actor model was something that was originally proposed in the early seventies. I believe the actor model white paper was originally in 1973, which makes it only about 2 years younger than the relational database, just to kinda give you a good conceptual frame there. Was this one of the Gang of Four patterns or not?

Oh, no. This just predates that. Predates that. Wow. But it's a very old idea that came from some of the early thoughts on how to do concurrent programming.

So computer scientists at this point in the early seventies envisioned that the way we would get large scale computing is through these living room sized machines that had tens of thousands of low powered CPUs. So, yeah, these are, you know, your old, you know, I'm actually blanking on the name of the Intel chips they even had at that time, but I'll put it this way. Those microprocessors they had back then aren't as powerful as the kind you can get in a calculator today. Right? So their vision for how you'd build concurrent software back then, because you also gotta remember there wasn't really a concept of multithreading yet either.

That was an invention that came about 15 years later. Their conception was you'd go ahead and break up a program into different actors, and each actor would inhabit its own core. And these different cores would work together by sending each other messages through shared memory since that was something all the cores could plug into as a common communication layer. And that was the original idea behind the actor model. So it was meant to be a way of doing highly concurrent programming when we thought the way computers would evolve was these big living room sized machines with thousands of of CPUs in them.

Moore's law kind of eliminated the entire impetus for doing that. We ended up getting a bunch of computers with a small number of really powerful CPUs instead. But where the actor model came back into use was in the late eighties with the emergence of the Internet and networking. And specifically, it was Ericsson who brought the actor model back to life. They were building some of the first digital telephony exchanges, and they invented an actor runtime, now known as Erlang, to do that. The Erlang based actor model, which is kind of the original reference implementation for all the different actor models that have come since, is this idea that your application gets broken up into hierarchies of actors. The actors towards the top of the hierarchy own the biggest part of the domain.

And then as you move down through their children, they basically own smaller and more tightly bounded parts of the context. So from a domain driven design perspective, if you, you know, have any listeners who are familiar with that, the top level actors represent the aggregate root, and the leaf node actors at the bottom are the most tightly bounded contexts of all. That's sort of how the actor hierarchy works. Now, the way actors do work is you don't invoke methods on them. You send them messages.

That's what basically causes an actor to get scheduled for execution, and that actor will process a burst of messages out of its queue. And some of the work that actor can do while it's processing messages can include, you know, doing normal object oriented programming stuff like writing to a database or calling a web service or whatever. You can do that. You can spawn other actors. You basically can create new actors and delegate work to them.

If you want, you can send messages to other actors. You can also do things like change your behavior. You can change the way you process a message while you're in the middle of processing one. And that allows you to build things like finite state machines using actors. And so the actor model makes a few basic promises using this infrastructure.
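That behavior-switching idea (exposed in Akka.NET as the Become method) can be sketched as a tiny finite state machine in Python. This is an illustrative toy, not the real API; the actor swaps out its own message handler while it processes a message:

```python
class DoorActor:
    """Toy FSM actor: handling a message can swap in a new message handler."""

    def __init__(self):
        self._behavior = self._closed   # the currently active behavior

    def tell(self, message):
        self._behavior(message)         # dispatch to whatever behavior is active

    def _closed(self, message):
        if message == "open":
            self._behavior = self._open     # "become" the open behavior

    def _open(self, message):
        if message == "close":
            self._behavior = self._closed   # "become" the closed behavior

door = DoorActor()
door.tell("close")                      # ignored: already closed
door.tell("open")
state_after_open = door._behavior.__name__
print(state_after_open)                 # _open
door.tell("close")
print(door._behavior.__name__)          # _closed
```

Because only one message is processed at a time, the behavior swap can never race with another message in flight, which is what makes actor-based state machines easy to reason about.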

The first promise is that every actor has a globally unique address, meaning that if you need to send a message to an actor over a network, once you have a reference to that actor or you know what the actor's address is, you can reliably communicate with that actor so long as you know that it exists. The second guarantee is that every actor will process exactly one message at a time. Now, there are some actors in Akka.NET that break that promise, such as routers, where they can process multiple messages concurrently, but that's because they don't have any state. Those actors are essentially static, would be a way of thinking about it.

They're static actors, so therefore they don't need to be subject to that same guarantee. And then the last guarantee that actors make is that they will process every message in the order in which it's received. So by default, they're gonna use first in, first out processing. So if you wanna send, let's say, a control sequence to an actor where you say, you know, I want this device this actor's talking to, if we go back to the IoT example, I want this camera to go ahead and adjust its focus, give me an x y coordinate reading, and then I wanna move the picker to go ahead and grab the object at those coordinates.

Well, if you wanna make sure those messages execute in that order and you have one actor that kind of sits on top of all those devices, you just send that actor those messages in that order, and that's the order in which they'll get executed. So those are kind of the basic promises of what the actor model is. But the overall goal of why do we have it in the first place, why is it something we should care about, is because concurrent programming is too hard doing it any other way, for applications that are, let's say, highly dependent on state and state management. There are some other concurrent programming models you can use when you're not as dependent on state, but the actor model is designed to provide you with a clear and understandable methodology for separating your state and reasoning about it in a way that even a programmer who, let's say, doesn't have a master's degree can manage. So the actor programming model is designed to really provide a much more user friendly way for solving some of those problems Joel talked about, of having to be able to go ahead and have 2 different processes running in parallel and being able to understand predictably what each one of them is going to do given the messages that they were sent.
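That FIFO ordering guarantee is easy to see in a sketch. Below, one toy actor drains its mailbox on a single worker thread, so the focus/read/move control sequence from the IoT example executes exactly in the order it was sent (Python, illustrative only; the command names are made up):

```python
import queue
import threading

mailbox = queue.Queue()
executed = []

def device_actor():
    # One actor, one mailbox: first-in, first-out, one message at a time.
    while True:
        command = mailbox.get()
        if command is None:       # poison pill used here to shut the actor down
            break
        executed.append(command)  # stand-in for driving the real device

worker = threading.Thread(target=device_actor)
worker.start()

# The control sequence, sent in the order it must run.
for command in ["adjust-focus", "read-xy-coordinates", "move-picker"]:
    mailbox.put(command)
mailbox.put(None)
worker.join()

print(executed)   # ['adjust-focus', 'read-xy-coordinates', 'move-picker']
```

No extra coordination is needed to sequence the device operations; the mailbox ordering is the coordination.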

That's one big reason for it. The other big reason for having actors, aside from just concurrent programming, is they're very good at building highly fault tolerant and what are called self healing systems. Actors exhibit a very high degree of fault isolation, meaning that if one actor throws an unhandled exception, let's say because it received a, I don't know, a malformed message, or maybe there was a network failure when it was trying to talk to a remote service, one actor crashing has no side effects on any of the other actors, and that actor, when it crashes, will automatically be restarted by its parent actor. So the behavior the actors have when they fail is also designed to be highly predictable. So that's another one of the big benefits behind the actor model. And that's one of the reasons why Ericsson was so insistent on using that methodology for building their digital telephony networks with Erlang originally.
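Here is a minimal sketch of that parent-restarts-child supervision behavior. Again, this is illustrative Python, not the real Akka.NET supervision strategy API; the point is just that a failure replaces one child with a fresh instance and leaves everything else untouched:

```python
class Child:
    def __init__(self):
        self.state = []           # state is wiped clean on restart

    def receive(self, message):
        if message == "bad":
            raise ValueError("malformed message")
        self.state.append(message)

class Parent:
    """Toy supervisor: a child failure restarts only that child."""

    def __init__(self):
        self.child = Child()
        self.restarts = 0

    def deliver(self, message):
        try:
            self.child.receive(message)
        except Exception:
            self.child = Child()  # restart with fresh state; siblings are unaffected
            self.restarts += 1

parent = Parent()
parent.deliver("ok-1")
parent.deliver("bad")             # crash, then automatic restart by the parent
parent.deliver("ok-2")
print(parent.restarts, parent.child.state)   # 1 ['ok-2']
```

The crash never propagates upward as an unhandled error; the parent absorbs it and the system keeps processing, which is the self-healing property being described.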

They wanted to have the ability to recover from errors without dropping phone calls or messages. They also wanted to have the ability to scale horizontally. They predicted quite accurately that as cellular networks began to, you know, increase in popularity, the amount of strain that would be put on the software and hardware they deployed would go up, and it'd be much less expensive for them in the long run to have a horizontally scalable software system, meaning that if I wanna double the capacity of my network in, let's say, a big city like Detroit or New York, a much cheaper way of managing that, rather than having to redesign it, you know, every n number of years to be more and more efficient, would be just to double the amount of hardware that's there. And if I can accurately predict that if I double the amount of hardware, I'm actually gonna get double the capacity with no diminishing returns, that gives you a scalable model for being able to support growth in an application like that. And that's one of the other driving factors behind adoption of the actor model today.

It's used in a lot of popular consumer and enterprise facing applications that exhibit those types of growth behavior. A good example would be those airline search engines we were looking at for prices, but other applications that we can all relate to are things like multiplayer video games. You know, how many times have you fired up a brand new Blizzard game or something else, and that game has day 1 launch issues because they couldn't scale the multiplayer system to support all of the day 1 demand? Right?

They're talking about the unemployment systems right now. Yeah. There you go. You know, that's I guess that's one of those systems that you would hope you'd never have to scale it up that much, but but Blame it on COBOL. Right?

Life has a way of happening. You know? Right. Ain't that the truth? Yep.

Aaron, I actually have a question that kinda goes back to how you got started with Akka.NET. What was the impetus for actually developing the project? Well, that's a good story. I got good stories. Yeah.

Well, as a 24 year old, I wrote a blog post about how difficult it was to start a technology startup using .NET as your technology of choice. So this would have been around 2010. And that article made it onto the front page of Hacker News; this is like 4th of July weekend, 2010. And about 40% of the IP addresses that looked at it came from Redmond, Washington. And so someone on Microsoft's evangelism team eventually read it and forwarded it on to the people there who would eventually become my coworkers. They reached out to me because of that blog post and hired me to be a startup developer evangelist for Microsoft's BizSpark program, which was designed, among other things, to get people to start using Azure back in 2010, a full decade ago. So I got started. You know, I'd been a .NET developer for a while.

I had a startup that I tried to launch doing social media measurement. It was kind of meant to be like a Twitter analytics before Twitter analytics was a thing. Right? So I tried doing that back in 2010 just on my own. I realized I was out of my depth.

I didn't quite know enough about large scale software programming to do that. But I got a job with Microsoft shortly thereafter, and I spent 2 years working with venture backed startups in Los Angeles. And, you know, even though people tend to think of startups mostly coming from Silicon Valley, there's quite a few big ones that were being developed in LA at that time, Snapchat and Tinder being 2 that I can recall off the top of my head. But, anyway, long story short, in 2012, we're getting ready to launch Windows 8 at Microsoft. The Windows Store is supposed to be the biggest software developer opportunity of a lifetime, the app store for Windows. And I have all this great intelligence from Microsoft about how big that's gonna be and what we're gonna do with it, and the size of the economic opportunity. And I say to myself, this is a fantastic opportunity to quit Microsoft and start a venture backed software company selling services to the software developers building for that store.

And since I had a background in analytics, I wanted to go ahead and build the first real time analytics service for Windows Store developers. We launched an analytics service built on top of Amazon Web Services and ASP.NET MVC. Originally we used RavenDB, but we had some scaling problems with that, so we migrated to Apache Cassandra. And that first product did fantastic.

We had a few days where we experienced 600% growth 3 days in a row in terms of the amount of traffic on the system. But, small problem: app analytics is a highly commoditized space. And even though we offered a bunch of additional value that you couldn't get from tools like Google and everything else, no one was willing to pay for it. So we decided that we needed to come up with a way to add more value to our product. Otherwise, we weren't gonna be able to raise more money, and we were gonna lose all of our jobs.

And I was gonna lose my life savings that I put into this company. So we decided the way to go about it was introducing real time marketing automation, the ability to send users targeted push notifications based on what they did or did not do inside the application. So we could send someone, like, a discount offer to buy something, some add on in app purchase or something for their software. But in order to build a real time system like that, a real time event driven system, you cannot, and I'm saying this with a 100% certainty, you cannot build a product like that using CRUD.

It's not technically feasible, and I'm sure there's a theorem out there that can prove it. And the reason why is the latencies involved in receiving requests, turning them into database queries, sending that over the network, getting that turned into an execution plan, committed, getting the ack back and all that stuff, become insurmountable at even a pretty small amount of volume. I mean, we were doing a 100,000,000 transactions a day, and most of those would occur in a 3-hour period. So that's a lot. That's like 500 megabytes of event data per second at peak hours. That's a lot for a 3-man startup to handle. So we ultimately came to the conclusion that we needed to solve 2 problems in order to build this marketing automation product. 1st, we had to have a very rapid-fire way of consolidating all of the state around whether someone was qualified for a campaign or not. We had to have a way to make sure that state could be found in a single location inside our application, and we had to have a reliable way of knowing where that was.

The second thing we determined was that state needed to live inside the application rather than the database. In other words, it had to be in-memory state. That way, during a period where a user was live and doing things inside the app, our telemetry SDK would send information back to our services, and that data would make it into whatever our application object was, and it would go ahead and test: of the campaigns this app developer has paid for, which does this user qualify for? And you know what? It turned out that actors are the perfect solution for that type of problem, because actors live forever.

They're cheap. And I can basically figure out, even inside a distributed system where nodes are being spun up and spun down using auto scaling, where the actor is that owns the state for that one user inside my system, using a technique called consistent hashing in that case. But that allowed us to go ahead and build that product. And that was really what inspired us to port the Akka framework from Scala, which is where it was originally written, to C#. And then lo and behold, that project got adopted by a ton of other users other than my company, and it became the Akka.NET project, which has been going now for about 7 years.

So that's kind of the backstory behind where it came from. Aaron, in that example that you just gave, when you've got all those objects representing users and their state, and you want to know, like, what campaigns they're eligible for, when they're not on the site, do those get spun down and taken out of memory so you don't consume copious amounts of memory? Or what's the strategy on memory management? That's an excellent point, since actors can live forever. If you continuously spin up new actors, you're going to eventually run into a problem if you don't kill some of them, which is that you run out of memory. Right?

It's a harsh world. It's a harsh world. Dude, the human analogies around actors are terrible. Like, in our trainings, we talk about killing children all the time. You know?

And so you gotta be careful not to take any of that literally. But, yes, please, audience, not literally. The gist of it is that one of the patterns we tend to employ is called passivation, which means that when a resource inside your application has completed persisting its state, let's say. So Akka.NET allows actors to use this event-sourcing model to automatically journal their state to some append-only log, essentially. And the backing store for that could be SQL Server, could be Azure Table Storage, could be Redis.

There's a lot of different vehicles for that. But what you ultimately do is, once the actor has completed persisting all of its state, and let's say it hasn't received a new message for longer than 10 seconds, 30 seconds, maybe 10 minutes, it kinda depends on your use case, at that point you have the actor shut itself down, so that all the references that were pointing to the actor get invalidated and they can't send the actor a message anymore. If you try to send a message to a reference to an actor that's died, that message will appear in this sort of special dead letters collection, which basically means the message was undeliverable, and you'll see that show up in your logs in Akka.NET. And so you wanna make sure your actors that are state-driven are essentially always going through the process of passivating themselves. Otherwise, you'll end up running out of memory eventually.

So a good rule of thumb, there's a piece of code that's built into Akka.NET called a receive timeout, is to go ahead and always use those on actors that track the lives of entities specifically. There's other types of actors that are kind of more utility players, for instance, an actor that might process database queries or something like that. Those actors you don't need to kill, because you only need a finite number of them, and those actors are basically just command processors. They just do stuff when you tell them to.
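In Akka.NET terms, the passivation pattern Aaron describes might look roughly like the following sketch. The actor class, message type, and timeout value are illustrative assumptions; `SetReceiveTimeout`, the `ReceiveTimeout` message, and `Context.Stop` are the built-in receive-timeout mechanism he mentions (this assumes the `Akka` NuGet package is referenced):

```csharp
using System;
using Akka.Actor;

// Hypothetical entity actor that passivates itself after 10 minutes of silence.
public class UserCampaignActor : ReceiveActor
{
    public UserCampaignActor()
    {
        // If no message arrives for 10 minutes, Akka.NET sends us a ReceiveTimeout.
        Context.SetReceiveTimeout(TimeSpan.FromMinutes(10));

        Receive<ReceiveTimeout>(_ =>
        {
            // State has already been journaled to the backing store; shut down.
            // Anything sent to us after this shows up in dead letters.
            Context.Stop(Self);
        });

        Receive<string>(telemetryEvent =>
        {
            // ...update in-memory campaign state from the event...
        });
    }
}
```

A "utility player" actor, by contrast, would simply omit the receive timeout and live for the life of the system.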

If an actor doesn't have anything to do, it doesn't use any CPU. But an actor, if it's alive, will always use a bit of memory. Right? And then, related to that, I know you talked earlier about a couple principles for these actors: they have a GUID. If you do get passivated and kinda taken out and then you get brought back up, do you have the same GUID, or is the GUID unique to that instance?

Well, so the actor has a sort of a URL. That URL will be the same each time, although it kinda depends on where in the network that actor gets created, because it has the host address appended to it. Mhmm. But the actor's position in the hierarchy will be the same. Gotcha.

However, we add this little, like, random long integer, kind of a UID bit, to it at the end, and that's gonna be different each time. That's what we use for detecting different incarnations of actors in the event that an actor gets killed on one machine and recreated on another. There's a piece of technology that's built on top of Akka.Cluster, called cluster sharding, which gives you a way of sort of resurrecting actors on demand by sending a message to them, where it'll go ahead and automatically recreate that actor from scratch, and it'll route the message to it. And the cluster sharding system will also be responsible for doing things like making sure there's an even distribution of actors across the cluster. That's another thing it's responsible for.

I have a question as far as Akka actors are concerned. Right? It's a different way of managing your data, distributed and, right, fast response time based on needs. How do you factor in the database then? How does that change how you read and write to your database?

So this is a fantastic question. When I'm doing one of my trainings, you'll often hear me say that the biggest cost of using the actor model isn't learning the syntax. It's the paradigm shift that it introduces when it comes to reasoning about how you deploy and build applications. So here's the reason why, by the way: most of us use a database for doing 2 things, and we don't often think of them as distinct activities, but they really are. The first activity is persisting data, meaning that in the event the application shuts down and restarts, it has a way of referencing all that old state again.

But the way we really commonly use databases is a bit of a cheat, if you want to think about it this way. We also use databases to create the source of truth in our application. That's the second way we use them. So in the actor model, what's different is the database is no longer the source of truth. The database is only meant to be a parking spot for your state when you're not using it, or a place you can go and recover it from later.

So databases primarily get used in the actor model as a place to journal actor state when it's being modified and then to recover that state again when the actor restarts. Now, that's not to say there aren't some applications using actors where you'd wanna do traditional, sort of, SQL queries and that sort of thing. There's plenty of those. But in general, the whole idea behind the actor paradigm is to decentralize state and to be able to spread it out across a cluster. That's what allowed, like, I used that example of Ericsson with Erlang earlier.

That's what allows for horizontal scalability: the fact that the source of truth is distributed throughout these actors, and those actors themselves can be distributed across multiple machines. That's what makes an actor system horizontally scalable, because if I double the amount of hardware that is running inside the Akka.NET cluster, I now have double the number of different locations where those actors can live inside the network, right? So the source of truth gets distributed throughout there. When it comes to scaling large-scale software, you really run into kind of a handful of fundamental design problems, but the big one that gets you in trouble is centralization. When you introduce single sources of anything inside your system, you're introducing natural bottlenecks and single points of failure that can form inside your system. So the chief idea behind the actor model is to push a lot of that business logic and state out to the edges of your system, because you can always grow the number of edges, but you can't grow, for instance, let's say, the number of replicas of your database all that easily if you're using SQL Server.

Did that explain the paradigm shift well? Yeah. No. It's a similar shift to one of our other recent episodes, CQRS. Mhmm.

You kinda gotta flip things on their side, throw out some of your known conventions, I guess, especially coming from, like you said, a .NET world and how we tend to develop. That's absolutely right. And CQRS, domain-driven design, and event sourcing all tend to get used in combination pretty frequently, and the actor model is a very good fit for implementing that. Cool. So what's the best way for somebody to get started with learning the actor model and using Akka.NET and just get familiar with it?

Well, we do have a step-by-step course that's available for free. It's hosted on GitHub. The URL, if you wanna get access to it, is just learnakka.net. There's 3 different sections, and I think the total number of lessons is, like, 17 or so. It's kind of a learn-by-doing course that'll teach you some of the actor model concepts, and it'll also teach you the syntax of working with Akka.NET.

But that's probably the best way to start learning it. It's a good learn-by-doing exercise. The other good ways to go about learning it: there's a ton of Pluralsight courses on it. For Akka.NET alone, there's at least a dozen up there. That's another good way of doing it. We also have a samples repository in the Petabridge organization that has a bunch of, like, full-blown Kubernetes examples and that sort of thing, where we show you how to run Akka.NET in kind of a production-grade environment.

That'd be another good one to look at. So there's plenty of resources, because the framework's been around long enough now that there's a lot of resources that have been developed independently of the project itself, developed just by people who are fans of it and enjoy using it. Right, Aaron. When you were going over that example of not having the database be the source of truth, I thought of, like, a great example from when I was working on that massive online stock trading system. One of the things we always wished is that when a user came in and logged in somewhere, we had some idea what the session was, what the user was, just a few key facts about them, and then we could get the rest by going all the way down to the database, kinda rehydrate, you know, what positions they had open in the market and all that. But what we couldn't do is easily float between different web servers, because there wasn't this nice floating concept of state that could kinda tie the whole mesh together.

Mhmm. And so the cheat for that would be to go up into the routers and say, once a user comes from a certain IP address, he's sticky, and he can only go to that same machine. Well, that's pretty good. I mean, it's okay, and it kinda works. But as a developer, you really wish you could solve that problem instead of it being a networking issue.

Actors are perfect for doing that. And the reason why is because, if you're using Akka.Cluster, which is the right tool you would use in that type of use case, in Akka.Cluster every single node in the cluster can talk to every single other node. In other words, you have full awareness of what your network topology looks like. So a request hitting server A can go to the cluster and say, find me the actor that owns this user's identity. And you have some way of essentially mapping that to an actor identity.

That's one of the things that cluster sharding can do, or a consistent hash router might be another way of doing that inside Akka.NET. And that'll let you go ahead and make sure all those messages, regardless of which web server they arrive on, all get routed to the same actor inside a single location. Fundamentally, that whole methodology that actors use to make sure all the state can be consolidated in one location in memory, that's all reliant on a very old mathematical methodology called consistent hashing, ideally. And that's been around since, I think, the seventies as well, that sort of technique. Consistent hashing is basically a technique where any node in the cluster, as long as it knows who the other nodes in the cluster are, can compute the hash of, let's say, some entity ID, and one of the nodes in the cluster will own the hash range that hash value sits inside of.

And any of the other nodes in the cluster can independently compute that same hash value based on the same hash key, and all the messages get routed to that same place inside the network. And that's great. Like a computable URL, to some degree. Yeah. Exactly.

And if one node leaves the network, all those hash ranges get recomputed, essentially, because, let's say, the divisor in this case shrinks when you go ahead and take one node out of the cluster. So they can recompute those hash ranges, and in the case of Akka.Cluster, they'll actually go ahead and redistribute where the entities are, and they'll say, okay, we need to hand off this entity from this node to this one in order to guarantee even distribution. And while that's happening, it'll go ahead and pause message delivery to those actors until they've had a chance to move from one location to another. How does Akka handle exceptions or error states in actors, and then as a whole? Oh, this is a great question, because the way actors do it is pretty different than, you know, traditional OOP, for sure. Since actors are organized in these hierarchies, and these hierarchies are just parent-child relationships, it looks like a family tree. If I went ahead and did a printout of a running actor hierarchy, in fact, we do have a tool that'll let you do that.
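The "computable URL" idea can be sketched without any Akka machinery at all. This is a toy ring, not Akka.NET's actual implementation (which, among other things, uses virtual nodes for smoother distribution): every node hashes the same entity ID, walks the sorted ring of node hashes, and independently arrives at the same owner, and removing a node only remaps the keys that node owned.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Toy consistent-hash ring: node names are hashed onto a ring, and a key
// belongs to the first node whose hash is >= the key's hash (wrapping around).
uint Hash32(string s)
{
    using var md5 = MD5.Create();
    return BitConverter.ToUInt32(md5.ComputeHash(Encoding.UTF8.GetBytes(s)), 0);
}

string NodeFor(string key, IReadOnlyList<string> nodes)
{
    var ring = nodes.Select(n => (Hash: Hash32(n), Node: n))
                    .OrderBy(t => t.Hash)
                    .ToList();
    uint h = Hash32(key);
    foreach (var (hash, node) in ring)
        if (hash >= h) return node;
    return ring[0].Node; // wrapped past the end of the ring
}

// Any node holding the same member list computes the same answer.
var cluster = new List<string> { "node-a", "node-b", "node-c" };
var owner = NodeFor("user-42", cluster);
Console.WriteLine("user-42 lives on " + owner);
```

The key property is that membership, not a central registry, determines placement: every node runs the same deterministic computation over the same member list.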

But if I did a printout, you'd go ahead and see that when you visualize it. The way actors handle failure is through a model called, as the name implies, parental supervision. So if a child actor crashes and throws an unhandled exception, what'll happen is that actor will be suspended, meaning that a little bit will get flipped on its mailbox saying, you are now in the off position. It's in a timeout. It's on timeout in the corner. Yep. The actor gets put on timeout, and a message gets sent to the actor's parent saying, this child failed, and here's the exception that they threw. And the parent can decide, based on which child failed and what the exception was, how it wants to handle that exception.

By default, Akka.NET uses a strategy called "just let it crash," and this all goes all the way back to Erlang, by the way, the strategy I'm talking about. When an actor throws an unhandled exception, the parent will go ahead and just reboot that actor in place. So what will happen is, all the actor references and the actor's identity, that stuff's all still valid. None of that changes, and the actor doesn't lose any of the messages inside its mailbox, the queue that it processes from. But we're gonna take the current instance of that actor.

That's where all the actor's internal state and its properties and fields are. The actor is the class that you code yourself as a .NET developer. We're gonna go ahead and take the current instance of the actor. We're going to kill it. We're gonna go ahead and recreate a new one, hook it back up to its actor reference and its mailbox, and have it resume processing messages from there.

And that act of having an actor restart is transparent to everybody else. No one else needs to know the actor crashed and restarted. That's kind of the default way actors get supervised. And the reason why we do that, why would you wanna restart a piece of code? Well, the reason is that if you understood why the actor was going to throw an exception, you'd probably handle it in a try-catch block.

Right? If you get a SQL timeout exception, I think you can figure out why that actor failed. But if you get an invalid operation exception, who knows what that means? That means there might be something wrong with your actor's state. Like, this might actually be a real programming error, not a transient runtime error that you expect to happen from time to time.

When that happens, do you wanna let that actor that threw the invalid operation exception process the next message without restarting? Probably not. It's much safer and more predictable to reboot that actor back into whatever its last known safe state was, and we do that using what's called the props. Props is basically a formula you use for defining an actor's type, its constructor, and the arguments that are gonna be fed into its constructor when it starts. We'll go ahead and recreate the actor from its props.

The actor will go ahead and run one of its lifecycle methods. This method runs before it begins receiving any messages. And if the actor needs to retrieve state from the database, whatever, it can do it in that method. And then once the actor starts processing messages, it picks up from where it left off again. So the idea of rebooting that piece of code back into its last known safe state is seen as a much more predictable way of managing faults inside an application than what we normally do as OOP developers, which I call the dig-out method, where we have a bunch of try-catch blocks, and we try to claw our way back to finding a safe mode to run in.

Now, when an error gets thrown, actors have a lot more flexibility than just restarting a child. You can also, a, kill that child permanently. Let's say an actor failed so badly that restarting wouldn't even fix it; you might just wanna go ahead and choose to terminate it permanently, which means the actor is gonna dump all of the messages it didn't process into the dead letters queue, that sort of thing. You could do that. Or let's say your actor threw an exception, and the exception meant that that entire area of the domain needed to be rebooted. So for instance, we've used this IoT example.

Let's say we get a notification back that we've lost our feed for being able to talk to a camera. Let's say we're using, oh, I don't know, an analog socket or something like that. We can go ahead and use what's called an escalate directive to say, I wanna propagate this. I wanna treat this exception like my failure, and I wanna ask the grandparent to restart me, so I can trigger a rolling restart of that entire part of the application if I want to. We use that inside Akka.Remote for closing sockets, for instance.

When the read side of the socket blows up, it goes and reports that exception to the write side of the socket, and then the write side of the socket escalates that to its parent, which ends up restarting both parts of the actor hierarchy. That way we can go ahead and make sure that we don't accidentally leak socket connections inside our clusters and that sort of thing. I think rebooting the actor is known as the control-alt-delete pattern. They have a much more boring name for it in Erlang, the error kernel pattern, but control-alt-delete's better. I do like the idea of just letting it crash and burn.
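As a sketch of what that parent-side decision looks like in Akka.NET: a parent overrides `SupervisorStrategy` and maps each exception type to a directive (`Resume`, `Restart`, `Stop`, or `Escalate`). The actor name and the choice of exception types here are illustrative assumptions, and this assumes the `Akka` NuGet package:

```csharp
using System;
using Akka.Actor;

// Hypothetical parent actor choosing a supervision directive per exception type.
public class DeviceSupervisor : ReceiveActor
{
    // Children created via Context.ActorOf inside this actor are governed
    // by this strategy when they throw unhandled exceptions.
    protected override SupervisorStrategy SupervisorStrategy()
    {
        // Up to 10 restarts within a 30-second window, then stop the child.
        return new OneForOneStrategy(10, 30000, ex => ex switch
        {
            TimeoutException => Directive.Resume,            // transient: keep state, carry on
            InvalidOperationException => Directive.Restart,  // possibly corrupt state: reboot in place
            _ => Directive.Escalate                          // unknown: let the grandparent decide
        });
    }
}
```

`OneForOneStrategy` applies the decision only to the failed child; its sibling `AllForOneStrategy` applies it to all children of the parent at once.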

Right? You gotta let your kids fail, at least in programming. At least in programming. Fair enough. I've got a question about that.

So one of the things that I find that I run into, and it's a little bit difficult: you get your whatever code it is, it wouldn't even have to be actor-based code, but you get it done, you get it ready, you've passed a bunch of your tests, and then you get it out there, and you run into what I call a data-driven bug. So a new kind of data hits one of your objects or your actors, and it's just gonna crash it. In queuing theory this would be called, like, the poison message problem. Mhmm.

So if you get that to one of your actors, and every time it's restarted and wakes up it gets that same message and dies, what ends up happening then? When an actor crashes and restarts, the message that it was currently processing, the one that caused it to throw that exception, doesn't automatically get queued back into the mailbox. What happens is there's a lifecycle method for the actor, called PreRestart, that gets called on the incarnation of the actor that's about to be destroyed and recreated.

Basically, you feed the exception and the message that threw it into this method, and the actor can decide what to do with it. The actor can send that message back to itself so it can reprocess it. It can ignore it. It can just log it and then not do anything with it, or it could try sending that message to maybe its parent if it wanted to. So what's the licensing like on Akka.NET?
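A sketch of that hook, assuming the `Akka` NuGet package: `PreRestart(reason, message)` receives the exception and the message being processed when the actor crashed, and the old incarnation decides the message's fate before it dies. The actor class, message handling, and the retry-vs-drop policy shown are illustrative assumptions:

```csharp
using System;
using Akka.Actor;
using Akka.Event;

// Hypothetical actor that treats one exception type as a poison message.
public class ParserActor : ReceiveActor
{
    private readonly ILoggingAdapter _log = Context.GetLogger();

    public ParserActor()
    {
        Receive<string>(raw => { /* ...parse and update state... */ });
    }

    // Called on the old incarnation just before it is destroyed and recreated.
    protected override void PreRestart(Exception reason, object message)
    {
        if (reason is FormatException)
        {
            // Poison message: log it and drop it rather than crash-looping.
            _log.Warning("Dropping unparseable message: {0}", message);
        }
        else if (message != null)
        {
            // Transient failure: re-enqueue it for the new incarnation to retry.
            Self.Tell(message);
        }
        base.PreRestart(reason, message);
    }
}
```

In production you would typically also bound the retries (for example by wrapping the retried message with an attempt counter) so a misdiagnosed poison message cannot loop forever.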

Well, Akka.NET is a .NET Foundation open source project, and it's licensed under Apache 2.0. So anyone can just pick it up and start using it anytime they want, really. The way we make money on Akka.NET, I guess we have a couple different business models that Petabridge has. We sell trainings, so online, or, pre-coronavirus, we did a lot of on-site trainings at companies' offices.

Hopefully we'll get back to doing that again in the near future. But that's one thing that we do. We do a lot of consulting, so architecture reviews, that type of thing. And then we have some developer support plans that we sell for questions that come up on an as-needed basis from some of our developers. So I was handling a troubleshooting call earlier this morning with a company that had some problems when their firewall went berserk, spiked the CPU on all their machines, and dropped about 80% of their packets for 2 hours.

Yeah. That was fun. So we help with things like that. And then we also sell some application performance monitoring software on top of Akka.NET, and that's called Phobos. And that allows you to do things like trace, sort of end to end, a message entering the system from, let's say, ASP.NET, a bunch of messages being sent around the cluster, and eventually a response being sent back as an HTTP response, or maybe it might even result in a message being sent, you know, to Azure Service Bus or something like that.

Phobos will help trace all of that. And then Phobos will also do metrics, so keeping track of important runtime statistics for how your application's performing. So we sell that as kind of a proprietary add-on on top of Akka.NET. Nice. So, yeah, it really makes it so that there is no excuse for somebody that's interested in this not to get started and try it out.

No. Absolutely not. It's, yep, Akka.NET, like I said, it's part of the .NET Foundation. They're the ones who hold the copyright on the source.

So that means that even if our business were to suddenly disappear, and we've been around for 5 and a half years, so I don't think that's gonna happen, but in the event that there was an issue, there's always a foundation there that backs up the source, and everything's licensed under Apache 2.0, which is very commercially friendly. So anyone can pick it up and start building a business application with it anytime they want. You know, speaking of getting started with it, Aaron, as you're going through this, if I was gonna paint a picture for someone who, let's say they're fairly strong with object-oriented design, it seems like at first blush a lot of the actor thing is gonna seem pretty similar. Maybe some of the differences are gonna be the idea that things live in a hierarchy.

That'd be a bit new. Like, your example where one object owns all the cameras and then has sub-objects, that's not necessarily something you come up with in just straight OO design. And then probably the other addition is just awareness of concurrency, like, whether you're gonna have something run on the same process or the same machine or a different machine. It seems like there's gonna be a little bit of the concurrency design thrown in that maybe would require somebody that at least understood the basics of parallelism and multi-machine issues. You would at the very least want to try to understand parallelism, because actors are by design intended to be run in parallel with each other. So that would be one thing. You don't necessarily need to get into a lot of the crazy computer-science-level complexity on that to really grasp it.

But the idea that this actor will send a message to another actor, and that other actor might process it, you know, several milliseconds later potentially, the messaging is all asynchronous by default in Akka.NET. So that'd be one thing. But otherwise, yes, actors still are very much within the realm of object-oriented programming. But I would argue there is one very important thing they do that is directly from functional programming.

And as fate would have it, this is a brand new language feature they added in C# 7: actors use pattern matching to do all of their message processing. So when an actor receives a message, that message is going to be initially, you know, untyped. It's an object, and the actor is gonna go ahead and use either, you can use a receive actor, which is this sort of syntax that Akka.NET uses that's strongly typed using generics, basically having a bunch of generic typed handler methods, or you can use what's called an untyped actor, which will just use a switch statement to determine what this message's type is and how to handle it. So that's one sort of functional programming paradigm that you'll get a lot of exposure to in Akka.NET, this notion of pattern matching on message types. And frankly, since that's kind of becoming part of the lingua franca of .NET, now that C# has really begun using that heavily and Microsoft's using that feature in a lot of their reference samples and everything else, that's something that probably should be, or at least will be, a lot more familiar to Akka.NET users than it has been historically.
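The two styles Aaron contrasts, side by side. The actor classes and message types are made up for illustration; `ReceiveActor`'s generic `Receive<T>` handlers and `UntypedActor`'s single `OnReceive(object)` entry point are the real Akka.NET shapes (this assumes the `Akka` NuGet package):

```csharp
using Akka.Actor;

public sealed record Deposit(decimal Amount);   // illustrative message types
public sealed record Withdraw(decimal Amount);

// Style 1: strongly typed handlers registered per message type.
public class AccountReceiveActor : ReceiveActor
{
    private decimal _balance;

    public AccountReceiveActor()
    {
        Receive<Deposit>(m => _balance += m.Amount);
        Receive<Withdraw>(m => _balance -= m.Amount);
        ReceiveAny(m => Unhandled(m)); // anything else goes to unhandled
    }
}

// Style 2: one untyped entry point plus C# pattern matching.
public class AccountUntypedActor : UntypedActor
{
    private decimal _balance;

    protected override void OnReceive(object message)
    {
        switch (message)
        {
            case Deposit d:  _balance += d.Amount; break;
            case Withdraw w: _balance -= w.Amount; break;
            default:         Unhandled(message);   break;
        }
    }
}
```

Both actors behave identically from the outside; the choice is mostly about whether you prefer registered handlers or an explicit pattern-match over the incoming `object`.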

Great. Great. You know, another thing I was really interested in: when you're talking about sort of the history of how you fell into this, and talking about the company that was doing the analytics and processing and just found that you couldn't meet the concurrency. I mean, I've been there on different projects where the concurrency is literally the roadblock. When you were back at that point, kinda looking at that, how did you make the decision to choose whether or not to just use something existing like Erlang?

Oh, boy. That is a very fun question. So our development team was small at that company, which was me and really 2 other full-time engineers at the time. And I was also the CEO responsible for fundraising, working with customers, payroll, you name it; wore a lot of hats. But I was also the lead architect for this product. We had had a lot of experience developing our tooling across different languages, our SDK for gathering analytics and metrics, in other words.

So, man, this is gonna be gruesome, but I'll just talk through it. I had to build support for Windows desktop applications. We basically moved some of our marketing automation from, like, just the Windows Store to supporting old-school Windows desktop, like, Win32 applications. One of our sort of pilot customers was a Java shop. And so we had to go ahead and build a Java equivalent of our telemetry library.

We had to go ahead and build a wrapper in Java, and then we also had a bunch of customers shipping C++ applications as well. So the way we decided to try to support them was this: our core instrumentation library that we'd been using in the Windows Store was all written in C#. So I had this sandwich going. I had a C# library at the very bottom that did all the real hard work. I wrote a C++/CLI wrapper.

For those who don't know what that is, that's kind of C++ syntax that allows you to still call into the CLR directly. So it's kind of a bridge between managed and unmanaged code. So I had a C++/CLI wrapper talking to C#, and then I had a native C, not C++, a C wrapper that talked to the C++/CLI wrapper, which allowed our native customers to use it. Well, in order to expose all of that functionality to that Java customer, I wrote what's called a JNI library. JNI is the native C interface in Java.

So it's kinda like the equivalent of P/Invoke for Java. And then I wrote a Java wrapper on top of that. So the Java library called a JNI library, which called a C library, which called a C++/CLI library, which called a C# library. And we managed to package all of that into a JAR, and we were able to get it to work on hundreds of thousands of machines that our customers deployed onto. Wow.

That's awesome. Honestly, that's a very underrated feat of software engineering there, but I'll save it for another day. The point being that we were comfortable shipping production-quality work on other platforms. The question I think you're really asking is, why go through all the trouble of porting something like Akka to .NET when I could have just used Erlang or Scala? Right?

The answer was, and this was a sort of an unfortunate CTO choice that I had to make, and one that I hope I never have to make again, I had to choose between the lesser of 2 evils. Do I take a runtime whose properties I don't fully understand, because I've never used it in production, and immediately throw it into a high-traffic system that's performing under enormous loads, without hiring any additional resources to do it, because I couldn't afford to? Or do I port a framework whose complexity I don't fully understand, but put it onto a runtime that I do understand? And I decided it was ultimately better to understand one thing than to not understand 2, basically, even though it meant all this extra expense of porting this framework from Scala to C#. Ultimately, the uncertainty of how to manage all this technology at large scale under such a short time frame was a risk I was not willing to take. So I was willing to bet that our ability to port that framework and get it to production-grade quality was ultimately going to be a faster and less risky exercise than trying to rewrite all of our back-end systems that would need to do the marketing automation to use Java or Scala or something like that.

That's great. That makes complete sense. I've had the opportunity to work in some larger companies that had a lot of resources, and that is actually a very common decision if they've got enough money to have an architecture team: to choose to write some of those pieces themselves so they have full control and understanding of them. So I totally get that. It's one of those things where, if you were working on a stable piece of legacy software that's not being put under a ton of strain by, like, external customer or market conditions, that's not necessarily a decision making pattern you're all that familiar with as a developer.

But once you get into more of the architect and, like, leadership role and you have to start thinking about systematic risk inside the system, that's where that type of ugliness rears its head a lot. But ultimately, hey, everyone in the .NET ecosystem has benefited from it. You know, we've had over 50,000 people go through the Akka.NET bootcamp I mentioned earlier. So that's a lot of people using that tool based on that decision we made, you know, years ago. And this company, Petabridge, kind of got started by accident.

So that previous startup, it was called MarkedUp Analytics. MarkedUp died around Thanksgiving 2014 after two and a half years, roughly. We just ran out of money, is the long story short of that. I took about six weeks off, and then I founded Petabridge, the company that I'm still running today, in January 2015, because I was getting hit up by early Akka.NET users on LinkedIn for consulting and help getting Akka.NET up and running inside their systems. We were based in Los Angeles at the time.

One of our really early customers was based in downtown LA, and they were responsible for building software for managing transit systems for large cities. Now, if you've ever installed, like, the LA Metro app and got a push notification when your bus was running late along one of the Metro lines, it was their software doing that. And they had rewritten their really detailed analytics and notifications back end to use Akka.NET to go and process all this data sent by the cell modems that are plugged into those fleets of buses for keeping track of their real time position, how late they were along the routes, and all that sort of stuff. And so they were really one of our very first, like, pilot use cases for making a real business out of this. But that's ultimately what inspired Petabridge.

And, you know, since then, our user base has grown a lot. The actual technology itself has matured a bunch too. And, yeah, we're working on trying to add more and more tools that are designed to support our users. We just released, for instance, Akka.NET 1.4 back in February, and that added the ability to do, like, in memory replication inside a cluster in an eventually consistent way using a special type of data structure called a CRDT, which is a conflict-free replicated data type. And then we also added a module called Akka.Cluster.Metrics where you can get CPU, memory, and also custom performance metrics about all of your nodes inside a cluster.
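Akka.NET's actual CRDT support lives in its distributed data module; purely as an illustration of the concept Aaron describes, here is a minimal sketch (not Akka.NET code) of a grow-only counter, one of the simplest CRDTs, written in Python. The class and method names here are invented for the example.

```python
# Minimal G-Counter (grow-only counter) CRDT sketch.
# Each replica tracks a per-node count; merging takes the per-node
# maximum, which is commutative, associative, and idempotent, so
# replicas converge without any coordination.
class GCounter:
    def __init__(self):
        self.counts = {}  # node_id -> count contributed by that node

    def increment(self, node_id, amount=1):
        self.counts[node_id] = self.counts.get(node_id, 0) + amount

    def value(self):
        # The counter's value is the sum of every node's contribution.
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max over the union of node ids.
        merged = GCounter()
        for node in self.counts.keys() | other.counts.keys():
            merged.counts[node] = max(self.counts.get(node, 0),
                                      other.counts.get(node, 0))
        return merged

# Two replicas increment independently, then exchange state:
a, b = GCounter(), GCounter()
a.increment("node-a", 3)
b.increment("node-b", 2)
converged = a.merge(b)
print(converged.value())  # -> 5
```

Because merge is idempotent, replaying the same gossip message (merging `b` twice) leaves the value unchanged, which is what makes this safe in an eventually consistent cluster.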

And so we could create, like, routers in Akka.NET that could say, send these messages to the nodes with the least busy CPU, and that sort of thing. Nice. Nice. Well, thanks, Aaron. I think we're just about out of time.
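Inside Akka.NET, that routing decision is handled by the metrics-aware routers themselves; as a standalone illustration of the idea (not the actual Akka.Cluster.Metrics API, and with invented function and node names), picking the least busy node from reported metrics amounts to:

```python
# Hypothetical sketch of metrics-aware routing: given each node's
# reported CPU utilization, route the message to the least busy node.
def route_to_least_busy(message, node_metrics):
    """node_metrics: dict of node_id -> CPU utilization (0.0 to 1.0)."""
    # min() over the dict keys, ordered by each node's reported CPU load.
    target = min(node_metrics, key=node_metrics.get)
    return target, message

metrics = {"node-1": 0.85, "node-2": 0.20, "node-3": 0.55}
target, _ = route_to_least_busy({"work": "job-42"}, metrics)
print(target)  # -> node-2
```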

We're gonna move on to picks. And if you're not familiar with picks, it's just anything that you're interested in nowadays. It doesn't have to be technology related. I often pick TV shows or movies or books or games, anything like that. So we'll go first while you think about what you wanna have for your pick.

And why don't you start us off, Caleb? What's your pick? Yeah. So I've picked the Switch and Switch games for several of our podcasts. My family is now playing the Switch more than ever because roughly two weeks ago, I got Animal Crossing: New Horizons for my wife.

And at first, she wasn't spending a lot of time on it, but then my son and I started playing, and now she's playing it every day. So Animal Crossing. It's a good distraction from, you know, what we're dealing with day to day. Alright. Nice.

Yeah. You do like that switch. Yeah. Yeah. Alright, Joel.

What are you gonna have for your pick? Yeah. I've been learning a new instrument this year. So I picked up bass guitar. I've been playing guitar for many years in garage bands.

And my pick this week is the Fender PJ Bass. Fender took their old Precision Bass pickup and put it towards the neck, and then towards the bridge they have the Jazz pickup. And so you can actually switch between them and get two very different tones, or hit that mid position and mix them.

So you get that classic, like, P Bass rumble, or more of the Jazz Bass's sharper tone. So, gonna be the next John Entwistle? Sorry, what was that? Are you gonna be the next John Entwistle?

Well, I mean... You look a little like him. I guess that's a start. Right? For anybody that doesn't know, John Entwistle was the bass guitarist for The Who. So, alright.

So, my pick this week is another Netflix show. It's one I just started watching, and I found it fairly interesting. It's called Locke & Key, L-O-C-K-E and Key. It's about a family that moves into an old family house, and they start finding these magical keys around the house that do various different things.

As you open a door or some object with a key, it does various magical types of things. So if you like fantasy and things like that, do check out Locke & Key on Netflix. And it's actually based on a comic book series. Nice. Nice.

If you like a TV show... Yeah. They've got season 1 out, and they are working on a season 2. Cool. At least enough people liked it to make another season. Alright.

Aaron, do you have anything that you wanna let our listeners know about, that interests you nowadays? Oh, absolutely. I'm just gonna drop a link in the chat here. So I'm gonna talk about whiskey. Whiskey.

So I'm a Scotch man primarily, but recently I've been introduced to some really superb bourbons, actually. I happened to stumble into our local liquor store here in Houston, and they were carrying Eagle Rare, which is a rare bourbon from the Buffalo Trace Distillery. It's 10 years old and costs about $29 a bottle, and it's one of the best drinks I've ever had. Absolutely.

Okay. I bought a bottle for my father-in-law. It's tough to find them. But when you do find one of these, they're not that expensive, and it's a really good buy. And if you're interested in bourbon or whiskey at all, this is one of those things.

You'll just be delighted if you stumble across it at your store. And there are a couple of places you can buy it online too. Definitely recommend you check it out. Nice. Nice.

So if people wanna get in touch with you and have questions, is it best to go through the akka.net website or Twitter, or... The akka.net website will drop you into our Gitter chat. There are about 1,600 Akka.NET users in there. We'll be happy to answer your questions. So if you're interested in talking about Akka.NET, I definitely recommend doing that. You can also drop us a line on our Twitter handle, which is just Akka, D-O-T-N-E-T, on Twitter, or you can message me.

I'm Aaronontheweb on Twitter. Great. Great. And if our listeners wanna reach out to the show or get in touch with me, they can find me on Twitter at dotnetsuperhero.

Follow me, learn about .NET, and also make any suggestions that you have for the show. That'd be great. So thanks, Aaron, for your time today. Yeah. Thank you.

Thank you. Thanks for your time, folks. And I hope you guys enjoyed the show. Yeah. We did.

I definitely did. It's definitely something that's new to me, but I can already see definite uses where this could be applied for some of the projects that I'm working on. So, great. Thank you. Excellent.

Alright. And that's it for this episode. Please join us on the next episode of Adventures in .NET. Bye, y'all. Bye.