Demystifying AI Innovations - ML 165


Special Guests: Abi Aryan

Show Notes

Today, we have a special guest Abi Aryan, an accomplished founder of Abide AI and a seasoned expert in machine learning. Joining us are your hosts, Michael Berk and Ben Wilson, who bring a wealth of experience from Databricks.
In this episode, Ben shares his journey navigating the intricacies of deep learning and the surprising effectiveness of simpler solutions over complex algorithms. Abi lends her insights to the balancing act between innovation and practicality in tech adoption, influenced by career stability and venture capital demands. They also explore Abi's passion for recommender systems and audio speech synthesis, and the potential these fields hold for e-commerce and inclusivity.
Abi also gives us a glimpse into her research methodology, her approach to autonomous agents, and the challenges she faced with bias and imposter syndrome. As they dissect consulting strategies, experiment design, and the art of fostering a collaborative environment, this episode is packed with valuable lessons for any tech enthusiast.
So, get ready to tune in, take notes, and be inspired by the fascinating stories and insights from our expert guest and hosts.

Transcript

Michael Berk [00:00:05]:
Welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Berk, and I do data engineering and machine learning at Databricks. I'm joined by my cohost.

Ben Wilson [00:00:14]:
Ben Wilson. I evaluate agent frameworks at Databricks.

Michael Berk [00:00:21]:
Wait, can you elaborate, Ben?

Ben Wilson [00:00:24]:
That's just what I'm doing this week. Okay. Cool. Thank you. Part of the work this week is looking at what people are using and what are potential cool things for MLflow to integrate with.

Michael Berk [00:00:35]:
Awesome. Cool. Well, today, we are speaking with Abi. She studied mathematics in university, both at the undergraduate and at the master's level, and she worked in a variety of ML and software roles after graduating. Currently, she's writing a book called LLMOps: Managing Large Language Models in Production, and she's also working at Abide AI, an ML services org that she founded. So, Abi, I was gonna tee you up with a question, but I'm completely gonna go off the rails. You worked at UCLA as a researcher under the legendary Judea Pearl, and you were trying to do emotion recognition in the speech and video space. And if listeners are not familiar, Judea Pearl is a legend.

Michael Berk [00:01:17]:
He won the Turing Award, and he worked in probabilistic and causal, quote unquote, ML, what has become ML over the years. So why did you choose this application to research?

Abi Aryan, [00:01:29]:
So I didn't choose it. I was assigned to it. Basically, what he told me to work on was how you define what autonomous agents are. So that was the thesis. And I spent a couple of months on it, which is I looked at things from, like, a causality angle. There are 2 kinds of causality. There's one flavor of causality, which is what Judea studies. And then there's Joseph Halpern. He studied causality as well. They both have different flavors of causality.

Abi Aryan, [00:02:04]:
So I looked at all of their research. I looked at the research in philosophy. I looked at more research in economics than in computer science. Eventually, I realized, well, I don't think I can, sorry, define autonomous agents the way I'm going about this. So I said, well, if I can't define it, let me try to break it down into 3 things. For any agent to be autonomous, it has to be able to do a few things. First is it needs to be able to pick its own algorithm, which is it needs to know, based on the task itself, what kind of machine learning model it needs to use. So that, to me, falls in the category of AutoML.

Abi Aryan, [00:02:43]:
The second thing it needs to do is it needs to be able to recognize emotions, to be able to switch context within a certain conversation as well. So that led me to work in emotion recognition. And the third part was multi agent systems, which is how do different agents interact with each other within, like, an environment. So I researched independently in all 3 areas, which is I've worked in AutoML, I've worked in emotion recognition, and in multi agent systems for those 3 years.

Michael Berk [00:03:15]:
Alright. Fifty questions coming right now. First: in the algorithm selection process, does that include the loss function? Yes. Woah. Okay. That's blowing my mind. Because one of the things that I always thought was super fascinating about agents as a concept is they have their own prerogative of what is success. It's not just a fixed definition.

Michael Berk [00:03:42]:
And so, yeah, it's a very interesting paradigm. Okay. I'll pause the 50 questions, actually. What was it like working with Judea Pearl?

Abi Aryan, [00:03:55]:
You know, there's one usual thing which is said: most of the professors in academia, they're sink or swim kind of persons. Either they'll throw you at the program and let you sink, or they will teach you how to swim. He is one of those persons who just gives you the confidence, maybe because he's a little bit senior as well. And I was asking him this question, which is, I specifically went to him because I wanted to win a Turing Award. I was like, well, you need to teach me how to win a Turing Award. And he was like, Abi, you're working on the problem that I've worked on for 20 years. Every single day, I've thought about that problem itself.

Abi Aryan, [00:04:31]:
And then you're making incremental progress in your research, and then one day you realize, okay, I think I have a solid framework which sort of works. And to me, it was more like, nobody can teach you how to make incremental progress at something. You just have to have your own intuition on, a, how to identify the right problems. B, just because there are a lot of people saying reinforcement learning is the right way to be able to do what I'm after, that doesn't mean that it's the right algorithm or the right learning system to be able to do what I'm after. All of that comes from experience. So for me, he was more like one of those professors who just threw me at the program, and then I would meet with him every 2 or 3 weeks. And I explained to him everything that I'd read, my thoughts about problems, and he just sat there pretty much like a therapist listening to me.

Abi Aryan, [00:05:25]:
He took notes, didn't say anything, and that was pretty much the entire collaboration, which is he was more like an emotional support system that was there, more than like, okay, do this, work on this, I think this is right or this is wrong. He never said this is wrong, more so because I don't think he had the expertise to, and he's the one person who doesn't disqualify an idea too quickly, although that's sort of his personality on social media. That was around the time when The Book of Why was announced. A lot of people were sort of vocal about how deep learning models do not have a model of causality.

Abi Aryan, [00:06:04]:
So they're not supposed to be good models of intelligence. But I think that has been, to some extent, disproven by now, which is they can become really good models of intelligence. Yes, they have their own limitations. So his social media personality is, I think, a little bit more out loud, discussing things. But his in person personality is very open to all ideas you bring to him. If you say, I think there's a problem with your theory, he is open to listening to everything you have to say on that.

Abi Aryan, [00:06:36]:
So it was more like humility. That was the one thing I learned from him. Humility and persistence.

Ben Wilson [00:06:43]:
I have a question about your process with doing greenfield research. So you're talking about, hey, I need to come up with, you know, research into AutoML solutions for, you know, minimizing loss and determining where to route a particular, you know, output from one thing to another in a, you know, multimodal system of agents. When you're doing that research and you come up with your ideas, you mentioned humility in that process. Knowing that we all have bias about where we're going and the almost infinite possibilities that exist when doing independent original research, what's your process of internal checks and balances, of sort of self doubting or making sure that you need to provide evidence to yourself to convince you that you're moving down the right path to make those incremental improvements?

Abi Aryan, [00:07:46]:
So I would say there were 2 things. I was pretty much insane for the first one year, which is I cried every single morning. I didn't get out of bed until 3 PM because I had this constant imposter syndrome. I was like, I just want somebody to hand hold me right now, and that wasn't really happening. But eventually, I think, after a point, once you realize you're pretty much by yourself, there's nobody going to come rescue you or tell you what's right and wrong, then you start to read a lot of work. And that was my process, which is, when it comes to AutoML research, I read, I think, about 700 research papers within the period of 3 months. And I didn't say, okay, these are the papers which are written by so-and-so labs, so this should be good.

Abi Aryan, [00:08:37]:
I just read very, very comprehensively. And I think there were 2 people who were my guides when it comes to selecting good research as well. So, even before I started working with Judea, for about 2 years before that, I had a friend, Miles Brundage, who is now a policy manager at OpenAI. He was doing his PhD at Oxford. So he would teach me how to select the right papers and how to read a lot of papers very quickly. And the other person is David, so I would probably consider them more like mentors when it comes to that. So if I got lost with a particular research direction, most of the time I would go for advice to somebody who has already been through the process or has already done all of that sort of thing before. All of them were pretty much as confused as I was about where things were going to lead and what's the right research direction.

Abi Aryan, [00:09:38]:
So after quite a few of those conversations, you realize nobody knows anything. People just have their own bias. Like you said, people have their own bias of what would be the right approach to solving a particular problem, and they just go ahead with that approach. The only thing I did differently was, instead of sticking with one, which is instead of saying, let's work in Bayesian optimization, or let's work in reinforcement learning, or let's work with biological algorithms, which is the genetic algorithms, I was more like, let me think about what are the problems and challenges with each of these approaches. So instead of saying, this is better than that, I started looking at them from, like, that external angle, which is, I may not be able to do great research in each of these things, and maybe I would end up picking something bad, but what's the common constant problem and challenge to each of these approaches? So that was the work which I did, which is working on the problems and challenges in AutoML

Abi Aryan, [00:10:47]:
across all the techniques or the learning techniques that there are.

Michael Berk [00:10:50]:
Woah. So okay. So the process was a massive lit review of 700 papers. Then instead of reading what the papers were good at, almost, you looked at what the papers were all struggling with, and then found the core tenets, and then decided to research that.

Abi Aryan, [00:11:06]:
Yes. And I also participated on the practice side, which is, most of the learning you can do is very much theoretical. But there were AutoML competitions going on at NeurIPS back then. So I started participating in those competitions yearly. I didn't win a prize because most of the people who were at the top of the leaderboard were, like, people who were doing very excessive hyperparameter optimization, at least at that point. I didn't have those kinds of resources because we were more like a theoretical lab. We didn't have the computational support, especially of a data lab. So for me, it was more like, let's just practice.

Abi Aryan, [00:11:46]:
Let's get the foundations right. For people who were winning, let me see what their approaches are as well and go partly in that direction, instead of, you know, picking one and trying to put all my resources into optimizing for, like, a 0.2% gain at something else.

Ben Wilson [00:12:05]:
But the foundation that you built in your mental model of how to approach something like that, I think it's pretty unique. I haven't talked to a lot of people that use that, like, that rigor associated with the scientific method of evaluating the results of other people's work and also of their own. And it probably gives you this really comprehensive understanding of how to provide recommendations to others who are trying to employ these things. How do you have that conversation when you were doing consulting, and you're currently doing consulting? When you talk to somebody, like a customer of yours, who is dead set on, this is what we have to use because this won some prize, or I read this one paper and this is, like, the hot thing right now, do you actually sort of steer people to that success that you found in doing your research?

Abi Aryan, [00:13:02]:
So most of the time, it depends on what kind of organization it is. I've done consulting for small startups where I'm directly working with the founder. They come from, like, an ML background themselves. And I've worked with very big companies, which is companies that have about 800 employees and such, which is the trading firms. So one of the things which I do, especially with founders, is I try not to say, you know, this isn't going to work. I try to ask them, you know, what their goals are, which is, when it comes to any model, the first thing I do is to try to break the problem into SLAs, SLOs, and KPIs. Once we know, okay, these are the SLOs that we want to define, which could be on every single parameter itself. So for large language models as well, you define your particular SLOs.

Abi Aryan, [00:13:58]:
I would say, you know, the SLOs would be the latency. It could be the throughput as well. It could be the error rate. It could be the robustness. It could be consistency. It could be response time. It could be scalability, the capacity planning, or the data freshness, or the customer satisfaction compliance. So what I do is, for each of these things, I try to ask them what their SLOs are. So, for example, when it comes to data freshness, asking, you know, how often does your dashboard data get refreshed, or how often should it be? And then I try to sort of reverse engineer the problem, which is, if those are the SLOs and KPIs you have, is that the best model now? Are you still optimistic that this model can sort of achieve what you're looking for? And that's more sort of the way I think about system design as well as pipeline design, because it's not just the model itself.

Abi Aryan, [00:14:55]:
It's a very intricate solution that needs to be put in place. I don't focus that much on whether x y z model is good or bad. I think you can optimize every single model if you really understand it well.
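Editor's note: as a rough illustration of the SLO-driven framing Abi describes, here is a minimal sketch in Python. The metric names and thresholds are illustrative assumptions, not values from the episode.

```python
from dataclasses import dataclass

@dataclass
class SLO:
    """One service-level objective: a metric, a target, and the direction that counts as good."""
    metric: str
    target: float
    lower_is_better: bool = True

# Illustrative SLOs for an LLM-backed service (values are invented for the example).
slos = [
    SLO("p95_latency_seconds", 2.0),
    SLO("error_rate", 0.01),
    SLO("throughput_requests_per_second", 50.0, lower_is_better=False),
    SLO("data_freshness_hours", 24.0),
]

def check_slos(observed: dict, slo_list: list) -> list:
    """Return (metric, observed, target) for every SLO the measurements violate."""
    violations = []
    for slo in slo_list:
        value = observed.get(slo.metric)
        if value is None:
            continue  # no measurement collected yet
        ok = value <= slo.target if slo.lower_is_better else value >= slo.target
        if not ok:
            violations.append((slo.metric, value, slo.target))
    return violations

# Reverse engineering the question "can the current model meet the targets?"
observed = {"p95_latency_seconds": 3.4, "error_rate": 0.008,
            "throughput_requests_per_second": 35.0, "data_freshness_hours": 12.0}
print(check_slos(observed, slos))  # latency and throughput SLOs are violated here
```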

Michael Berk [00:15:11]:
Got it. Yeah. That's very similar to how we do stuff in the field with Databricks. We look to basically define success, and that can take a variety of different forms. SLOs, SLAs, KPIs, they're an amazing start. But, also, what are your thoughts on, sort of, like, the fuzzy, did this feel right, definition of success? Like, are people happy? Does that matter, or is it all about meeting hard, measurable criteria, in your opinion?

Abi Aryan, [00:15:41]:
It depends, which is, when it comes to bigger companies, a lot of stuff works by intuition because, again, they have years of experience in the domain, which I don't. I may say, you know, this is what the error rate should be like, or this is what a good perceived latency would be. They may have a very different experience. They have that understanding of their audience and their customers that I don't have. So a lot of times, especially when it comes to that, we work collaboratively, and I don't try to stick by a measure. We do very intensive A/B testing with both the solutions. At least when somebody comes to me and starts saying, you know, this is what I think is right, I'm never the kind of person who will be like, this is wrong. I'll be the kind of person who'll be like, okay.

Abi Aryan, [00:16:24]:
This is fantastic. Are you okay with testing one more thing as well? If it's not that much of a trouble, I think we can test this for this reason. These are the pros and cons. So I get them to buy into the pros and cons of my approach. We test both things and then whichever they select, I'm not attached to the outcome, which is who has the last say in terms of things.

Michael Berk [00:16:47]:
Got it. That makes sense. Ben, did you cater to the emotional needs of customers ever, or was it always come in, say the facts, leave?

Ben Wilson [00:16:57]:
Oh, it was always whatever, not specifically the emotion. I tried to avoid that as much as possible and get them to adopt evidence based, you know, results, because that's what I'd done in previous jobs that I had. I was like, I don't care who builds this or comes up with the idea. We're all one big team here. Like, even if you're working with an external company, when you're doing that work with them, you're effectively part of their team. You want them to be successful. So sometimes I found that I had to sort of fight against that, where it was more of a sort of, who's the chieftain in the tribe who really wants their idea to be adopted, and get that person over to the side of, hey, we're all working on this together.

Ben Wilson [00:17:49]:
Let's just let the data make the decision for us. You know? You're looking to amplify click through rate for this recommender system. Let's test it. You know? But when we're doing the evaluation, let's avoid all of these pitfalls of cherry picking. So here's how we're gonna design our experiment such that we're gonna collect very high quality data and then use the appropriate statistical methods to evaluate this based on, you know, temporal effects and cohorts of humans that are interacting with this. And I spent more of my time when doing, like, project work historically, way more time, usually 4 to 10x the amount of time, designing an experiment and setting that up than I did on building models and pipelines and stuff. Because model stuff is generally somewhat easy. Tuning them is generally somewhat easy.

Ben Wilson [00:18:45]:
But setting up, like, designing an appropriate experiment is just a lot of brute force work.
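Editor's note: a minimal sketch of the kind of evaluation Ben describes, comparing click-through rates between a control and a treatment cohort with a two-proportion z-test. The counts, the equal-sized cohorts, and the use of SciPy are assumptions for illustration, not details from the engagements he mentions.

```python
import math
from scipy.stats import norm

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test comparing CTR of control (A) against treatment (B)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_a, p_b, z, p_value

# Made-up counts; cohorts are collected over the same window to limit temporal effects.
print(two_proportion_z_test(clicks_a=540, views_a=12000, clicks_b=612, views_b=12000))
```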

Abi Aryan, [00:18:52]:
I think a lot of that comes with experience as well. So for me, like, I started my master's when I was 19 years old, so I finished when I was 20. I didn't start out as this kind of person. I was the kind of person who would challenge the CEOs directly and always be like, I'm ready to fight, and I'll call people stupid. Your feelings are not my problem. So when I went into, like, the workplace, that was my attitude. But I think it took me about 2, 3 years to learn that that's not the right kind of attitude, and that there's another way.

Abi Aryan, [00:19:30]:
So, especially when, like, I was in school, my head told me, one of the things you lack is diplomacy. And I was like, I'm going to learn how to be diplomatic as a person. And I always had this negative perception of diplomacy, which is, it's basically people pleasing. But then I just learned, I think at one point I had this realization, which is, you have data and you have information. The big difference is, data is something which is raw. Information is basically something which is ready to be perceived by people. So there's a difference: you have an understanding of the state of the other person as well. And what I stopped doing was giving people data. Now I think more in, like, the terms of information when I'm communicating with a team.

Abi Aryan, [00:20:14]:
It doesn't have to be just data. It has to be information itself. But that certainly came with making some mistakes in, like, my early career.

Ben Wilson [00:20:25]:
Have you ever seen, I can say I have seen, have borne witness to this, where an effective person who understands how to do that data to information conversion, if they're significantly talented in mathematics and statistics, chooses to craft the narrative to the way they wanna see it? And have you ever caught somebody doing that and been like, please show me your work?

Abi Aryan, [00:20:52]:
Yeah. There's a professor at USC. I'm not gonna name names, but he's one of the professors in the computer science department. He basically created research. He published a paper that got accepted as well. But what he did was select the data. He basically rejected all the outliers that he was seeing to get the kind of map or the graph that he wanted to. And a lot of his research was essentially wrong, which is he perceived correlation as causation.

Abi Aryan, [00:21:26]:
And I was like, you're dumb. I can say it, but in my head, I was like, you're dumb. But, I mean, he's not essentially dumb. He knows what he's doing. It gets him grants and all of that stuff. So, yes, I have. People do manipulate data every now and then. They tend to leave out certain data points when it suits them.

Abi Aryan, [00:21:50]:
But I think that's one of the beauties of large language models, I would say. With conventional machine learning models, one of the basic jobs that we were doing to make the model work really well was to reduce all the missing data and outliers as much as possible. And now with large language models, we are at that point where we want the outliers. They make the model quality better. So I have that sort of message for people who manipulate data: fantastic, that would work well for you till, I don't know, like, 2019. What will happen in 2050 when we have more advanced machine learning models? They would need those outliers to get fantastic performance. So, I mean, keep your tricks, but they will get outdated very soon.

Michael Berk [00:22:33]:
Oh, man. Wait. Did you confront this unnamed professor, or what happened? Or were you just like, he's dumb, and didn't say anything?

Abi Aryan, [00:22:41]:
No. I was mature enough to not react, like, without a reason.

Michael Berk [00:22:47]:
You didn't, like, run up to him and smack him in the face and say, you did this wrong? Okay.

Abi Aryan, [00:22:52]:
No. I just pointed out, which is, I think you missed a couple of things. And I pointed out the factual inaccuracies, but I didn't say anything about him or why he chose to do something. I was like, I think you missed this point. Obviously, it was put in a way that was way more polite, and the conversation in my head was not that polite.

Ben Wilson [00:23:18]:
I mean, that's a benefit you get from, I think, academic peer review of ideas. But depending on what the politics are at a particular organization in industry, there is no peer review, or peer review is stifled intentionally, and dissenting voices are kind of, you know, pushed to the side when somebody has this grand idea. Do you ever see that in consulting work, or have you seen it in consulting work, where you're like, okay, I'm walking into a political minefield right now, and I need to navigate this in such a way that we're making the same sort of evaluations of an idea, or of the results of the test of an idea, that we would in academia?

Abi Aryan, [00:24:12]:
So somebody having to say in the final opinion or somebody just suppressing everybody else's opinion? Is that is that what you're asking?

Ben Wilson [00:24:22]:
Usually, the worst things that I've seen are the person coming up with that idea, or, you know, they're going off on their own building something to prove to everybody that their idea or their approach is better than anything else, and then not showing their source code.

Abi Aryan, [00:24:42]:
So especially in consulting, I don't think I've seen people not showing me the code. Most of the reason is, I think, because I'm doing consulting, and they're hiring me and paying me for a certain reason. So I'm there to help in the first place. So no. But one thing I've seen is people building their own stuff. And with that, I try to be more open minded now. I would say in the past 6 years that has been a change, which is, if somebody's taking the initiative to build something, even if it's bad, then I don't go in and say, okay, I think this is stupid.

Abi Aryan, [00:25:21]:
I'm more like, okay, I think, fantastic, you've done this. Now let's see how we can improvise on top of this thing itself. So I steer them back to my thing by saying, let's improvise on this. So if they've implemented, let's say, one kind of evaluation system, then I would be like, what's the result of this? Can we try this? Can we improve this by any chance? And let them brainstorm. So that's something which I think Robert Greene says in his books a lot, which is, let people think it's their idea and not yours, and you'll have it very easy in the workplace.

Ben Wilson [00:25:58]:
Could not agree more. We've done an entire podcast episode about that, actually.

Michael Berk [00:26:02]:
I was about to say. Yeah. Sounds kinda familiar. Just to recap for our minds, Ben's conclusion, at least, was that if someone thinks it's their own idea, they'll be a lot more emotionally attached to it, and, therefore, they'll champion that idea a lot more than if you say, this is the correct method.

Abi Aryan, [00:26:21]:
But the thing I hate is, I don't hate people who take initiative and build their own thing. I hate the kind of people who just want to have that mental masturbation in the room and say why something isn't going to work. Those are the people I find are the barriers to success, as compared to people who go off on a tangent entirely and work on their own solution. So execution is something I appreciate even if it's not perfect. Even if it's bad execution, I'll appreciate it.

Michael Berk [00:26:51]:
So to clarify, if someone just is a naysayer that's, like, putting down people's ideas without an alternative, that's what you don't like?

Abi Aryan, [00:27:00]:
No. Even if there's an alternative, if you're not executing, if you're just talking about things, then that's something which I hate. Not everybody is an executor. There are a lot of people who just like talking about things. They're very articulate. They sound good. They know how everything works. But when it comes to implementation, they can't do crap. So those are the people I find particularly annoying.

Abi Aryan, [00:27:24]:
And so, between them and a person who does their own thing and, you know, is super stubborn, I'll go with the super stubborn person who will implement something, because I know I can work with that person very easily. I can say, let's improvise on this. Let's implement this. I've written this code. Can we integrate this? I'll give you my code. Can you sort of implement this in yours? And so I find them easy to steer, as compared to the people who are like, I'm not gonna do anything, but I'll tell everybody what to do.

Ben Wilson [00:28:00]:
The idea people. I share that sentiment with you. Those were always the most frustrating humans to interact with in a technical environment, because when pressed to, like, okay, you said this idea is terrible, you have this better way of doing it, show me, on Friday. You got 4 days. Like, it doesn't have to be perfect production code.

Ben Wilson [00:28:26]:
Just get something. You don't even have to test. I just wanna see it execute. And they always have excuses or just fight against that. And then sometimes having that conversation with them 1 on 1, be like, can you actually implement what you said is a better idea? And a lot of times, those better ideas that you hear are not even possible outside of theory. It's like, yeah, that algorithm that you're talking about, this is how long it would take by my estimation. Let's write a simple test to see. You know, you think this means of optimizing is substandard, but you have this great idea. What's the operational complexity, like, the big O notation, for what you just explained? And then you walk through it on a whiteboard with them. Like, this is, you know, of such significant complexity, O(2^n) or something, and the minimum value of that is in the tens of thousands for this iteration.

Ben Wilson [00:29:27]:
This isn't gonna complete on, you know, silicon based architecture anytime soon. So let's just go with this other one that works.
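Editor's note: the whiteboard argument Ben sketches is essentially a back-of-envelope feasibility check on exponential growth. The throughput figure below is an illustrative assumption, not something quoted in the episode.

```python
# Rough feasibility check for an O(2^n) algorithm at a few problem sizes.
ops_per_second = 1e9  # assume roughly a billion simple operations per second

for n in (20, 40, 60):
    operations = 2 ** n
    seconds = operations / ops_per_second
    print(f"n={n}: {operations:.2e} operations, about {seconds / 86400:.2e} days")
# n=20 finishes in about a millisecond; n=60 would take tens of years.
```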

Abi Aryan, [00:29:38]:
Yeah. I think that's the thing with the idea people, which is they always have their ideas, but they have their ideas on paper. They're not essentially their ideas when it comes to execution. So it's one of those frustrations that I've shared, especially when working with junior software engineers, which is one of the big conversations among junior software engineers: let's use this language, let's use this framework, which the company isn't using. And 99% of my effort, especially working with those people, is usually trying to get them to use the ecosystem or the tools and the frameworks that we already have.

Abi Aryan, [00:30:19]:
Because there's a very good chance, and I've seen it with them, which is a couple of them failed as well. I've not made them fail, but there are things they didn't consider, which is it's not scalable enough. You know, there are not enough libraries for the kind of application that we're looking at. There will be a point where they'll hit a hard stop, and they're not willing to go build an open source library to support that kind of missing piece or the missing dependencies, as compared to what the company is already using. So those were sometimes frustrating kinds of people, but I think that happens especially when you work with juniors in any sort of work, because they wanna experiment with things. They wanna learn more. And I think the learning happens when you start optimizing instead of experimenting with a lot of different things. Or at least that has been my experience, which is the most learning I've done is when I try to optimize a system that has already been built by somebody else.


Michael Berk [00:31:22]:
Yeah. That's a really, really interesting point. You already have a logical framework and guidelines that you must navigate. So I always found that constrained problems are actually a lot more interesting, a lot more fun. Because it's not greenfield, I can't just go do x y z. I actually have to think critically, and it's sort of, by definition, more challenging. But also, because it's within a logical framework, there's something existing that's already meeting a use case, so there will be value from the outset. So if you can improve by 10% on whatever metric of interest it is, latency, accuracy, satisfaction, you can really provide a lot of results that are valuable.

Abi Aryan, [00:32:05]:
Also, another anecdote of something very similar: one of the consulting jobs I was doing, I was working with a company, and they said, why don't you implement a bit more complex deep learning models for the kind of application that we have? Because, again, they wanted to make the presentation to the VCs and such. And the existing models that they were using for another application, but in the very same domain, were working very well. But they were simple statistical machine learning models, so, like, GBM kind of models and such. So that's one conversation I've had to have with the CTO of the company. I was like, I'm okay with implementing a more complex model, but you have to know that I'm not guaranteeing that it will have better performance. What do you exactly need? So let's just think about your KPIs first. Do you want to be able to catch errors at so-and-so rate? If so, I can take the existing algorithm that you've applied for a different use case in the same domain.

Abi Aryan, [00:33:13]:
I'll implement that same thing and improve the performance on that one. And that's essentially what we did. So I got that pushback about 2, 3 times in the process, which is, every single time a board meeting was coming up, they wanted to show something which was cooler. And 99% of the time, the cooler things aren't essentially what will get you closer to solving the actual problem, or the business problem itself, because some algorithms do work really well with a particular kind of data. So let's say if you have geopositional data or if you have time series data, then any of the gradient boosting and ensemble methods will work very well with those kinds of data, as compared to, like, let's try to implement a transformer model on that one. It's probably not the greatest idea.
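Editor's note: a minimal sketch of the simpler-model approach Abi describes, using lag features with scikit-learn's gradient boosting. The toy series, the lag setup, and the holdout size are invented for illustration; real work would use the company's data and proper backtesting.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy weekly sales series with yearly seasonality plus noise (made-up data).
rng = np.random.default_rng(0)
sales = 100 + 10 * np.sin(np.arange(200) * 2 * np.pi / 52) + rng.normal(0, 3, 200)

# Simple lag features: predict this week's sales from the previous four weeks.
lags = 4
X = np.column_stack([sales[i:len(sales) - lags + i] for i in range(lags)])
y = sales[lags:]

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:-20], y[:-20])         # hold out the last 20 weeks for checking
print(model.predict(X[-20:])[:5])   # forecasts for the first 5 held-out weeks
```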

Michael Berk [00:34:08]:
Yeah. Your story makes me viscerally frustrated. Why do people think that... like, what is cool? Can you define for me what other people think is cool? And, like, why is the most optimal solution not the coolest solution? For both of you: like, genuinely, what in human nature makes a transformer cooler than linear regression?

Ben Wilson [00:34:33]:
Because it's new, I think. I think humans are fascinated by shiny new objects, whether that be an idea or a new technology or a place that we haven't gone before as a species. People get really excited about that, because, by nature, evolutionarily, we are explorers. We are always curious about the things that we don't know about. And sometimes, I mean, not sometimes, many times, I've seen that exact scenario play out when working with a customer. I remember having a conversation for too many hours with a team of data scientists, where I think the most senior person on the team was 18 months out of undergrad, and they're just like, we really wanna get, you know, we have this time series, you know, this forecasting model that's supposed to, you know, estimate how much we're gonna sell of all these different products. I'm like, cool.

Ben Wilson [00:35:39]:
Awesome. Like, how does it work? And they're like, well, dude, it's not totally optimal for these particular types of products in these particular regions. Like, okay, are you, like, missing data? You have, like, no sales for a couple of weeks or something? They're like, yeah, yeah, that's exactly it. I'm like, okay, tell me what your proposal is.

Ben Wilson [00:35:58]:
Like, why am I talking to you, basically? And they're like, well, we need a deep learning expert to build, like, a single meta model that does all of it. I'm like, I'm not that person. I can build deep learning models, but I don't know if I can, like, replace what you have right now with a single model. I don't know if you have enough data, and I don't know what the appropriate architecture might be for constructing that with what you want, PyTorch. I'll go and read some papers for you, though, and I'll try to figure some stuff out. But, like, let's think about how we can solve that problem that you have of, hey, these certain markets and these products, I don't have enough data to, you know, estimate what that forecast should be. Can we just think about how we're assigning these hierarchies of where these products are in what markets? Do we need to have it as each individual SKU at each store? Is that effective? Can we look at it geographically? There's gotta be a distribution warehouse that provides these materials to these stores. Can we build a model for the sparse data that goes at that distribution center? And then the distribution center has humans that are smart, and they can figure out where to ship stuff when they need it, but you need to place orders to handle that distribution center.

Ben Wilson [00:37:31]:
And there were a couple of people in the room. They're like, we never thought about that. Let's try that. And we were done with the engagement within 4 days, because all we did was just make some very simple configuration changes in the code, and it worked. They were bummed out that we couldn't use a Torch model to do it all. So I gave them a little, like, you know, hackathon project that I worked on with them, where they wanted to learn Torch, and that was really their excuse. So we did some image classification and object detection just for fun, with, like, data that they had. They had warehouse cameras and video feeds. So, like, let's detect if we can see if there's a risky operator moving forklifts around, in order to be able to detect when an accident may occur.

Ben Wilson [00:38:23]:
And they built, like, this cool prototype, shared it with management, and they're like, we need to actually make a project out of this for safety. And I'm like, cool. That's an appropriate use of that model or of that framework.

Abi Aryan, [00:38:40]:
Yeah. I think I agree, which is there are 2 sides of it. One is the side that Ben mentioned, which is there's that curiosity to learn, for whatever reason, because, a, they wanna put some sort of monopoly in the market, which is, we've applied an algorithm which none of the competitors has already applied, so we're ahead of the curve. The second part is the FOMO side of things as well, which is there's only so much stability that we have in the workplaces today. It's not unusual for companies to go down or for careers to stagnate at a place, and you don't wanna lack skills as well. So I think a lot of people come from that FOMO perspective as well, which is, if everybody else is using transformer models, I wanna implement transformer models as well, because when applying to a newer place or a newer company which is now doing well, they might be working on transformers.

Abi Aryan, [00:39:36]:
So I need to have some sort of work experience in my current workplace that already implements that. So from a career standpoint, and, I mean, even when it comes to talking to VCs, it's the same thing, which is people who pay you money. Most of the people are not paying you money to keep doing boring things. 99% of the time, you have to innovate in some way or the other. And 99% of people cannot really innovate in terms of customer acquisition itself, so what they try to do is innovate in terms of tech instead. And I think that's probably gonna change with what is happening, I think, with all the people who are building wrappers right now, which is a lot of people have made a lot of money for particularly that reason, which is you don't always need to innovate on the technology. You can use the endpoints as is.

Abi Aryan, [00:40:34]:
Understanding customer acquisition is usually the hardest part of the problem. But it's also the more frustrating part of the problem. It's not the sexy part of the problem to a lot of people as well.

Michael Berk [00:40:48]:
What are some problems that you're currently excited about?

Abi Aryan, [00:40:53]:
So a couple of them. One is, I would say, recommender systems, because we've worked extensively in, like, the ecommerce space, and mostly in that domain where we were working on the recommendation system one way or the other. So those are the ones which I'm personally excited about, because there's so much business there. There are so many retail businesses, and there are so many ecommerce businesses, and they are still struggling with that. People are not able to search for the right thing. Or there's a very minimal comparison that people are able to do right now on their websites. Even when you take Amazon as well, which is, even though they've gotten good at recommending people more products, they can get better as well.

Abi Aryan, [00:41:43]:
And that's one place where most of the Shopify based companies are sort of missing out, which is, if they are able to recommend things better, then they are able to sell more. So I'm a capitalist when it comes to that. People should sell more. And the second is, I would say, probably a lot more people are excited about autonomous agents, but I'm more interested in audio speech synthesis. Because, again, there's very high latency with Amazon Alexa and all of those systems that have been working with so much real time data. I think that's one place which has been lacking attention, which is there are not large systems, home devices, and home systems in this space, particularly because of that reason. One is the size of the data, and the time taken to basically do inferencing on the models itself can be very high.

Michael Berk [00:42:48]:
Got it. Going one level deeper, why are those the areas of interest? Did you do your 700 paper lit review and think these are the core issues, or is it just cool to you?

Abi Aryan, [00:43:00]:
So I worked in audio speech synthesis in 2021, 2022, which is when we were building things up. So for one of the companies, we were building up their TikTok kind of application. We didn't get funded eventually. We ran out of money. But the other application that we were building was a voice filtering system. So, basically, like, I don't know if you've used this app called Voicemod, which allows you to apply filters and, you know, sound like a child or sound like an old man or something. So what that allows you to do is to mimic certain voices or to be able to change your voice into something completely different. So for me, I was like, let's try to work on this.

Abi Aryan, [00:43:48]:
We very quickly ended up hitting limits when it comes to hardware itself. So in Windows systems, their drivers are pretty locked in, and when it comes to Mac systems as well, their drivers are not as open. There's a single, like, free tool, I think it's called Blackmagic or something like that, which allows you to be able to work on something like that. But Microsoft isn't working on improving that side of things either on, like, the entire Windows system. And one of the core reasons for working on that was there are so many people who are still dealing with the, oh, I don't understand your accent. So for me, it was more, one, from an educational point of view, and second, from a hiring point of view as well, which is we can anonymize people by anonymizing their voices and anonymizing their accents as well. So we can reduce some bias.

Abi Aryan, [00:44:42]:
Same when it comes to YouTube as well, when it comes to creators. I think one of the big limitations for them is basically their voice. We can sort of change that entirely. When it comes to movies as well, you can do the same thing very quickly, which is you can very quickly give people an accent wherever you want using their original voice, without taking away, basically, how they actually sound.

Michael Berk [00:45:06]:
Got it. That's super cool. Yeah. And there's obviously a bunch of nefarious applications of that, but there's also some really good ones. Low latency, like Alexa, as you mentioned, that's a classic example that would improve many lives. And are you planning on...

Abi Aryan, [00:45:22]:
I'm just working on it, but there are very few companies there. So there's probably 4 or 5 companies that I know of in the space that are working very specifically on audio speech synthesis. So I think it has so much potential, especially, like, when you start implementing Alexa systems or thinking every single device in your house is going to be voice enabled in the future, which is your beds will be voice enabled, your tables will be voice enabled, your bulbs and everything will be voice enabled. The big barrier would be those technologies sort of understanding people's accent and people's voice and how they create their sentences, all of those things. So right now, I think what Google has done is they've really provided a lot of data to Google Assistant, or OK Google or whatever their thing is, to be able to understand how people say certain things. But I don't think it's close to somebody speaking like a local. So, for example, if my mom tried to speak to it, probably she would have problems getting the exact search that she wants to. And that, for me, is, like, one very important point of being able to search in the future, which is we're not gonna type as much, at least for very simple things.

Michael Berk [00:46:35]:
Do you think we're gonna need to create AGI to solve... because all these problems are tangential, like self driving cars, speech, audio. Right? Is there gonna be one model that just bashes a bunch of these problems and checks off all the boxes, or are there gonna be small independent models that do each one individually?

Abi Aryan, [00:46:55]:
So I would say, I mean, it depends on the scale of the application itself, which is, when it comes to companies like Google implementing such a solution, instead of having one single model, they would have multiple models, which is, it comes down to the compliance side of things and the optimization side of things. But if a company has, let's say, about 1,000 users for the application, they probably don't need that level of optimization. Or if they're simply focusing on a single geography, they might have just one model. So it comes down to, like, what is the serving scale and what is the diversity of that serving scale itself.

Michael Berk [00:47:34]:
Yeah. That makes sense. It's like the general purpose GPT versus a fine-tuned Llama 3, for instance. Okay. Cool. And then another question to that end: how are you spending your time? Are you working on these speech recognition models? I heard you're writing a book. Is that going well?

Abi Aryan, [00:47:55]:
So we're working on the early release of the book. I've already submitted 6 draft chapters. We have 10 chapters in the book, but I write about 30 drafts before I actually finish a chapter and say it's done. So it will be by, I think, end of this year or early next year by when the book would come out. But the second chapter is ready for, like, early release. So that would come out probably next month or so. So, yes, that's what's happening. The consulting work is still happening, which is moving away a little bit from, like, spending so much time on development, which is the engineering side of things for them, and mostly focusing on the training side of things. Right now, last month itself, I was training a team, which is a couple of software engineers, IT people, and everybody, on how to implement large language models safely, helping them build solutions.

Abi Aryan, [00:48:57]:
So, like, instead of hiring a GenAI team from outside, and instead of hiring random people from Twitter, because everybody sounds like the smartest person on Twitter, and you hire them and they don't know how to do anything, a much better solution is hiring people internally, and that's what a lot of companies are doing right now, which is taking people from different teams and combining them together and training them to be able to build solutions, because they already understand the company and the problems inside of it. They have a very realistic understanding of what the KPIs would be and when implementing something would make a difference and when it doesn't. So that's been one of the focuses. And then working on my own startup as well, which is, we're working on a product to mostly, like, build a second brain, which is more like a dream project. I've wanted to do that since I was probably, like, 19, 20 years old, because I've got ADHD. I can't remember all of the things. I don't have the best executive function, which is something I could learn.

Abi Aryan, [00:50:06]:
I do sort of really, really teach myself how to not get overwhelmed, and also how to manage multiple things as well. Right now, basically, I'm the kind of person who cannot do 500 different things at the same time. There are a lot of people who can be highly stimulated at all times. I'm the kind of person, if I start coding, I will keep on coding for 10 hours. And if you talk to me in the middle of that, I'm not the most pleasant person to have a conversation with. So I don't do context switching that well. So that's pretty much my week, which is that's pretty much what is happening, which is the writing.

Abi Aryan, [00:50:49]:
The training, I've already created a lot of training material. I'm planning to put it on, like, some sort of website, Coursera or maybe something like that. O'Reilly, I'm already having a conversation with them. It's in the early stages. I'm talking to Manning as well. They want to implement some of the content as well. And the third thing is the product, which is where I'm putting in all my engineering juices instead.

Michael Berk [00:51:16]:
Do you sleep?

Abi Aryan, [00:51:21]:
Yes. But I have trouble sleeping, which is I can sleep 4, 5 hours, but I don't think I can always sleep more than that. Once a week, I'll sleep, like, 10, 11 hours.

Michael Berk [00:51:34]:
Wow. Sounds kinda sporadic. And it works for you?

Abi Aryan, [00:51:40]:
It does, but I just, I don't know what else I'd do. If I sleep too much, I can't. I keep waking up.

Michael Berk [00:51:49]:
Like, I could be writing code right now. Might as well wake up and do that.

Abi Aryan, [00:51:54]:
Yeah. I mean, I think it depends on how much work you are required to do, and how many dreams you have, which is, like, every single time I'm going to sleep, in my head, I'm thinking of how many problems I could solve if I was coding instead, if I was building up the solutions instead. So, for example, every single time I get irritated with humans, and I do get irritated with people a lot, I'm the kind of person who's like, I'll replace you with a machine. I don't say you're a bad person. I say, I'll replace you with a machine, in my head.

Engaged Participant [00:52:28]:
Yeah. Oh,

Abi Aryan, [00:52:29]:
So the same conversation goes, if I'm irritated with my parents over something, which is, they probably didn't ask me what I'm going to eat or something like that, I'll be like, one day I'll build a machine that will ask me what to eat and bring me the same food. So there are way too many problems to be solved.

Michael Berk [00:52:51]:
So that's why you started this company, to replace your parents.

Abi Aryan, [00:52:55]:
No. Not to replace my parents, but to replace people. To replace the dependency on people. I think I'm one of those hyper independent people that doesn't really like to rely that much on others. I can ask others for help, but I don't like it when I feel helpless, which is when I really depend on you for something. So I started this company to sort of remove that reliance on humans entirely. Obviously, it will not be removed. The company needs people.

Abi Aryan, [00:53:27]:
The systems cannot run themselves. They need to be monitored. There will be people working. But, again, I think as long as we're working on that solution, it is making people more and more happy, and the technology is there for the sake of human beings instead of something that we have to learn because of our careers. I think that's one of the best places to be.

Ben Wilson [00:53:52]:
It seems to be a common theme in a lot of builders that we've talked to on these interviews, that that is sort of the underlying foundation of most people's motivation. We're trying to build stuff with AI, like systems or platforms or frameworks, for people. That's effectively what you said. It's like giving people agency again. So if you are reliant on another person for things that could technically be automated, we have the technology to do that. I really like that idea that you have about, like, that second brain of, I need improved short term and long term memory storage and a system that I can interact with for that.

Abi Aryan, [00:54:39]:
One of the things that we're implementing is very similar to how our conscious and unconscious brains work, which is, we're constantly storing all the information, but we're not accessing all the information at every single point in time. So one of the things that current systems are really bad at doing is they don't have that segregation of, this was something from 2 years ago, and, you know, this is no longer relevant, this doesn't need to be retrieved as much. So it's having, sort of, that understanding of what are the most relevant and recent things that might be impacting your decisions, or what would be the things that you're looking for, so as to be able to help people with what they are trying to do next.

Ben Wilson [00:55:22]:
It's almost like implementing something like an RFM framework in terms of contextual access of memory could be beneficial for something like that. Like, how recently was it accessed? How many times was it accessed? And what was the qualitative impact associated with accessing that information? Like, how relevant was it? And then almost, like, scoring, like...

Abi Aryan, [00:55:48]:
The downside of that, I would say, though, is if you implement basically weight associated stuff with it, then the problem is it converges too quickly as compared to looking at the entire hyperparameter space. You're just missing out on so much information, so what's the point of training the model on the massive data? So that's one of the reasons we're implementing knowledge graphs, so we can learn the hierarchies of certain information and how that is basically stored within the model itself.
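Editor's note: as a rough illustration of the RFM-style memory scoring Ben floats here (recency, frequency, and a qualitative impact score), this is a minimal sketch. The weights, the half-life, and the field names are invented assumptions, and, as Abi notes above, a fixed weighting like this is exactly the kind of scheme that can converge too quickly for her use case.

```python
import math
import time

def memory_score(last_access_ts, access_count, impact,
                 now=None, half_life_days=30.0):
    """Score a stored memory by recency, frequency, and qualitative impact.
    Weights and half-life are made-up illustrative values."""
    now = now if now is not None else time.time()
    age_days = (now - last_access_ts) / 86400
    recency = math.exp(-age_days * math.log(2) / half_life_days)  # halves every 30 days
    frequency = math.log1p(access_count)                          # diminishing returns
    return 0.5 * recency + 0.3 * frequency + 0.2 * impact

# Example: a memory touched yesterday, accessed five times, judged moderately useful.
print(memory_score(last_access_ts=time.time() - 86400, access_count=5, impact=0.6))
```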

Michael Berk [00:56:25]:
Do you mind giving us a 60 second overview of the architecture and secret sauce that can be publicly shared?

Abi Aryan, [00:56:36]:
If that's possible, probably in a few months from now, once the implementation is out. It may be too early for that. But not from, like, okay, it's something to be hidden; it's that we need to know it works well. So we've not done evaluations entirely on that system yet.

Michael Berk [00:56:56]:
Got it.

Ben Wilson [00:56:56]:
I'm curious about what the plans for evaluation of something like that would be, when you're accessing something in that knowledge graph about, like, okay, what am I gonna retrieve, and how is that gonna change over time? Would you use something similar to, like, almost like an evolutionary algorithm to determine where you're gonna go for modifying or maintaining that system?

Abi Aryan, [00:57:24]:
So I would say evolutionary algorithms work really well when you're looking at a problem that needs creative solutions, which is, how can I look for something new? So any sort of open-ended problems and all of those kinds of problems are where evolutionary algorithms work well. The way we're doing evaluations for this is, particularly, we've done 2 parts. One is, we've focused a lot on the retrieval itself. And when it comes to retrieval, there are a few parameters of retrieval. For example, like, there's recall, there's MRR, there's context recall as well, and there's context relevance you're looking at. What we're doing is we're also integrating some sort of KPIs with that that are associated with human input as well, which is, how much of that is being used? How often is the person querying again and again for the similar kind of information as well? Are they digging deep into this thing, or are they trying to reiterate the same question itself? If they're asking the same question again, then probably we're not giving them the kind of response that we should have generated in the first place. So what I've done is basically created a sort of system to be able to rank the responses based on how people are interacting with the model itself and the kind of consequent conversation that is happening within that context itself.
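Editor's note: a minimal sketch of the two kinds of signals Abi mentions, standard retrieval metrics (recall at k, MRR) alongside an interaction-side signal for repeated questions. The document ids, the similarity check, and the exact formulas for the interaction signal are assumptions for illustration, not her system's implementation.

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of relevant documents that appear in the top-k retrieved list."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def mean_reciprocal_rank(queries):
    """queries: list of (retrieved_ids, relevant_ids) pairs."""
    total = 0.0
    for retrieved, relevant in queries:
        rr = 0.0
        for rank, doc in enumerate(retrieved, start=1):
            if doc in relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(queries) if queries else 0.0

def repeat_query_rate(session_queries, is_same_question):
    """Interaction signal: how often the user re-asks essentially the same question.
    `is_same_question` is a pluggable similarity check supplied by the caller."""
    repeats = sum(1 for prev, nxt in zip(session_queries, session_queries[1:])
                  if is_same_question(prev, nxt))
    return repeats / max(len(session_queries) - 1, 1)

# Toy example with made-up document ids and a crude first-word similarity check.
queries = [(["d3", "d1", "d7"], {"d1"}), (["d2", "d9"], {"d5"})]
print(mean_reciprocal_rank(queries))               # 0.25
print(recall_at_k(["d3", "d1", "d7"], {"d1"}, 3))  # 1.0
print(repeat_query_rate(["how to deploy", "how do I deploy", "pricing"],
                        lambda a, b: a.split()[0] == b.split()[0]))  # 0.5
```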

Ben Wilson [00:58:56]:
So you could effectively train a system or a framework that can do this such that it's optimized for human retention of information at the appropriate level of response.

Abi Aryan, [00:59:09]:
Yeah. So if

Ben Wilson [00:59:09]:
you're like, hey. We're giving too much information in this response because it has all of this context, or it's too complex of a topic to be at the level that we're presenting it. So somebody has to search it 7 or 8 times over a 2 week period to really grok what's going on.

Abi Aryan, [00:59:28]:
Exactly. So that's

Ben Wilson [00:59:29]:
If it's too high level, people will, like, start digging even further into it. You're like, okay. Maybe we made this too simple.

Abi Aryan, [00:59:37]:
That's essentially what we're doing: we're building a ranking system based on that very specific thing, which is, based on every single conversation, can we rank how much information was actually used and how much was asked for once again? Let's say the person was only looking for certain things and we're giving entire definitions, and then a table or something, extra context around the same thing, which is not really useful. For example, if you're asking who the competitors of x, y, z are, or what the alternatives to the x, y, z framework are, instead of giving the definitions or deep dives into how every framework works, just give the simple names, let people generate the next question, and see if that question is relevant to one of the names we already presented in the earlier conversation.
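
A toy version of that check, assumed rather than taken from the actual system: present a short list of names, then see whether the user's next question references one of them, which would suggest the concise answer was enough to drill into.

```python
def names_in_followup(presented_names: list[str], followup_query: str) -> list[str]:
    """Return the previously presented names that the follow-up question mentions."""
    q = followup_query.lower()
    return [name for name in presented_names if name.lower() in q]

def rank_response(presented_names: list[str], followup_query: str) -> float:
    """Crude reward: 1.0 if the user drilled into one of the names we gave,
    0.0 if the follow-up ignores them entirely (the answer may have missed the mark)."""
    return 1.0 if names_in_followup(presented_names, followup_query) else 0.0

# Usage (hypothetical framework names)
answer_names = ["LangChain", "LlamaIndex", "Haystack"]
print(rank_response(answer_names, "How does LlamaIndex handle document chunking?"))  # 1.0
```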

Ben Wilson [01:00:33]:
How do you feel about bootstrapping systems like that?

Michael Berk [01:00:36]:
Like,

Ben Wilson [01:00:37]:
there'd be an initialization of the state for a particular user, and then an ID would be assigned to a human that's interfacing with that system. Would that adapt over time? And do you think products like that are viable? There's some amount of that with OpenAI, where the session ID associated with it now spans across multiple sessions. It seems like it's kind of customized. Like, my wife and I have tested stuff, where I'll ask a bunch of questions, and then she'll ask the same thing but in different ways, and we see, does it diverge at some point? To a certain degree, yeah. But how do you feel about hyper personalization of stuff like that, where everything you ask it, the system starts to build adaptable responses for that individual person that are actually relevant?

Abi Aryan, [01:01:30]:
How do I feel about that? I think that would be ideal in the long run, in the sense that people are getting the information they want, but it would be really hard. That's one of the reasons why I said evolutionary algorithms are not one of the greatest choices, especially in this kind of scenario. Because 99% of the time you're generating a different response for different people, it's very hard to test what actually remains accurate. So you have to always maintain the level of skewness between responses, and as long as that skewness doesn't cross a certain threshold, then you could.

Michael Berk [01:02:22]:
You're optimizing messages for human understanding. How do you evaluate that?

Abi Aryan, [01:02:32]:
Messages for human understanding. So what we're talking about is basically, like, the generation side of things, right?

Michael Berk [01:02:39]:
Yeah. Like, you were talking about how much of the information in a piece of text is actually used and how much is sort of discarded by the human brain. How do you evaluate that?

Abi Aryan, [01:02:50]:
So as of now, what we've really implemented is, basically, query-level understanding: how relevant the current query is to the last query, how many words are reused, and how much context from the previous conversation is reused once again. On the basis of that, we're building up a threshold system. But, again, that's one of the reasons why I said maybe it's not the best time, because as of now, all of the optimization we've done is just for one person, which is myself. We haven't really put it at scale, where we're testing the system with, like, a hundred users and then seeing what really happens. That's when I can say for sure, you know, that we've established the relevance of this thing and we know the system actually works.
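
A sketch of that query-level reuse signal, again as an illustration rather than the real implementation: compare consecutive queries on word overlap and flag a re-ask when the reuse crosses a threshold; the threshold value is an arbitrary placeholder.

```python
def word_reuse(prev_query: str, curr_query: str) -> float:
    """Fraction of the current query's words that already appeared in the previous query."""
    prev_words = set(prev_query.lower().split())
    curr_words = set(curr_query.lower().split())
    if not curr_words:
        return 0.0
    return len(prev_words & curr_words) / len(curr_words)

def is_reask(prev_query: str, curr_query: str, threshold: float = 0.6) -> bool:
    """Flag a follow-up that mostly repeats the previous query -- a hint that the
    earlier response didn't contain what the user needed."""
    return word_reuse(prev_query, curr_query) >= threshold

# Usage
print(is_reask("what are alternatives to framework X",
               "alternatives to framework X please"))  # True
```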

Michael Berk [01:03:45]:
Crystal clear. Okay. Cool. Thank you for letting us dive a little deeper. Excited to maybe see some blog posts. Would they come out in a few months?

Abi Aryan, [01:03:57]:
There might be blog posts fairly regularly, one a week. I've put some down into my drafts, but I've not published them; I was planning on doing it more regularly. Now that the book is coming out, I need to generate interest, so it's primarily for marketing reasons. But one of the things that's already out is a report I put out on what LLMOps is. That is on the O'Reilly website. I'll give you a link over email after this.

Abi Aryan, [01:04:27]:
So what that does is it defines what large language model operations is, why we need it, and the five or six applications that we didn't get a chance to talk about much here. It also covers the big problems when it comes to operationalizing a lot of them, and how that diverges from MLOps, the basic MLOps that we were doing for the past few years. There's not a massive focus on tools and frameworks, but there's a big focus on what we need to measure across the life cycle and what can essentially go wrong.

Michael Berk [01:05:09]:
Crystal clear. Okay. Cool. Well, we are over time per usual, and we didn't even get to the stuff we were talking about before kicking off the recording. Maybe there's another episode in store. But I'll quickly summarize some of the cool points that I heard in this episode. First, when you're breaking down problems, try to break them down into SLOs, SLAs, and KPIs.

Michael Berk [01:05:35]:
Metrics are super important for evaluating success, and also, from a contractual perspective, you can point to them and say, hey, we did the job or we didn't. When communicating, try to convert raw data into context-aware information, where the context includes not only the person that's receiving it but also the problem itself. You learn more when optimizing than when just doing greenfield prototyping, and evolutionary algorithms work well for outlier cases, for instance, creative problem solving for your digital brain. So, Abby, if people wanna learn more about you, your work, your book, where should they go?

Abi Aryan, [01:06:10]:
The best place would be my website or LinkedIn. I'm not super active on Twitter these days, but there's already a lot of content from me on Twitter. What I do there is leave threads: every couple of months I review papers and say, these are the best papers you can read in the space, and these are the summaries of each of them. So if people are looking for recent stuff, go to my Twitter. If people are looking for more implementation and operational engineering stuff, go to my LinkedIn. If you want deep dives into something, then go to my website, where I have a blog. That has really good, very deep posts with a lot of mathematical stuff that I can't really put on Twitter. Like, LaTeX equations wouldn't really work well on Twitter.

Michael Berk [01:07:03]:
Right. Yeah. Twitter is more short form, little "math is cool" statements, not full LaTeX. Yeah. Okay. Cool. Well, this was a lot of fun.

Michael Berk [01:07:14]:
Thank you for joining. Until next time, it's been Michael Berk and my cohost,

Ben Wilson [01:07:17]:
Ben Wilson. And

Michael Berk [01:07:18]:
have a good day, everyone.

Ben Wilson [01:07:20]:
We'll catch you next time.