Michael (00:01.474)
Welcome back to another episode of Adventures in Machine Learning. I am one of your hosts, Michael Burke, and I do data engineering and machine learning at Databricks and I'm joined by my lovely cohost.
Ben (00:12.236)
Ben Wilson, I build stuff so you don't have to at Databricks.
Michael (00:16.562)
Thank you for your service once again. So today we're speaking with Eric Daimler. Eric started off in the financial world and then became a professor at Carnegie Mellon, where he taught software engineering practice. After that, he did a million interesting things. One of them was working as a White House Presidential Innovation Fellow for machine learning and robotics. That's pretty insane, and we're going to get into that in a bit. Currently he's the CEO of Conexus,
which provides adaptable data consolidation. One of their core innovations is a migration-as-a-service API, which handles metadata management and a few other things. So Eric, what made you decide to found Conexus?
Eric Daimler (01:01.095)
That's a fun place to start. It has many different points of luck, we might say. There's right place, right time: being in the White House when this research got funded at MIT by one of my co-founders. There's people around the government knowing me and liking me, so they would tell me stuff. And then...
Lastly, when they told me stuff, I would understand it. So that's kind of a third point of luck, I guess. I had the right training to hear what they were going to say. One of the fantastic parts about being in the US government, besides the genuine feeling of service to Americans and their allies, is the very large perspective on what is present and what is coming in the future. The scale
of the problems, and therefore the scale of the solutions implemented by the agencies within the U.S. government, most notably the Defense Department, is really not found in very many places. So it gives a particular vantage point from which to view the future. This research that got funded out of MIT is based on a discovery in mathematics.
So, laws of nature: it doesn't really get any more fundamental than that. And that is what I saw as the place we needed to look to break the manual processes that were in evidence in many of the AI implementations I saw in industry and in government. So that motivated me, when I left the White House, to get personally invested
in this research, explore the commercial opportunities, and then ultimately put in more of my own money and jump in full time, bringing some of my friends along.
Michael (03:05.042)
Was there a triggering moment where you thought to yourself, we need this tool right now, or did it build up slowly over time?
Eric Daimler (03:13.066)
Well, I could see us needing this tool right away. How this got implemented, and I'm not saying anything out of school, is looking at airplanes within the Defense Department. We as American taxpayers spent a great deal of money developing the F-16, and, kind of like Windows XP, many, many iterations got rid of all the faults in the underlying infrastructure. The F-16
went through many, many years of testing. Unfortunately, when we then wanted to go to next-generation fighters, we had to throw the whole thing out. We couldn't transfer any of the schemas, really anything we learned, from the F-16 to the F-22 or the F-35. That cost us a good deal of money and, critically, a good deal of time. So besides the Defense Department working on
new solutions, NASA also realized that their 10-year cycles for project delivery can't sustain themselves. They had come to this research saying, in our history as NASA we have never compressed that 10-year cycle; we need to do something different. This is a solution to that: this fundamental law of nature in category theory,
abstract math, right, related to graph theory, kind of adjacent to type theory. That is what is going to allow the scaling of these formal methods that previously were just unavailable and had us going down these paths of ad hoc implementation.
Ben (04:55.996)
I have so many questions for you. On a non-technical note, question one: in a position like advising the federal government, and particularly White House decision makers and policymakers, you're brought questions about the possibilities of utilizing cutting-edge technology, which could be stuff that's been around for a long time. But the people that are in...
Eric Daimler (04:57.495)
Yeah.
Ben (05:24.448)
positions of power are not necessarily educated about what that means, because it's highly specialized knowledge. What were your most successful methodologies when somebody came up to ask whether a thing is possible using this technology, AI as a broad term?
Ben (05:51.365)
I'd like to know what your thought process is when you're breaking down a problem to explain back to them whether it is or isn't possible, in order to give them sufficient information to make an effective policy decision.
Eric Daimler (06:06.502)
There's a lot in there. That's a good, sophisticated question, but unfortunately there's no short answer to it. There's a range of expertise that I find. I came into the US government maybe as cynical as anybody about what's evident in these large bureaucracies. I was very happily surprised that at the top layers, the levels at which I interacted,
there are some very smart people, very motivated to contribute to the American people. For not a lot of remuneration, let me tell you; these are not well-paid jobs, and these people could get some multiple of their salaries out in private enterprise, and probably we should correct that even working for the government. So a lot of times I could be talking to people as equals. Now, when I go to elected representatives, they're lawyers, mostly.
They're not expected to know anything of technology. And as we know, if you really don't understand a technical area, you're prone to think that what is hard is really easy and the easy things are actually pretty hard. There's a fundamental disconnect absent some level of understanding. I was really pleased by Justice Stephen Breyer, when he was on the Supreme Court.
I had a good interaction with him once at a dinner party, and wow, he impressed me. I don't think even in the most technically adept circles have I just suddenly, over a glass of wine, gotten into a conversation with somebody about probability, conditional probabilities, and
Breyer was right there with me, genuinely asking questions and genuinely being able to hear me. It was really quite disorienting, but also really heartening. Probably one of the only times I had a glass of wine and felt better afterwards. So that's just a way of saying that there's a wide variety of audiences to hear whatever is going to be said. The staff
Eric Daimler (08:28.49)
of congressional representatives or senators are certainly going to be more educated than the elected representatives themselves. But what I found effective was telling stories. One of my PhD advisors, a long, long time ago, talked about the degree to which you need to tell your story with farm animals. And that always stuck in my head.
Not to denigrate the members of Congress, although some of them deserve it. But I did find myself relating very simple stories to talk about AI. That is what I found to be effective. People aren't going to remember facts. People aren't going to remember logic. As it's said, they're going to remember how you made them feel, and they're going to remember the story.
Michael (09:16.282)
What's your success rate when communicating these complex ideas?
Eric Daimler (09:21.934)
The short answer is, I don't know. I don't know if the elected representatives heard me. I would say among colleagues across the executive branch, the people with whom I interacted, peers in the Defense Department, peers in the Energy Department or Transportation, those people we could have a good dialogue with. Those are smart people. Actually, one of my colleagues
from Carnegie Mellon was at the FTC when I was in the White House, so we had these sort of interesting interactions. My role was that I could often translate between leadership and technical experts in a way that people didn't consider threatening, because I didn't have a career there. I didn't have the proverbial dog in the hunt. It's interesting to learn, and
maybe I'm just a little slower than others, but the Secretary of Transportation does not have a direct line to the FAA. So if you're talking about next-generation air traffic control, that's not a direct conversation between the chief scientist of the FAA and the Secretary of Transportation. So to determine what's real in the difficulties of implementing next-generation air traffic control, who's trusted to do that translation?
Those are the types of roles that I found myself engaged in: being a trusted interlocutor on technical issues for leadership.
Ben (11:01.56)
It's also interesting, one of the early analogies that you brought to the discussion, about fighter-generation construction and design. I remember vividly a conversation that I had in the Arabian Gulf in 2004 with a table full of our strike fighter pilots, the F/A-18 Super Hornet guys. We were sitting down,
and somebody had just seen a movie recently. I don't remember which one it was. It was terrible, but it's basically about AI taking control of fighter jets. And we had all watched it together and they're just ragging on it, trashing it. They're like, there's no way you could get a system that could ever compete with a human. And then, in the last couple of years, people were like, we don't just have AI-controlled drones, we have the capability to do hive
communication amongst AI agents that are basically trained with advanced reinforcement learning. And when you start seeing the perspective of a sixth-generation fighter that could be out there with the human kill order being an interlock before engagement, I'm interested to hear from somebody who's been on the inside of policymaking decisions like this, where this could
fundamentally change the global landscape if technology is applied in that way to national defense. What's the general consensus, not consensus, but what's the pulse among policymakers, and also among the people in the executive branch, the career professionals deciding these things?
Eric Daimler (12:49.378)
I think this goes back to the point about what we think is easy is hard, and what we think is hard is easy. I think there's a misconception about what AI is going to provide in these theaters. We often think of killer robots, and to some extent maybe they can be that. But I think where AI is going to be providing the most value is at the command and control level. We're going to be introducing autonomous vehicles of all types, autonomous weapons, but also logistics
inside a theater, where the speed with which they've been introduced to the battlefield may not be familiar to the leadership, or may preclude familiarity for the leadership. So the leadership needs something else to be able to understand the panoply of resources available to them in any particular area at a higher velocity than we have typically been used to. That's the value,
more than bringing a drone around or having some sensors on a humanoid-looking robot; I think that's going to be a misconception. It's interesting that F-18 pilots would be the ones to bring this up, because an F-22 or F-35 pilot would understand that their airplanes are actually unflyable without AI. Those are augmented to such an extent that they're just unrecognizable to an F-16 pilot or an F-18 pilot. Or the flying wings:
those were available not just because of compute, but also because of discoveries in math. The Fast Fourier Transform was what enabled the flying wings of the 80s, 90s, and 2000s to get airborne safely, as opposed to the flying wings we saw back before we were all born, where they would come up and then crash. Right? So we had flying wings a long time ago.
The technology, in all of its manifestations, whether laws of nature in math or in compute and the like, enabled these new developments. So I use that as evidence to suggest that AI is gonna play a different role than just substituting for the pilots.
Ben (15:03.308)
Right. And it really is about blast radius. That's something we don't have a lot of discussions about with DOD customers; we have them, but Michael and I don't talk to them. But regardless of the industry that you're in, it's a common theme that we end up discussing with people who are looking to capitalize on the power of what these systems can do, not just
follow the hype and then try to play catch-up to what other people are doing. It's not the people in certain industries saying, yeah, we could build this cool bot that does this thing, or we could use an LLM to automate this little thing. The forward thinkers are the ones saying, how can we train systems that can distill information at the rate at which we can create and ingest it, in order to get these systems to
do what we currently do, way faster and with a much broader picture? Exactly as you said with that battlefield landscape. When you're talking about sensor data, I remember being deployed to a war zone in 2003, and you look around at how many ships are out there, how many aircraft are flying sorties, how many troops are on the ground during the Shock and Awe campaign of the invasion of Iraq. It's
a ridiculous amount of data that's coming in, that's being generated. And that's old Aegis weapons system stuff on the battlefield there. Imagine if you had some system that could help humans really figure all that out in real time and do simulation modeling and say, what are the probabilities of the best method of attack here? Or where are we exposed? And being able to calculate that in near real time.
Eric Daimler (16:53.114)
You said a lot of things there that are worth addressing. We often talk about speed of data acquisition, and that's all true. But if speed of data acquisition were the only problem, then equity markets would have a lot of difficulty. We have time-sliced equity market transactions to a point where
it's really in a different league, with high frequency and such, which I've never dabbled in. That's a formidable data pipe, but one we have, to some extent, solved, or at least addressed. What's different here is not just this exponential growth of data, or the velocity, but, as you point out, the exponential growth of data sources as well, which can freak people out.
So if you have an exponential growth of data sources and an exponential growth of data, the intersection of the two, where knowledge is created, where you're creating the models based on this, that's just a combinatorial explosion that people can't deal with. And that's not just a compute problem. That's also a modeling problem. How do you actually have the math so that it's tractable? Edge compute doesn't fully address that. We address this issue.
The original research out of MIT that became Conexus addressed this issue, where we need drones in the air and, in real time, we need to update the database schema upon which the drones are operating. That might be for a change in the landscape of the battlefield, obviously, but it's also for a security protocol. So you're just constantly
correcting these queries. And so evolving the queries, migrating the schema, that's a different level of thinking than just being able to take in a bigger hose of data. And it's in that heterogeneity of the space in which we need to operate that I think the world is going to orient over the next five to 10 years. It's not just about the speed of automation.
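The idea of a machine-applicable schema mapping, where data migrates with the schema instead of every downstream consumer being rewritten by hand, can be sketched in miniature. This is a hypothetical illustration only: Conexus's actual product is built on category-theoretic (functorial) data migration, and every name below is made up for the sketch.

```python
# Toy schema migration: the mapping from old schema to new schema is
# explicit data, so it can be checked, composed, and replayed, rather
# than living implicitly in hand-edited queries.

old_records = [
    {"sensor_id": 7, "lat": 33.2, "lon": 44.4, "reading": 0.91},
    {"sensor_id": 9, "lat": 33.5, "lon": 44.1, "reading": 0.42},
]

# Each new field is defined as a function of an old record.
mapping = {
    "id":       lambda r: r["sensor_id"],
    "position": lambda r: (r["lat"], r["lon"]),  # two columns fused into one
    "value":    lambda r: r["reading"],
}

def migrate(record, mapping):
    """Apply a schema mapping to a single record."""
    return {field: fn(record) for field, fn in mapping.items()}

new_records = [migrate(r, mapping) for r in old_records]
print(new_records[0])
# {'id': 7, 'position': (33.2, 44.4), 'value': 0.91}
```

The point of the real, category-theoretic version is that mappings like this come with proofs that no data is lost or mangled in transit, which a pile of ad hoc lambdas cannot offer.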
Eric Daimler (19:15.102)
It's about leadership, in all of its manifestations, in all organizations, beginning to respect and understand all the different data sources, and how they need to take advantage of what they learn from those data sources and the changing world, and then respond more quickly. That's why I'm talking about the Defense Department, and why I'm talking about NASA: because if they do it in the same old way of just dedicating another
thousand or ten thousand people from Accenture or Deloitte to a problem, they're going to come up with a solution long after the opportunity has passed.
Michael (19:53.918)
question.
Ben (19:55.705)
Interesting thing to bring up about mutability and access to mutable data sources. Do you think something is going to happen in, say, 10 or 20 years, before 20 years is up, where we're going to be working with algorithms that are self-selective, that are going to say,
I have indexes of all the data that I have access to, or that I know how to get access to, and it could be multiple disparate data sets. The same analogy we're using with the battlefield communications with drones: if it says, hey, I know I'm not currently ingesting data from live weather satellites, or,
I'm flying over this area where I've never been before; my model doesn't have pre-training on what to identify for visual recognition, but I know where to go to acquire data that tells me about the cultural customs of the people from this region I'm in, to determine what would constitute a threat, or whether there's an indicator in clothing, headdress style, haircut style, something that would help me better identify
the capabilities for threat analysis within that region. Do you see stuff like that as fully autonomous? And is that one of the things that your company's product is aiming to solve?
Eric Daimler (21:31.554)
Well, it is hard. Geoffrey Hinton says this, and he wasn't the first, but maybe the most notable to say it: predicting 20 years out is hard. Predicting timelines in general is hard. Predicting two years? Pretty easy. We all make our living on being able to predict the next two years. Maybe five. After that, it gets a little fuzzy. And just a little anecdote on the degree to which it's fuzzy:
smart people in 2002 or 2003 would probably have been challenged to predict five years out that there would be a new class of developer called an app developer. And nobody back then could have predicted there would be a job called influencer. It's hard. We can generally get the shape of it, but
we can't get the specifics very well. I remember working with a friend at IBM, where they predicted to some extent the deprecation of web pages, but they didn't identify apps as being the thing that would take them over. This is the problem: we can get the general shape, but not the specifics. So I can identify the general shape of what you're describing. The general shape is that
we're going to have these collaborative automated agents next to us, and ChatGPT is just a user-friendly example of technology we've had for a bit. We're going to have that be ubiquitous. So we can all try to imagine what that would look like if you have ubiquitous collaborative agents. The easy expression that would fall out of that is that we will soon have
something around us, not physically but digitally, representing our values in the world and interacting around how we want to take in data in all of its manifestations. What you're describing, new data sets that are not yet imported, I'm saying that today that is the difficulty. So I'm expecting it way before 20 years. If I were going to put a prediction on it, I might say 5 to 10.
Eric Daimler (23:54.154)
these data sets will begin to be brought together in a way that is accessible. But as we're saying, the expansion of new data continues to increase, and the data sources are increasing. So that's an ever-changing problem. What Conexus does is scale these formal methods so that you have symbolic AI combined with
probabilistic or stochastic AI, which then gives you a hybrid AI. And that's ultimately where I think the world is going. I'm not alone in that prediction, and there are other companies working on it. But you're not going to design an airplane on a large language model. You're not going to operate a power plant on a large language model. Yeah, you're laughing, but people think it's somehow the solution to everything: we need to have a large language model. And they're fantastic. And they certainly
Ben (24:43.012)
No, you're not.
Eric Daimler (24:52.47)
let our imagination run wild, but you need both. And that's what I'm going to predict over the medium term.
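The hybrid Eric predicts, a probabilistic generator paired with a deterministic symbolic checker, can be caricatured in a few lines. This is a toy sketch under stated assumptions: the "generator" is a random stand-in for an LLM, the constraint is arbitrary, and nothing here reflects Conexus's implementation.

```python
# Generate-and-verify: a probabilistic component proposes, a symbolic
# component proves or rejects. Only verified outputs ever escape the loop.
import random

def probabilistic_generator(rng):
    """Stand-in for an LLM: propose a candidate (may violate constraints)."""
    return [rng.randint(0, 5) for _ in range(4)]

def symbolic_checker(candidate):
    """Deterministic rule: four strictly increasing values, no repeats."""
    return len(candidate) == 4 and candidate == sorted(set(candidate))

def hybrid_solve(max_tries=100_000, seed=0):
    """Loop until the checker accepts a proposal, or give up."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = probabilistic_generator(rng)
        if symbolic_checker(candidate):
            return candidate
    return None

result = hybrid_solve()
print(result)  # a candidate the checker has verified
```

The asymmetry is the point: you can bet your life on the checker's verdict without having to trust anything about the generator.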
Ben (25:00.292)
Yeah, just the other day I was doing testing with GPT-4's new features, with the retraining that they did. And I'm sort of known for asking really crazy, stupid questions to evaluate technology like that. A lot of it's humor-based, just to see, hey, can I confuse this thing to the point where it starts hallucinating in an amusing way, so we know where guardrails need to be, or how long it takes to start hallucinating? Because our users are going to be using these things that we're building.
It's funny that you mentioned the power plant, because my formal education is in nuclear engineering. So I started asking it. I was like, all right, here are your guardrails: I don't want a pressurized water reactor, I want a boiling water reactor, but I don't want to have to deal with an excessive amount of uranium-235 byproduct waste from this. So please try to design a revolutionary new reactor.
It took about five exchanges between me and the agent before it came up with some interesting ideas that I think were about 400 years off in technology and R&D. My favorite one was basically a revolutionary new design for a fusion reactor. It's not based on a tokamak, but something that
Eric Daimler (26:06.478)
Ahem.
Ben (26:30.336)
would have stable magnetic containment within the actual confinement field at the temperatures a thorium-based salt reactor would be operating at, which is 800 degrees Celsius. And it blew my mind that it came up with that. Then I started asking it, okay, can you give me a design, like a physical layout? What would the components be? What materials would we need? And it
rapidly got to the point where it's like, we haven't discovered that yet as a species in materials science; this could take millennia to figure out. I'm like, thank you for being honest with me. But yeah, if you ask it something that you're genuinely curious about as a layperson, I think they're fantastic. Or if they've been specifically trained to do a task that is deterministic, like write code for me, a logical thing that it has millions of examples of.
But when you try to get it to come up with ideas that are valid, or to go into the weeds a little bit about something that you know about, it's pretty apparent pretty quickly: okay, you're just making things up; there's no way this is possible.
Eric Daimler (27:45.654)
Just making things up. That's not the first time you've gotten that from an LLM. Those are all fundamentally probabilistic, and so there's never going to be a solution built on LLMs, that I can see, that you're going to be willing to bet your life on. It's in the technology. And it's funny that you brought up nuclear engineering, which is probably the first complex system that required computation for its operations,
Ben (27:48.672)
Yeah. Yeah, that's.
Ben (28:01.388)
Right.
Ben (28:15.157)
Oh yeah.
Eric Daimler (28:15.93)
the first manifestation of something that we can't reason about. It's too complex. I think now, just a generation or two later, more and more of our world is bumping up against those sorts of limitations, where these areas are just too difficult to reason about. Power plants: too difficult to reason about. Airplanes: too difficult to reason about. Rockets: too difficult to reason about. And you need AIs
to come help humans manage these systems. And the framework is perfect for thinking about nuclear engineering. Semiconductors are another one. When we had the Pentium FDIV disaster back in the 90s, there were only, I think, about 3 million or 5 million transistors on the chip when that error was discovered. So Intel had to go back and redo
Ben (29:08.929)
Sounds about right.
Eric Daimler (29:15.726)
the discipline that they applied to their formal methods. And only that way could we scale to where these things now have, what, three billion, five billion transistors on them, something like that. Had they not scaled formal methods in semiconductor design, it wouldn't be a linear progression of errors that we would experience, right? As you know, and this audience would know, it's an exponential explosion of possible errors. So that's the future: you need a mix.
Ben (29:35.744)
Oh yeah.
Eric Daimler (29:43.05)
When you're doing exploratory work, LLMs can be fine. If you're doing protein folding, LLMs can be great. But for scaling expert systems, when experts are required, such as in nuclear engineering, there's no substitute. That's what we need.
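The "exponential explosion of possible errors" Eric mentions can be made concrete with a back-of-the-envelope calculation (not from the conversation, figures illustrative): the number of input patterns for a circuit doubles with every input bit, which is why example-based testing stops working at scale and proof-based formal methods do not.

```python
# Why formal verification scales where exhaustive testing cannot:
# the input space grows exponentially with input width.

def combinations(n_bits: int) -> int:
    """Distinct input patterns for an n-bit-wide input."""
    return 2 ** n_bits

for bits in (8, 32, 64):
    print(f"{bits}-bit input: {combinations(bits):,} patterns")

# Even at a billion tests per second, exhaustively checking a single
# 64-bit operation would take centuries; a proof over the design is the
# only tractable check at that width.
years = combinations(64) / 1e9 / (3600 * 24 * 365)
print(f"~{years:.0f} years at 1e9 tests/sec")  # ~585 years
```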
Ben (30:00.596)
Yeah, when I was up at the nuclear power training unit in Ballston Spa, New York, they have a system set up there. And I'm going to try not to talk about classified information, but basically it's a simulator that they use to design the next generation of core. It's also used to train operators: hey, here's how you operate a reactor safely. It's just a computer, but we're treating it like it's the real thing, so don't do anything stupid.
But during the testing phases, they're evaluating through these computers. And I remember going to this interactive setup. It's in this massive building, but the room that you sit in looks exactly like the engine control room in a submarine. So it's very familiar; it looks exactly like what you operate when you're on watch. When you're training students, you come over and then you either do training, or you're asked to come in to help
the research scientists who are designing the next generation of core. They reconfigure everything in the back with this massive setup; it looks like something out of a sci-fi movie. It's really cool. But behind that control room there's a massive supercomputer, truly massive, that's running the simulation. And I ended up asking a couple of the scientists, I'm always curious, asking people the wrong questions, but I was like, well, how do you actually figure out what the power is going to be?
I was like, hey, let's shut everything else down; I just want to try some things. And they're like, yeah, go ahead, we'll turn off the recording and everything. I was like, if I just shim the rods out as fast as I can at high power and then just start turning coolant pumps off, how does this computer simulate that? And how do you know that we can set different power levels over time based on the rate of rod withdrawal? And they're like, well,
we know how that is because the physical reactors are built based on that model. And I was like, wait a minute, the supercomputer is designing what? Like the shape of things in there, and channel dimensions? And they're like, it's simulating the amount of polish we need to apply to the metal that goes into the intake into the core, and the exact dimensions, to the micrometer, of where all this stuff needs to be. And they're like, furthermore,
Ben (32:22.14)
this supercomputer figures out, when they're building the fuel rods, within fractions of a millimeter, where to put each pellet of which size and mass. And it blew my mind thinking about that. I'm like, how long does that take to compute? About a month to do a simulation on this hardware, and this was back in 1999 or 2000. But yeah, when we start thinking about
those processes around something that important: you get that wrong, you get Chernobyl, right? A reactor that's not designed with safety controls that can prevent another Chernobyl disaster. But if we start applying those same principles with what your company is researching and working on, and I'm a huge believer in it, by the way, if you start applying that to other use cases that people haven't traditionally thought of, I think that's the sort of thing that
furthers us as a species more than anything else.
Eric Daimler (33:19.862)
Hmm. Thank you, we agree. There's a lot of opportunity that gets uncovered when you bring these experts together with different views. And what is missed in the media's portrayal of AI is that there are multiple valid perspectives. You can't have one golden rule about a golden source of truth among these experts. You can have a nuclear engineer and a mechanical engineer and a civil engineer and a geologist, and they all have
Ben (33:22.734)
Yeah.
Eric Daimler (33:48.694)
different stances on the same thing. I'm thinking of a particular use case we have, also in energy. You have to actually have the AI generate the consensus among them, because if you have the engineers work out a manual consensus, first of all, they're going to take months; second of all, you're going to lose the nuance that you spent money and time collecting
from all of these engineers. They just have different valid perspectives. That's the future of collaboration: having these multiple experts using the AI to collaborate. That's what Conexus is working on for a variety of complex applications.
Ben (34:32.136)
So if we have a mixture-of-experts AI system that human experts are interacting with, and this is a really stupid question, so bear with me: what happens if we start applying this to things that are not of the STEM realm, and we start applying it to things that touch the very fabric of human nature, and the mixture of experts starts
Eric Daimler (35:01.227)
Ahem.
Ben (35:01.616)
making decisions that simulate, basically, a conscience, and starts challenging us as a species? How do you think that would define the future of humanity? Saying, hey, we're relying on the decisions of these systems, and they're telling us to de-escalate this thing that we're doing. Don't go to war, don't create this chaos, don't kill each other;
that's bad, that's not good for our furtherance. If we put ideals and goals into these systems over time, preservation of humanity, be beneficent to people, what happens when these systems understand a more ideal goal for humanity and start aligning us to it? What do you think human nature's reaction to something like that would be?
Eric Daimler (35:51.467)
Yeah.
Eric Daimler (35:55.234)
Yeah, if I could answer this a slightly different way, I think this issue about consciousness or sentience is actually, it can find a better framing. And the better framing comes in two parts, we'll say. You know, one part is...
Synthesize unlikely in our lifetimes and the reason is because Fundamentally semiconductors are deterministic even if we have probabilistic algos built on top of them They're fundamentally deterministic talking to neuroscientists a fair amount about this The conclusion really is that the difference between we'll say semiconductors then plants then animals then humans is that the The way in which to have sentience is probably most suggestive from some biological component It's probably no accident that our brain
and our nervous system work in unison. And we have a deterministic nervous system and a probabilistic brain. That suggests that a sentient being is probably going to have more of a biological expression than anything we see today. So we probably don't want to be too concerned about these computers, as we now know them, becoming sentient. The other part of that
argument is that there is a lot of bad stuff that can happen with just deterministic computers. We don't have to look very far: we have many examples of how we can get manipulated, and that's going to get worse with the technology that's already in front of us. We don't even have to imagine sentient beings doing something beyond that. So we need to look at what's
already right in front of us to be able to address what's in the future. You know, Nick Bostrom, the philosopher who talks also about the danger of sentience, kind of the existential danger of sentience, has this concept that we can dedicate some amount of our time to thinking about this existential risk. And I agree with that. Some amount of our time, some amount. And that can be as...
Eric Daimler (38:08.918)
much of a ham-fisted approach as having a massive off switch somewhere. But there are probably more reasonable, nuanced approaches, such as being able to audit code: experts like those in your audience having some access to proprietary commercial code before it goes into production, or doing it in a zero-trust way, the way we do credit scoring. That's a reasonable short-term solution to some of the applications of AI that we can apply today,
to keep us from going down a bad path.
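Eric's point that probabilistic algorithms still run on deterministic hardware can be made concrete with a small sketch: a "random" algorithm seeded the same way produces byte-for-byte identical results every run. (This toy Monte Carlo estimator is purely illustrative and not anything Conexus builds.)

```python
import random

def noisy_estimate(seed: int, n: int = 10_000) -> float:
    """Monte Carlo estimate of pi from random points in the unit square."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

# A "probabilistic" algorithm on deterministic silicon: same seed,
# same pseudorandom draws, same answer, every single time.
a = noisy_estimate(seed=42)
b = noisy_estimate(seed=42)
print(a == b)  # True
```

The randomness is entirely simulated by a deterministic pseudorandom generator, which is the distinction Eric draws between today's chips and a biological substrate.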
Michael (38:43.07)
What does biological mean?
Eric Daimler (38:47.862)
So in the context of sentience: these entities will not be chip-based. They will have some chemical manifestation more analogous to our nervous system and our brain than a computational manifestation, a computational equivalent of our brain.
Michael (38:56.318)
Really?
Michael (39:13.222)
And you think they would plug into computers or would they be a sort of a separate entity altogether?
Eric Daimler (39:18.25)
Man, you are so getting outside of my expertise. My PhD is in computer science and not in biology. So I am channeling other experts. I, you know, there's other people that have said that, you know, the next war or the next big war with robots is gonna be difficult because we will have a hard time distinguishing them from us. You know, that's an interesting way to think about how we might start augmenting ourselves with new devices that kind of transform our being.
Michael (39:22.507)
Fair.
Ben (39:23.032)
Ha ha.
Eric Daimler (39:48.07)
into something that today's humans may not recognize. That's an interesting line of inquiry, I think. But bringing it back to today, I'd say, is solving real problems. What we're trying to do is bring together massively heterogeneous complex systems to have us all just make better decisions. That's what we need to do today. We don't want to be dropping stuff on the floor that we spent time and money collecting.
A lot of people are misled about the degree to which they're taking advantage of AI because they have these fancy dashboards or they have these fancy data lakes. But we often talk about throwing all your books in a library and then sorting by color. It doesn't help you retrieve the data. That's what you need to be thinking about and using it with speed. That's solving today's problems. That's what we're working on, kind of short to medium term problems.
Michael (40:47.302)
Got it. And then I have one more and I'll kick it back over to Ben. I know he has 1 million. Um, we were talking about something that was really fascinating to me in that hallucinations are not innovation. What is the difference in your guys' opinion?
Eric Daimler (40:52.974)
Hahaha
Eric Daimler (41:08.086)
The difference between hallucinations and innovation. With hallucinations we're making up facts, I guess, right? That's not something we're terribly interested in, unless you want to do a creative exploration. Ben even talked about bringing up new ideas as something that LLMs are manifestly good at in a variety of ways. We don't really yet know what the best use cases for those will be. Everybody's still exploring. Even a decade ago, we had thought
we would first see an automation of manual tasks, then white-collar tasks, and then creative tasks. The current thinking is that it might actually go in reverse. We don't know.
Ben (41:49.948)
Yeah, to add to that, from my perspective from some of the places I've worked before: when I was at Samsung. It's funny that you brought up semiconductor chips, because that's what I was doing. When I was working there, we were going from the 45 to 32 to 28 to 20 nanometer manufacturing process over the five years that I worked there, and for each of those processes,
When going from 45 to 32, you're taking everything you learned in 45 and you're saying, I have all this data, petabytes of data. It's a ridiculous amount of manufacturing data that you have of running the true production process and you have snapshots of where you started at initial day, day zero production, and then all of the changes that you had to do to that manufacturing process across the 1600 processing steps.
and three months of time to create one single wafer. You know where you had to make those improvements. But humans have to go in and say, all right, so what did we change on 45 nanometers from start to when we got 95% production yield? We started with 1% yield or 10% yield. So that list of changes, they're not binary. It's not like, hey, we're going to have
we started doing this thing, or we changed this one parameter in this one recipe from a thousand watts down to 950 watts and that fixed this problem. It's never like that. It's a probabilistic distribution of that change and how it affects potential downstream changes. And because of those interactions, when you're evaluating all of that, there's an entire team at these factories called integration engineering. And this is what they do: they
look at this stuff and they build the design of experiments, where they have their own, call them boats of wafers, 25 wafers in a boat. And with those 25 wafers, you run specific experiments through a design of experiments process and say: what was the actual impact? I'm going to do metrology on all of these, cut them up, do tunneling microscopy, and
Ben (44:16.668)
you collect all the data you can to understand the interactions between these major changes within a certain range of processing. And that's what you're talking about, Eric, is like, how do you leverage all of that knowledge and, you know, use that probabilistic modeling to say, what should we do next? And that process from 45 to 32, that's nine months of time. And there's 300 humans involved in that.
That's just the manufacturing. That's not designing the chip. That's a whole different team of a couple hundred people that do that. So having AI agents that allow you to shortcut that human capital investment, which now there's tools that they're using for this type of stuff. And I've talked to a couple of manufacturing groups while I've been at Databricks, they're like, oh yeah, we're using this PyTorch model that does this. And then we're using this. You know,
basically Markov chain simulation to determine what our DOE should be. I always get interested in that. Like, yeah, I understand the technology you're using is really old, but what did that do for your engineering process? How many weeks did that save you? Usually they're like, no, it saved us months of time, because before we were using spreadsheets to figure this out and running stuff manually. Yeah, I 100% agree that this is the future. The world is going to catch up to...
Eric Daimler (45:26.114)
Ahem.
Ben (45:44.628)
realizing that they need all of this stuff. Because if you don't do this... this isn't a small incremental change. This isn't like, oh, I now have a fancy model that runs a chatbot, my customers are going to be delighted. That's not going to disrupt your industry. But if you take what we're talking about right now and apply it to your business properly, that's stuff like shaving years off of processes, getting to that right answer in days or weeks.
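As a toy sketch of the simulation-driven DOE Ben describes, candidate process changes can be ranked by Monte Carlo simulation before committing real wafer boats to them. Every number here (the candidate names, yield shifts, baseline, noise) is hypothetical, and this is far simpler than anything a real integration-engineering team would run:

```python
import random

# Hypothetical candidate process changes -> assumed mean yield shift.
CANDIDATES = {
    "lower_rf_power":   0.04,
    "longer_etch_time": 0.02,
    "new_photoresist":  0.07,
}
BASELINE_YIELD = 0.10   # hypothetical day-zero yield
NOISE = 0.03            # hypothetical process noise (std dev)

def simulate_yield(shift: float, trials: int = 5000, seed: int = 0) -> float:
    """Average simulated yield over many trials; trials stand in for wafer boats."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y = BASELINE_YIELD + shift + rng.gauss(0.0, NOISE)
        total += min(max(y, 0.0), 1.0)  # physical yield is clamped to [0, 1]
    return total / trials

# Rank candidates by simulated yield, best first, to prioritize real DOE runs.
ranked = sorted(CANDIDATES, key=lambda c: simulate_yield(CANDIDATES[c]),
                reverse=True)
print(ranked)
```

The point of the sketch is the workflow, not the model: cheap simulation narrows down which expensive physical experiments are worth running.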
Eric Daimler (46:12.418)
It's not just shaving the... I love what you're saying. You're shaving time off. You're obviously shaving a great deal of cost off. You're making sure you're not going to introduce new errors from these manual processes. But often you can discover new opportunities because of the speed with which you are now adapting to this new world.
I love how you're talking about Excel. I think the world would get freaked out if they realized how much of large businesses ran on Excel still today in 2023. It's really quite weird. We work with a couple of airplane manufacturers that have formal methods for defining a fuselage or a wing and an engine, but they do not have a way of bringing those together in a formal...
dependable way. So they just have to test and test and test and use these Monte Carlo or Markov simulations and test and test some more. And then they still have failures that can often be catastrophic. How that manifests often is that different engineers or different components could have this common term called vibration, for example. It's really important to know whether vibration from the engine to the wing is additive.
Or do they cancel each other out? And then from the wing and the engine to the fuselage: additive, or do they cancel? That's the nature of these different interpretations across systems, which just can't be fully represented in a simulation alone. That's where we're going with the scaling of formal methods enabled by category theory, which then gets combined with these probabilistic methods to make sure no new errors are introduced from manual processes,
and to radically speed up these transformations to uncover new opportunities.
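The vibration question Eric raises comes down to relative phase: for two equal-frequency sinusoids, the combined peak amplitude follows directly from the phase difference. A minimal illustration, with made-up unit amplitudes (this is standard physics, not Conexus's method):

```python
import math

def combined_amplitude(a1: float, a2: float, phase_diff: float) -> float:
    """Peak amplitude of A1*sin(wt) + A2*sin(wt + phi) for equal frequencies."""
    # Standard phasor-addition identity: sqrt(A1^2 + A2^2 + 2*A1*A2*cos(phi))
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(phase_diff))

# In phase: the engine and wing vibrations reinforce (additive).
print(combined_amplitude(1.0, 1.0, 0.0))                  # 2.0
# Out of phase by pi: they cancel each other out.
print(round(combined_amplitude(1.0, 1.0, math.pi), 10))   # 0.0
```

The failure mode Eric describes is two engineering teams each modeling "vibration" correctly in isolation while the system-level question, additive or canceling, lives in neither model.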
Ben (48:04.16)
So how much of this is going to be open sourced?
Eric Daimler (48:07.714)
So Conexus AI has an open source component. That was a requirement of the spin-up from MIT. So CQL is open source, and everybody can go check that out today. If you want the fast version, then that's commercial.
Ben (48:23.388)
Right. And given the integrations that are currently in place... it's kind of a loaded question: what are the plans for this to become truly disruptive in industries across the board?
Eric Daimler (48:40.93)
We're seeing the most complex organizations adopt this first. So energy, aviation, supply chain, manufacturing: those are the places bumping up against this increasing volume of errors. In one case, we have this large energy company that said: yeah, Eric, it's lovely that you save us $58 million a year
Eric Daimler (49:09.518)
and cut our time to return on capital from six months to a couple of weeks. But what we really cared about is that you eliminated what we discovered were 86 meetings, created at the leadership level, from an error that could have been prevented had we used your solution. Because there's no amount of money that can make up for those 86 meetings for leadership and the people right below them. That's the number that...
that I think might be our path to greatness.
Michael (49:44.27)
Yeah, that's compelling. Wow. 86.
Ben (49:44.328)
Awesome. Yeah. For those listeners who aren't aware of certain industries we've been talking about, that I've worked in: a lot of people are working in software, at a SaaS company, or they're doing data science work at a company in the private sector. You're interacting with human customers. If you ship some bad code
Eric Daimler (49:46.272)
Hahaha
Ben (50:13.508)
or something kind of goes wrong, it's a code reversion. You push a new version, you get production back up and running, it's a couple of days. You'll maybe have to write a retrospective on what went wrong. You get a slap on the wrist maybe, or you get assigned a bunch of work to go and make sure that never happens again. When something like this happens with the design of something in the DoD, or in the semiconductor industry, or
a power plant, those incidents are handled by regulators. The government comes in and sends federal agents to start asking questions. You could be looking at an investigation period that lasts many months, and it's not just two or three people that have to talk to them. That's the head of the site, and maybe the head of the entire organization, sitting down.
Eric Daimler (50:45.425)
Hmm
Eric Daimler (50:49.531)
Mmm.
Ben (51:08.148)
You could get subpoenaed to talk in front of Congress and explain what went wrong. So it's a big deal. And yeah, eliminating the errors, or the near misses, from even being a possibility: that's what I'm most excited about with this technology getting out into the world, at the highest level of ubiquity possible, as you said. Because the smarter our agents are, the smarter the tools we build to help us do better things in a better way,
It's just going to open us up to capitalize on the one thing that we're all resource constrained on as humans, which is time.
Eric Daimler (51:48.951)
That's the big deal. Yeah. We want to leverage what's in our heads and our collaboration with other humans. That's what we want to use AI for.
Michael (52:01.082)
So Eric, I have another question. We've been talking a lot about the vision of the AI industry and how it will have impact. In your perspective, what do you think is going to happen? It sounds like we're still going to have a lot of human-in-the-loop interactions. Some will be automated, but really it's more like augmenting human functionality and leaving humans to do the fundamentally human thing, which is creativity and thinking.
Beyond that, what else do you think is going to happen?
Eric Daimler (52:34.218)
I would say that a little differently. Because right now, and for a while, we've been augmenting ourselves. We augment ourselves with calculators; these days I don't even have a calculator. We augment ourselves with Excel. We augment ourselves in a variety of different ways. I think the three choices that I articulate are that you either need to participate
in your own automation: you need to be making the implicit explicit, and then essentially turning that into code to automate. Or you're going to become a niche player, an artisan, which is cool. There's actually a lot of opportunity for artisans and niches in the world. The first path means
continually automating your own tasks, participating in your own obsolescence in a way, and then repeating the process to find new value-add. If you don't do one of those two things, what you're going to find is you're just going to be slapped one day. The world is moving faster. We talked about the data growth. But what's really distinguished in the character of daily experience is the abruptness of change. Jobs have changed for generations, but you used to be able to think of them in terms of generations:
the elevator operators, the switchboard operators. But you begin to get a hint of the increasing abruptness with which jobs can change when equity market trading floors begin to vanish. The floor of the New York Stock Exchange is now just a tourist attraction and a backdrop for media. That's the nature of digital change: when people like me (this is what I did for a while) began using the ML of the time
to automate what a treasury bond trader did, it didn't work. It didn't work. And then it didn't work. But then, as soon as it did work, my boss didn't need to wait until Friday afternoon to fire the staff. It just works, it's done. Not in a generation or a year or a month or a week. It just happens. And that's the nature of digital transformation. So with those three choices, you need to be thinking about how to make your knowledge explicit,
Eric Daimler (54:59.33)
a little more like machine-readable code, to the point where it then gets automated, and then repeat the cycle for ourselves. That's the prescription I have.
Ben (55:09.352)
Yeah, I usually answer that exact question, when people pose it, by asking: the people who decided to continue to manufacture and produce iron horseshoes, how happy were they in 1941?
Ben (55:28.324)
Right? If you look at the 1890s, there were loads of those people. There were loads of blacksmiths out there that could reshoe horses, you know, farriers. And by the time the automobile became the primary mode of transportation in North America, the smart people were like: I'm going to learn how to fix those things. They started tinkering, got one, took it apart, put it back together.
They became successful auto mechanics, or they went and did something else. And humans are incredibly resilient about doing that and adapting to those rapid paces of change. It's the people that are holdouts, with that Luddite mentality, those are the ones that struggle the most. But people always kind of come around. If you look through any major technological advancement throughout history, everybody's got to eat. So, you know,
Eric Daimler (56:26.358)
You know, people all come around, and I like to be optimistic in that regard. But boy, it is really hard as people get older to acclimate themselves to a new reality. As forward-thinking as I like to think of myself as being, I don't regularly go on TikTok. That's a different generation. And so there's that whole
framework that says: any technology that was built before you're 15 is background; anything developed between the time you're 15 and 30 is something you can build a career around; any technology built after you're 30 is against the law of nature. That's a real thing, I think, for people. So the place where people are willing to experiment, I think, has a lot to do with their age. But I really do encourage experimentation
Ben (57:07.07)
Hahaha
Eric Daimler (57:20.638)
with these augmentation technologies in whatever form they take and the current trend is towards LLMs.
Michael (57:27.518)
Do you build that into your everyday life? Like, do you have an hour a week where you just learn about the cutting edge and play?
Eric Daimler (57:34.298)
So I have a couple of different practices, but I sit on a couple of boards of directors, and I will tell other people on the board that they need to have a tab open with ChatGPT to just constantly experiment with what they can use it for. Because they'll distinguish what's hard and what's easy and begin to then think about what can be automated and how to integrate that into a larger system, how they need to rethink.
their processes. That's the way I initially approach it, kind of at the leadership level.
Michael (58:10.13)
Got it. Cool. Yeah. Well, I know we're almost out of time. We didn't even get to half of my questions, but that was semi-expected. So I will quickly summarize and kick it over to Eric for any next steps. We talked about a lot of different things. There were a few tips. One tip was: when communicating, try to tell stories. If you want to use farm animals, that's a great option, but if not, just telling stories is a great way to
have people understand, in layman's terms, what is going on. Moving on from the tips, there was lots of high-level philosophical discussion. One thing that absolutely hit for me was that there's going to be this sort of divergence between artisans and people who effectively augment themselves. So if you're going to stay in, let's say, the technology industry, you're going to need to learn to use these tools, and if you don't learn, you will become obsolete.
It's kind of a fact. But another route is you can become more of a creative artisan, and choosing that path is sort of up to you. Regarding sentience: computers are deterministic and you need probability, so we're going to be looking at more of a biological tech stack to create sentience. AI will impact command and control. This was really interesting as well. So instead of just taking humans out of the loop and maybe
firing off nukes (maybe not), AI will be used to augment decision-making and sort of help with that process. There was lots more, but Eric, if people want to learn more about you or your work, where should they go?
Eric Daimler (59:50.17)
Conexus.com is Conexus AI's website. We represent a different type of generative AI, one that is deterministic: a generative AI you can bet your life on. That's what Conexus AI does. Conexus.com.
Ben (01:00:07.548)
Yeah, and if you're really looking to nerd out, they have some great papers on the Conexus research page. Go to the main page and look for the MIT logo; you'll see the link there. And have fun reading them. They're really good.
Eric Daimler (01:00:15.062)
Thank you.
Eric Daimler (01:00:25.254)
That is a bottomless repository. So yeah, people can definitely reach out if they want to know more about category theory and its manifestations in a commercial enterprise.
Michael (01:00:36.006)
Sheesh, what a mouthful. All right, well, I think that's about it. Until next time, it's been Michael Burke and my co-host. And have a good day, everyone.
Ben (01:00:44.48)
Ben Wilson. See you next time.