ChatGPT and the Divine - ML 105

"Any sufficiently advanced technology is indistinguishable from magic." Today, Michael and Ben talk about the broad implications of ChatGPT and similar algorithms. Expect to learn about...

Show Notes

"Any sufficiently advanced technology is indistinguishable from magic." 
Today, Michael and Ben talk about the broad implications of ChatGPT and similar algorithms. Expect to learn about...

  • The difference between AI and ML
  • General Artificial Intelligence
  • Some personal opinions about the overlap between "the divine" and AI

Transcript


Michael_Berk:
Hello everyone, welcome back to another episode of Adventures in Machine Learning. I'm your host, Michael Berk, and I'm joined by my co-host,
 
Ben_Wilson:
Ben Wilson,
 
Michael_Berk:
and today we do not have a guest, which is very sad, but frankly I'm ready to get down on a panelist episode and talk about something that I'm wildly underqualified to talk about. Ben will explain his experience with this stuff as well. But yeah, what we're going to be talking about today is God. Long pause. Uh, so in, like, the recent ML world, ChatGPT has sort of shaken up what a lot of people think ML can do, and everybody is like, oh my God, it's artificial general intelligence, here is the Terminator coming, we've got self-driving cars, we've got all this amazing technology that's AI-driven. And so last week Ben and I were just talking about AI, and God is, like, a relevant topic right now, so we're going to get into that and hopefully not offend anyone. But we do apologize. Um, so yeah, sound good to you, Ben?
 
Ben_Wilson:
Yeah, and to preface this, maybe we shouldn't use the term God, because that has connotations with actual religion in the world today. What if we just use the term "the Divine"? And what we refer to there is not so much organized religion, because we're not going to be touching that, but more the concept of what humanity was fifteen or twenty thousand years ago, when they were living on the plains of North Africa and, you know, figuring out how to master the creation of fire and create pottery and early cave paintings, and the concept of the stuff that we've discovered of people wanting to record things for their own culture's sake, for history. Like some of the cave paintings you see: there are pictures of certain types of animals, there are pictures of humans standing around fires, pictures of hunting, pictures of, uh, meteorological events. There are things that are of a nature that they considered divine. Like, why do the herds come at this time of year? Why are we able to hunt now, but can't at this other time of the year? Why do the rains come? Why do things grow at certain times of the year? Why is it that when we look out at a certain point on the horizon, the sun rises at this exact time at this exact place every single year? It seems like there's an outside force that's doing that. And that's what we mean by the Divine or God: people attributed that to an outside force that is beyond their comprehension. We no longer do that. We know what the substances are, we know what animal migration patterns are, we know about the seasons. We have data that explains all of that, so it's no longer a mystery. You know, we don't worship a sun god anymore.
 
Michael_Berk:
Yeah, and to be clear, it makes a lot of sense why people did that. I find the Big Bang pretty freaking unsatisfying. Like, time started there? What was there before? How can there be something out of nothing? And as a human, I find that's fundamentally a difficult concept to accept. But I think patience... either, like, you can attribute it to God, a hundred percent a valid approach, but also being patient and humble and saying, well, maybe we just don't know yet. That's the opinion that I subscribe to, and it's proven itself throughout history. But who am I to say?
 
Ben_Wilson:
And the thing that brought this up last week was after our recording, where we just sat for like a half an hour chatting about this. And what was it that we were saying? I think we quoted Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from God.
 
Michael_Berk:
Yep.
 
Ben_Wilson:
And we were talking about the history of humanity, about how theology started. You know, this worship of the mysterious, and attributing it: somebody's got to be in control, because if nobody's in control, what does that mean? That chaos reigns? Turns out that a lot of that is true. Like, why did this flood happen? Why did this tornado come through and rip through our village? Uh, why did my cousin die while fighting a wild boar? Well, it is just pure chaos and chance, which we now know that stuff is, you know. We could debate, like, you know, a butterfly flapped its wings in Java, which caused, you know, differences in air currents and stuff, and created this hurricane. Yeah, we could talk about chaos theory like that, I guess, but I don't want to. Um, but when you're talking about not understanding something and having, you know, chaotic situations rule events, it's like you said: it's very unsettling. Like, we become powerless. So as a defense against that, we can say, no, there's something out there that just needs sacrifices from us. Whether that's fealty, whether that's, you know, sacrificing animals like they did back in the day, or dancing around a campfire and praying for rain or singing for rain. And people believed at that time that that was how you made that happen, or increased the chances of it. We now know that it's got nothing to do with it.
 
Michael_Berk:
Probably. I don't know.
 
Ben_Wilson:
But then, the Arthur C. Clarke thing is: have we, as a species and a culture, descended this slope of uncertainty, of not understanding why things happen, to a point where there are enough people and enough scientific understanding that we say, now we have a lot of stuff kind of figured out? And as humanity has gone down that slope over millennia, you have fewer and fewer people that believe in a higher power, that believe in the Divine. And now there are people who are straight-up atheists, who are like, no, I don't believe in any of that, I believe in science. Um, but what happens when those people get exposed to something that they no longer understand, or that they're incapable of understanding? Do we have a new divine that gets created?
 
Michael_Berk:
Yeah. So that sets up the rest of this podcast. But before we get into it, I just wanted to quickly define some topics. This is going to be, again, very non-technical, and hopefully it will be fun. That's always the goal, but it should also provide some context about a lot of the societal and clickbait issues that are currently present. And so let's sort of define some of the core things in the machine learning world, starting off with artificial intelligence. Ben, what is it?
 
Ben_Wilson:
Complete misnomer, a blanket term for marketing purposes that encompasses everything from basic statistical analysis all the way to ludicrously-hard-to-comprehend-scale large language models, such as the parent of the big hype right now, GPT-3 Davinci. Something that, I don't know, I can't remember how long it took them to train that thing, but it was many, many months on a lot of hardware, and they've been working on it for years. So AI as a general term, when I hear it spoken by lay people, I know that, okay, they're talking about something that computers are doing that is not a human interacting directly with a computer to make a decision. It's just this blanket term. When I hear practitioners use it, I kind of care a little bit more: what do you mean, exactly? Usually it's a marketing term that people are using. Um, but specificity is where you actually want to know, like, well, what are you talking about? What type? Our next topic, or our next title, is usually what I ask about.
 
Michael_Berk:
Yeah. So, and I think artificial intelligence initially stemmed from a valid definition or valid concept, which is this concept of general artificial intelligence. So something that has almost human-like intelligence, that can make decisions, think critically, do its own thing. And frankly, nothing out there can do that
 
Ben_Wilson:
Correct.
 
Michael_Berk:
that I'm aware of. Maybe somewhere deep in a military basement there's an AGI, but, uh, nothing available to the public that I'm aware of. So that brings us into the second topic, which is sort of this more tangible and specific definition called machine learning. Ben, what's machine learning?
 
Ben_Wilson:
It's another blanket term, but it's a little bit more specific. It's the idea of using data to train an algorithm that can make inferences about novel data that's given to it. And this topic, or this concept, is used in every industry on the planet. It's something that, I mean, arguably we do as well every single day as humans. We're making inferences, we're collecting data about our environment or about things that are going on, and we're making micro-predictions. Like, hey, I'm driving down the road at fifty miles an hour, somebody swerves into my lane and I see that five hundred feet out, I know I should turn my wheel to move away from that. You're making a prediction or an inference. You're inferring that your car is going to smash into this other car. So the data that we're receiving is similar to what we can do with statistical models. We can say we have a bunch of historical events that have happened, or known data where we're controlling how it's going to go into this algorithm. We train it, and we make sure that we have a ground truth, for supervised learning, which is the vast majority of things that are out there, and it will adapt. It will optimize itself to minimize the error in prediction against known historic data, and then we just use it against data that we don't know the answer to yet, and act on those inferences.
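To make that train-then-infer loop concrete, here is a minimal supervised-learning sketch in Python. The dataset, model choice, and numbers are placeholders for illustration, not anything discussed in the episode.

# A rough sketch of supervised learning: fit to data with a known ground truth,
# then make inferences on data the model has never seen.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "historical" data with known labels standing in for the ground truth.
features, labels = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold some rows out to play the role of novel data we only see later.
train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

# The algorithm optimizes itself to minimize prediction error on the known data.
model = RandomForestClassifier(random_state=42)
model.fit(train_features, train_labels)

# Inference on unseen data; in a real system these predictions are what we act on.
predictions = model.predict(test_features)
print(f"Held-out accuracy: {accuracy_score(test_labels, predictions):.3f}")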
 
Michael_Berk:
Got it. So, sort of to summarize: AI is sort of a blanket marketing term that encompasses a lot of different things, and then machine learning is a more specific set of algorithms that usually looks to fill in missing data, whether it be
 
Ben_Wilson:
Hm.
 
Michael_Berk:
supervised or unsupervised. Is there anything that you have seen brought into the machine learning definition that actually isn't machine learning?
 
Ben_Wilson:
Sure. I've heard people refer to something like statistical models, where you're building, like, a simple regression. You're just fitting a line basically on the data, and then people say, well, that's ML. It's like, not really. It's just a statistical model. You're just using basic algebra, you're minimizing, you know, the error between the fit line and the data, and you're just recording what that regression equation is. Uh, which is different than generalized linear regression, which I would classify as machine learning, because you have to have, you know, training data and test data, and it's iterating through and constructing a more robust model, generally. And then there's also stuff that I would classify as business intelligence and analytics. I've seen people call that ML, where somebody's, you know, plotting a Pareto curve based on data and finding where outliers happen. They're like, oh, we need to look into this data here and see what's going on. It's not ML, it's using statistics. ML also uses statistics, but it's not something that the machine is autonomously doing. You're just feeding it data and saying, analyze this and tell me what the relationship is here. So I do see that quite a bit, but not with practitioners, generally. I've never heard a statistician be like, oh yeah, I'm doing ML with this. They're usually like, no, I'm just using statistics. Basic stuff. Sometimes
 
Michael_Berk:
Yeah,
 
Ben_Wilson:
it's super advanced, but it's not machine learning.
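A small sketch of the distinction Ben draws here, in Python. The data is synthetic and the split between "just statistics" and "machine learning" is simplified for illustration; it is not code from the episode.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1.5, 200)

# "Just statistics": fit a line with ordinary least squares and record the equation.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"Descriptive fit: y = {slope:.2f}x + {intercept:.2f}")

# Closer to ML: hold out test data, let the model select among candidate
# regularization strengths, then judge it on data it was never fit to.
x_train, x_test, y_train, y_test = train_test_split(
    x.reshape(-1, 1), y, test_size=0.25, random_state=0
)
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(x_train, y_train)
print(f"Held-out R^2: {model.score(x_test, y_test):.3f}")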
 
Michael_Berk:
Yeah, yeah. So now that we have those definitions out of the way, let us continue to the first topic, which is pretty freaking mind-blowing, and that is: the Turing test is complete.
 
Ben_Wilson:
Hm,
 
Michael_Berk:
It has been, like, blown out of the water by ChatGPT. I've been playing around with it and it writes better English than me, being totally honest. Its grammar is clean, its logical sentence structure is clean, it's concise, it's thorough. Uh, I definitely could not differentiate it from a human. So Ben, what are your thoughts? Is this AI?
 
Ben_Wilson:
No. Um, it's clever, I'll give it that. It does certain things in such a way that it meets the criteria for the Turing test, definitely, but it exceeds the criteria in so many different ways that it's far too easy to tell that this is an algorithm generating stuff. So to take your example that you just gave: it writes more concise, better English than I do. Um, I would a hundred percent agree with that. For the vast majority of people on this planet that speak English, it is really good. However, I can tell it to modify how it's writing English. I can tell it to give me an answer about a general topic and then say, I want this written in a way that would be good for Twitter, and it will rewrite it, and it looks legit, looks like a human wrote a tweet on this exact topic. And then I can say, I want you to write a short-form post for LinkedIn for me, this is what a typical LinkedIn post looks like. It'll rewrite it. Not a single grammar flaw, the topic is correct, it looks legitimate. And then I can tell it, hey, I want you to write a long-form essay on this topic, and I want at least seven pages, and I want you to delve into these technical terms, and I want you to write it in a white paper format. It'll do that. But it will do all of those things in seconds, and it can adapt, and you can say, I want you to write it in a different style. I want you to write this as an argumentative statement, or I want you to write this as reference material. It will rewrite that stuff in, you know, half a second. No human on the planet can do that, so it's very easy to be like, all right, this is pretty clever. There are flaws, of course, which I'm sure we're going to get into here in a minute.
 
Michael_Berk:
Yeah. So then why is this not artificial intelligence? It looks like a human, talks like a human, it can seemingly think critically, it can learn what's missing.
 
Ben_Wilson:
I mean, it is AI. It's not AGI, it's not general intelligence. So if this bot, or its parent, Davinci, were actually general artificial intelligence, it would be able to do things that we can just natively do. If it was a superintelligence, it should be better at everything that I'm good at. It's not. Um, and that was what we were discussing before we started recording. If I take a topic and interact with this LLM that I don't know a ton about, which, honestly, is how most people would be interacting with stuff like this... it's why they're putting them into search engines, because they're great at this. If you just have cursory knowledge about something, or you want to learn more about some topic that you don't know a lot about, it presents the illusion of a super-intelligent person who knows a lot about this thing. However, it's different if you start asking it deep details about things that you know an exceptional amount of information about, and have life experience with, assuming that it's, you know, something that it could have been trained on, so it's got to be knowledge that can be written down. The first thing that I tried to do with it was, of course, you know, trying to get it to write some unit tests for me. Not that I'm that lazy of a developer, but I was just curious. Um, I had just finished writing a function, I had the chat window open, and I was like, I wonder if this thing could actually write a pretty good unit test, because I've written seventeen of them so far today, and I wonder if it can just write this one real quick. So I asked it to do it, and I looked at the output, and I'm like, damn, bro, that's not right. And I just told it that. I was like, I don't think that that's correct. What you're actually testing here is something that's always going to fail. So why are you expecting to catch an exception that's always going to throw, based on what you just told it to do? And it responded in a very polite way and was like, I'm very sorry, you're right, I'm going to rewrite that for you. And it rewrote it, and it adapted, and it got it correct. And I looked at what it had generated and I'm like, actually, why did it use a for loop here? Like, why didn't it just use a comprehension? And I told it that. I was like, you know, the runtime of this would be much faster if you used a list comprehension here in Python instead of a for loop. It responds back, hey, you're completely correct, let me fix that for you. And it rewrote it, and I was like, yeah, that's exactly how I wrote this, because I wrote it before I started asking it. And it matched. I mean, the variable naming wasn't the same, but then I got it to fix that too. I was like, I don't like abbreviations in my variable names, can you write them out? And it wrote it, and it was a character-for-character match to what I had written for the codebase. I was like, that's awesome. And then I asked it, can you make this even more efficient? And it went and generated something that took me about five minutes to look through to understand what the heck it created, because it created something that used four separate partial functions that were all defined nested within one another, and the way that it was written was not intuitive to how I would expect the test to be written. It was just really complex code. But I had asked it, can you make this run faster?
I copied the code, ran it, and timed it, tested it ten thousand times. It was forty percent faster than what I had written. There's no way that's ending up in the codebase, because nobody can read it, but it's pretty clever. It can do stuff like that. So with stuff that you have a pretty good understanding of, you can kind of coax it into generating cool stuff that is useful, and then when you send it the next thing that you want it to do, it's pretty accurate with that, and it maintains that context pretty well. But when you tell it to do something that it's not aware of, or there's no way for it to have collected data on to learn the associations, and I use the term "learned" loosely here, it's not learning anything, it's just creating the associations in its neural network, you can start pushing it into places where it starts falling apart. And if you push it hard enough, it'll just abort your chat session and the session's closed. So I've broken it a couple dozen times, and I know a lot of other people have as well, where it's just like, I don't know how to respond to this, and then it just shuts down your session. Maybe it just timed out, it was looking up an appropriate answer and just sort of deadlocked itself.
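As an aside, the for-loop-versus-comprehension comparison Ben mentions is easy to reproduce. This is a made-up micro-benchmark, not the actual test from the episode, but it shows the kind of timing comparison he describes.

# Hypothetical micro-benchmark: the same work done with a for loop and with a
# list comprehension, each timed over ten thousand repetitions.
import timeit

data = list(range(1000))

def squares_with_loop(values):
    result = []
    for value in values:
        result.append(value * value)
    return result

def squares_with_comprehension(values):
    return [value * value for value in values]

loop_seconds = timeit.timeit(lambda: squares_with_loop(data), number=10_000)
comprehension_seconds = timeit.timeit(lambda: squares_with_comprehension(data), number=10_000)
print(f"for loop:           {loop_seconds:.3f}s")
print(f"list comprehension: {comprehension_seconds:.3f}s")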
 
Michael_Berk:
Rolling. Sorry, go ahead.
 
Ben_Wilson:
So, what's that now? It's just, I don't know if you wanted to go into the one thing that I found that it really couldn't do.
 
Michael_Berk:
In just a second. I just wanted to play devil's advocate really quickly.
 
Ben_Wilson:
Hm.
 
Michael_Berk:
So you said it's not actually learning, it's just adding associations in its neural network. What is learning, then?
 
Ben_Wilson:
Learning of knowledge is exactly that. It's learning facts and understanding their relationships to one another. But true learning is understanding, from historical context, what the repercussions are of your misapplication of knowledge.
 
Michael_Berk:
Yeah, that's fascinating. So what you're saying is that we can have essentially a data representation that encompasses stuff, but what it cannot do is then use that stuff as first principles and logically apply the next step. So, if this, then that. Is that what you're saying?
 
Ben_Wilson:
I think it can. Based on what I've tested, it does that pretty darn well. What it doesn't do well, and I don't know if there's an answer for this in current deep learning architectures, is that it doesn't have fear, because it doesn't have an ego. And even people that claim to be egoless, that are experts in things, you do have an id, an ego, and a superego that are all vying for you wanting to become a member of your community. That doesn't mean the community where you live; it means the community of your peers. Nobody wants to be seen as an idiot; everyone wants to be seen as an expert. So you have this frontal part of your brain that's acting as a filter on the knowledge that you have, the associations that you have, and it basically knows when to interject if you're about to say something really stupid, or ask a really stupid question, or just present something that's completely wrong. These models don't have an intuition of how wrong they might be, and if you push these large language models enough, they'll keep on giving you incorrect answers. You can correct them, and they adapt and start getting a little bit better, but there's no human factor, like a human intelligence factor, that's in there. To give you an example: if you and I were working on a project, and I was your mentor for that project, and I asked you to come up with the design of how to build something, and your first pass is just utter garbage. Understandable, like, that happens quite a bit when you're working on something complicated. And if I give you feedback that's like, hey, you really should have thought this through a little bit better, like, think about these thirty-seven things, and I want you to spend some more time to really think about that. What do you think your response is going to be on the next project that we work on together? You're not
 
Michael_Berk:
Hopefully,
 
Ben_Wilson:
going to get thirty seven comments
 
Michael_Berk:
yeah,
 
Ben_Wilson:
from me of, hey, you really didn't think this through. You'll get two comments, maybe, because you've learned, and that affected your ego. Whether you felt it in a positive way, like, hey, I'm learning something, I'm growing, I'm getting better, or your subconscious is like, damn, I'm such an idiot, I really screwed that up, that's not going to happen again. So that's the nature of our intelligence as humans: to have that desire for community, and that makes us create fewer errors as time goes on at a particular thing that we're supposed to be good at,
 
Michael_Berk:
Yeah, that makes a lot of sense. So basically we might have sort of the knowledge framework, but we don't have these external motivators to
 
Ben_Wilson:
Right,
 
Michael_Berk:
influence how that knowledge framework is used. That makes a lot of sense. And I'm reading a book about... I swear I'm not a nerd at all, but I'm reading a book about how it is more statistically likely that we don't see reality through our eyes than that we do. Like, it is evolutionarily advantageous to not see reality for exactly what it is, and we can't. We
 
Ben_Wilson:
Yeah,
 
Michael_Berk:
could get into those arguments. But there's an example of emotions and how they can perpetuate the life of a species or an organism, and so they have, like, a loss function that perpetuates having more of that same organism. So should we just run ChatGPT through a genetic algorithm and call it a day?
 
Ben_Wilson:
Um, I don't know how many compute resources are available to do something like that, because it takes six months to train it on, I don't know how many, tens of thousands of GPUs that thing used. Um, for that generative or genetic algorithm, how many epochs do we run? A hundred thousand? A million? We might get a really good trained model with that feedback built into it, ah, sometime by twenty-eighty. You know, that's the real risk of these things. Like, yeah, we can make it better and we can make it smarter and have it respond better to this, but the efficiency of the structure of the computation of fitting needs to improve in order for extreme improvements to these models, which there are loads of people in universities working on right now. Some incredibly intelligent people are trying to solve this and have been solving it continuously over the last fifty years: how can we make better structures for these things? Uh, because everybody is talking about ChatGPT right now, but this is not some new thing. Deep learning goes way, way back, to the nineteen sixties, and researchers have been trying, failing, and succeeding at making new structures and new ways of computing, you know, loss functions across iterations, and what are the weights that need to be shared, what is the structure of that neural network. But I still think we're a long, long, long way off from creating something that we would be able to look at as an expert, start asking questions of, get expert responses back, and have an actual true dialogue with. This thing, whatever it may be right now, you can't really do that with. I can't be working on a problem for a software implementation and have a dialogue with it where I'm getting something back. It's a novelty for me, and I think it's fun and cool and I'm all for it, but I can't ask it the questions that I would ask you, like, hey man, I'm kind of stuck on this problem. What's the best way that I can, you know, compress JSON and then make sure that I can send it over the wire and decompress it? And, you know, I'm just stuck on this design, you know, part of it. I can ask really abstract questions about something that it should have a logical understanding of, and it doesn't know how to respond. I have to teach it over the period of many hours to get that one session to be just marginally useful, because it's not creative. It's not truly thinking about stuff. It's regurgitating knowledge.
 
Michael_Berk:
How do you define creativity?
 
Ben_Wilson:
Sort of novel thought that is not grounded in a direct knowledge-based answer. It's the thing that sets us apart from computers, and from any ML or, you know, blanket-AI-term system. Humans can do that; computers cannot. I don't care what algorithm you talk about, it's not capable of doing it. Computers can play classical music; they can't play jazz. Humans can play jazz. So it's improvisation. It's using your intuition based on your knowledge and filling in the gaps and saying, I've never seen this problem before, I have no real context for what the correct answer is for this thing, but I'm going to take some guesses. And the deeper your knowledge is in that subject, the more educated that guess is, and the higher the probability that it is viable. Is it the most efficient answer? Probably not, but it will be a viable solution that you could test.
 
Michael_Berk:
So I will personally give you a Nobel Prize if you answer this: what is the data structure for intuition?
 
Ben_Wilson:
The data structure for intuition.
 
Michael_Berk:
And just elaborating a little bit. So
 
Ben_Wilson:
M.
 
Michael_Berk:
we have, let's say, a binary representation of a graph structure, and that corresponds to quote-unquote knowledge. That's sort of what a deep learning model would encompass.
 
Ben_Wilson:
I mean, it's a qubit. Intuition is a non-binary state. So we don't have, you know, a zero or a one in there; we have all possible values between zero and one. It's just a continuous, you know, threshold of values where we can say, all right, I think localization is going to be somewhere around zero point zero one one six two three. Um, and we're able to hone in on what a possible solution would be for that intuitive thought based on a gradient, not based on binary. That's why, when people have talked to me about this before, I've always sort of said, I don't think silicon is the way to go to get general intelligence, because, we don't think... you know, our brains are, to a certain degree, binary structures, but the connections between them in our own personal neural net are far more complex than anything you can do with silicon-based CPUs or GPUs.
 
Michael_Berk:
So my understanding was that neurons are binary. They either fire or they don't.
 
Ben_Wilson:
Hm.
 
Michael_Berk:
What do you mean by continuous, or qubit versus binary?
 
Ben_Wilson:
We're not talking about individual neurons, though. We're talking about, not random, but collections of them that are immutable. So we might take these seventeen that are part of this one concept, and the combination of their signals on an output would lead us to an imaginary thought of that potential condition. But in that process of thinking that through, how many clusters and groups of neurons and different configurations are we using? So instead of a deep learning neural network, where you're just passing from phase to phase, or, you know, traversing a graph of fixed-position entities where you can switch these different things on and off and provide weights to what their impact is on the next consecutive layer, I think our brains are far more complex than that. Like, orders of magnitude more complex.
 
Michael_Berk:
Interesting. Yeah, I remember a couple of months ago we had a guest that specialized in
 
Ben_Wilson:
Hm,
 
Michael_Berk:
artificial general intelligence, and he mentioned that the brain only has about ten million neurons, And
 
Ben_Wilson:
Hm.
 
Michael_Berk:
so we have the compute power to represent that right now on a not-that-powerful chip. So it sounds like it's sort of the three-dimensional structure that we're missing, because I still don't
 
Ben_Wilson:
Maybe,
 
Michael_Berk:
quite understand the difference. Like, we can recreate the neuron structure in our brains. Why is that not sufficient to have sort of intuition?
 
Ben_Wilson:
Why is it that when we put those awesome hats on people's heads and have them think about certain things, or do different things, all these different regions start lighting up to different levels? Like, hey, this person is thinking about their favorite food,
 
Michael_Berk:
M,
 
Ben_Wilson:
or they see a picture of their favorite food, and you get this one center of the brain that lights up quite a bit. And then something that they just kind of like as a food, and it's sort of the same area, but the shape is slightly different and the intensity is lower. How does that work? And we're just measuring the electrical signals from the neurons activating. We're not an electrical entity, you know. We're not using electrons entirely; there's chemistry involved, a lot of complex chemistry. I don't think that we understand how brains work quite yet, not as well
 
Michael_Berk:
Got it.
 
Ben_Wilson:
as we as a species think we do. I think that research is still active, and it will be for a very long time.
 
Michael_Berk:
Cool, all right. Well, I'll give you half of a Nobel Prize for that answer, but sounds good. So moving on, I had another topic that I wanted to chat about, which is a quote from Nick Bostrom, which states: general AI is the last invention that humanity will ever need to make. What are your thoughts on that?
 
Ben_Wilson:
Um, I disagree, for a number of different reasons. The first superficial, cursory reason is the doomsayer interpretation of that, where, like, hey, it may be the last thing that we need to invent, and then it may be the last thing that we ever invent. I don't believe in that. I don't think that's going to be a thing that happens. Um, that could happen many thousands of years later, and if humanity approaches the creation of something like that in a really stupid way, that could happen. But I guess I'm a humanist. I work with technology, but I believe in humans far more, and I believe in the spirit of humanity. If you look at history and where we've come, not just as individuals but as a culture, as a species on this planet, in such a geologically short period of time... You know, from evolving from our last known ancestor and then mixing with, um, several other hominids throughout the world, we've created this species that's able to go from, you know, living in trees and foraging for rotting carcasses in the jungle or on the prairies and picking berries, and we've learned how to do everything from animal domestication to crop domestication, to the concept of thinking through how things could go poorly in nature. So we start storing food, which allows us to create cities and creates all of these other aspects of that, not all of them great, but if you look at it on a large enough time scale, we're on this direction as a species of continuing to expand and making it so that we can have specialization of tasks, so that individuals can hone in on very specific problems that we're trying to solve as a species. And where is that going to take us in the next thousand years? Where is it going to take us in the next hundred thousand years? Who knows. But our advancements, in a lot of different ways, we're talking about technological advancements, we've done quite a bit in a short period of time. Like, we're flying to our near celestial bodies, sending people back for the second time, our second group. We're talking about colonizing another planet in our system. That's quite a bit of progress, and that isn't because we stuck with what we're good at. If we just stuck with what we're good at, we'd be really good stewards of the land. I think we would be. Our climate wouldn't be in a problem right now. We would have learned how to live harmoniously, as we probably did for thousands and thousands of years, while becoming more efficient at doing certain things. So maybe there certainly wouldn't be as many people on the planet. Um, but if we just stuck with what we were good at, we'd have really good spears and really good bows and arrows, and everybody would be really efficient at making fires. But we don't do that. We adapt, we're creative. Uh, we use our intuition to try new things. Sometimes they're stupid, sometimes they're amazing, sometimes they're paradigm shifters for us as an entire global species, for better or worse. So the idea that general intelligence is the last thing we'll have to build? I don't buy it. It's the same argument that people used about, you know, the horse and the car. Everybody was freaking out, like, oh my God, this new horseless carriage is coming. What's going to happen? What are we going to do with all of our horses? Um, there's just not as many horses around anymore, and they're certainly not, you know, being used to pull farm implements and wealthy people around cities anymore.
Cars are used for that, but people panicked about that and they got really upset, like, well, what are we going to do? What are all the people that take care of the horses going to do? They'll be fine. They found other things to do. Their children did other things. They're not grooms anymore. You know, maybe they went,
 
Michael_Berk:
They're baristas at Starbucks.
 
Ben_Wilson:
or they're university professors, or they're working in... you know, humans adapt, and we do different things over time. If you look at a short enough time period, that's when people panic, and it just so happens that that time period is usually one hundred percent aligned to a political election cycle. So in America, that four-year time gap, everybody talks about things that can be done in that short period of time, and people panic about that, because it happens to coincide with the people that everybody is listening to, the ones that are on television all the time. Uh, they're the ones talking about that stuff, like, we need to think about creating new jobs in the steel industry in Pennsylvania. Like, really? Do we really need to do that? Why don't you take some of that money and retrain these people to do something more productive for society, or just wait a decade or so, they'll figure it out on their own. Humans are mutable, and they change and adapt, and they're highly creative. So even if there's this super general intelligence that's everywhere, ubiquitous throughout society, and helping us do certain things, humanity will adapt and fill in the gaps. We'll start doing things that it's not doing. It's not like the super general intelligence is going to be doing everything possible ever, and there's going to be this massive shift in existence of, like, hey, everything just got built, everything that's ever been or that ever could be is now done. That's not how reality works. Um, so I think it will be a useful tool to work with us, and we'll come up with creative ideas, and we might not have to go through the painful process of testing them out, which could take years or decades or centuries. Think about, like, drug discovery: hey, we need to come up with this new drug that cures this nasty disease. Maybe we tell the super general intelligence, can you run a bunch of simulations and figure out what is the most promising thing to do with this, and then turn on that factory that produces, you know, test vaccines, and can you just generate a thousand of the most promising ones and we'll test them out? And the super general intelligence just puts a big thumbs up on the screen and the factory starts up. I think something of that nature, that's where we're going. But how that's done right now is ten thousand humans that collectively went through hundreds of years of schooling in order to figure out how to do that entire process, and it takes twenty-five years. Like, I think
 
Michael_Berk:
Yeah,
 
Ben_Wilson:
there are better things that people could be doing with their time. I think the people doing that right now would agree as well, like, yeah, I'd rather be solving these other problems at the same time. So it's going to be a tool for us
 
Michael_Berk:
Yeah,
 
Ben_Wilson:
as a species.
 
Michael_Berk:
I agree. I think the quote is a bit short-sighted, and I don't think that artificial intelligence, although it's impressively powerful, is much different from a car or any other disruptive technology.
 
Ben_Wilson:
Hm,
 
Michael_Berk:
It will dramatically change jobs. It will dramatically change required skill sets. It will dramatically change maybe even what humans look like; maybe we don't need to be as strong and carry as many heavy things. But there will be things left for humans to do, for instance, building the AI. It could
 
Ben_Wilson:
Hm,
 
Michael_Berk:
build itself, but I think that we should probably have some humans helping out just to make sure it's on the right path. And also art. Like Ben said, before this call we were chatting about trying to get ChatGPT to make a musical composition, and it's not amazing at it yet. And I think that one of the fundamental things about being human is those evolutionarily inspired and developed traits that influence how we use the knowledge in our brains, and maybe art from ChatGPT in a hundred years is going to be the best thing ever, but I think fundamentally human art will retain some of its qualities and differentiate itself from generated art. So I a hundred percent agree. And then also one more thing: history backs this up super, super hard. If we take a dose of humility and say, all right, well, a hundred years ago there was a similarly disruptive technology. Take the atom bomb, for example, the nuclear bomb. Everybody thought it was doomsday, and humanity has more or less survived. So I think that AI, while it's impressively powerful, is not going to be that different.
 
Ben_Wilson:
No, it's going to be another tool, and it's going to be a great one. I'm looking forward to it. And, I mean, maybe we came off as being negative Nancys on this podcast, but I think we're both pretty big fans of this type of stuff, and I'm excited about what the next iterations will be. I want to see the generation of one of these LLMs five generations from now, and then look at it and be like, hey, I'm trying to do this thing, I need a package that does this, and if I can explain it in four sentences and it can write an entire library for me, that's a big thumbs up from me. Um, so I can move on to doing other things and not do the stuff that's just time consuming, the things that need to get done but don't, you know, generate immediate value. And that's where I think we're going to move towards as a species, and that's where we continue to move. Just as you said, history has proved this out: we're always trying to become more efficient by eliminating the annoying. That's what we
 
Michael_Berk:
Facts.
 
Ben_Wilson:
do
 
Michael_Berk:
Yep,
 
Ben_Wilson:
like. Why was the car invented? Or the bicycle? Why were either of those two technologies invented? Because it sucks walking everywhere, and riding on a horse for five hours is painful. I don't know if any listeners have done that. Try taking a ride through the Australian outback for four days on the back of a horse, and you tell me how your outback ends up feeling, um, at the end of that. It is painful. Um, so people didn't want to do that. So we create these things, these technologies, that allow us to more efficiently get to the place that we're trying to go. You know, people can really enjoy riding a horse. I like riding horses, I think they're super cool and they're awesome animals, but I'd like to ride one for an hour on a beach or along a trail as recreation, and then feed it some apples and carrots at the end. Um, I don't want one, you know, in a building in the back of my house that I have to feed every morning and that I have to subject to pulling things for me constantly. Uh, yeah, cars are good for that, you know, so
 
Michael_Berk:
Agreed,
 
Ben_Wilson:
Yeah, we're constantly becoming more efficient, and that's where I see the creation of these technologies and what they're going to be doing, what they continue to be doing. That's what ML is in general. You know, think of all the applications that you've ever done in industry, or the companies that you work with now that you're helping build things: it's all to automate things, to make things simpler. It's to automate away the annoying. And that's what we do, and we'll
 
Michael_Berk:
Exactly.
 
Ben_Wilson:
continue to do that.
 
Michael_Berk:
Yeah, I couldn't have said it better myself, so I'll quickly wrap and let you all continue on with your amazing day. So today we talked about sort of the high-level power of artificial intelligence and connected it a little bit to the divine. Thought we would go a little bit more into that, but there were some other interesting topics at hand. So to summarize some core things we discussed: AI is essentially a marketing term that includes almost all modeling. General artificial intelligence is this concept of a thinking, feeling being that can make logical decisions, so there's a big difference there, and AI is overused in general. Just don't use the word AI. Um, and then machine learning is a subset of artificial intelligence, which is specifically a set of algorithms where a machine learns over iterations, but there are also other methods that are sometimes lumped into ML, and no need to get into the nitty-gritty, but that's generally what ML is. And then finally, ChatGPT is not artificial general intelligence. Sorry to break it to you. It's cool, it's powerful, it's probably going to change the face of a lot of work, and subsequent technologies built on it will definitely change the face of a lot of work, but right now it is not the Terminator.
 
Ben_Wilson:
Definitely not,
 
Michael_Berk:
Cool.
 
Ben_Wilson:
and it can't write anything but crappy pop songs.
 
Michael_Berk:
I love crappy pop songs, they're my favorite kind of pop song. All right. Well, until next time, it's been Michael Berk and my co-host
 
Ben_Wilson:
Ben Wilson,
 
Michael_Berk:
and have a good day, everyone.
 
Ben_Wilson:
Take it easy, see you next time.