Michael_Berk:
Hello everyone, welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Berk, and I'm joined by my other host...
Ben_Wilson:
Ben Wilson.
Michael_Berk:
And today we have a really, really exciting guest. This is going to be a fun talk, to say the least. Charles Simon. He's a serial entrepreneur. He founded Vectron Graphics, which was one of the world's first CAD companies. He also founded giftcertificates.com, which was then sold to AOL. He has numerous awards. But most impressively, to both me and Ben, he's an extreme sailor who has circumnavigated North America. So Charles, did I miss anything? Do you mind elaborating on any of those points?
Charles_Simon:
Well, let me start off with a little background on my early background. I started playing with computers when I was in high school, but then I went to the University of California and got a degree in electrical engineering. So my background is engineering, but even while I was at school, I was in Silicon Valley as a semiconductor test technician, that is, a lab technician. At that time you'd get the chips back from the foundry, the very first set, and people were working with microscopes, actually cutting the traces on the metal on the chips to correct for the design errors. Subsequently I got a master's degree in computer science and developed a lot of interest in robotics, but all the time I've been interested in the overall picture of whether or not we'll be able to make machines think. That was the thesis of Turing's very early article in the 1950s that was the progenitor of the Turing test, which asks not only can machines think, but how are you going to know if they do. And so that's where I've been focusing my efforts for a long time. As Michael said, I started off with Vectron Graphics and CAD/CAM. One of the things we noticed in designing printed circuit boards was that the classical algorithms really had very little to do with how the people doing manual design at the time actually worked. Now we've got machines so powerful that manual design is almost completely a thing of the past and it's all done by machine. But I found that an intriguing problem: how are we going to bridge the gap between people and machines? And I worked at a lot of different places. Sometimes between entrepreneurships I'd take jobs here and there. I worked at Microsoft for a number of years, where I managed what was at the time a brand new concept on the World Wide Web, which was initially MSN News.
MSN was their equivalent to AOL. With the Windows 95 launch they woke up and decided they actually needed to embrace the internet, and the problem with the internet was that there was not any content that anybody was interested in. So they endeavored to make their own content with MSN, which was the Microsoft Network. They entered into a partnership with NBC, and this subsequently became MSNBC. But at the time msnbc.com was in Redmond on the Microsoft campus, and the television studios were, and still are, at Secaucus, New Jersey. It was a very interesting match. It was fascinating because at the time NBC was owned by General Electric, and General Electric was a hard and fast Unix community. They would not allow any Windows machines in their shop without special permission and dispensation and forms and circles and arrows. And you can get an idea of how smoothly this worked together, because Microsoft, of course, was not going to allow any Unix machines in their offices. So my job was to manage this collection of herding cats to get the news site off the ground. Very rapidly it was the largest news site on the web at the time. We were one of the very first sites to ever break a million unique users in a day. And it was just an adventure. But we had to solve problems that you don't even think are problems today. Like: if you want to post stuff to a big website that's going to span multiple servers, how do you make sure that all of the content shows up on all of the servers at the same time, so that if you change the links over here, somebody can't get routed to another server and get a broken link? Well, we had to invent that kind of stuff. It didn't exist at the time. So that was a fascinating adventure. Later I got into medical software, and the very first piece of that was one of the very first paperless electroencephalographs.
Now, an electroencephalograph is what measures brain waves. You've seen the guy with all of the electrodes on his head: that's an electroencephalograph, and it's used for sleep studies, working on brain injuries, treating epilepsy, and a whole slew of other things. These were essentially strip chart recorders originally, like you'd imagine in a lie detector.
Ben_Wilson:
needles.
Charles_Simon:
And to make this paperless, because they just generated reams of paper, you could computerize the whole thing. Then you could do all kinds of things, like store the raw data so that you could change the filtering after the fact, and change the way you summed or took differences on the various signals in order to determine, for example, the location of the focus of an epileptic seizure. That subsequently morphed into a number of other neurological test systems: tests for carpal tunnel syndrome, tests for various forms of deafness, tests for epilepsy, and things like that. A fascinating idea. You have to understand that with an electroencephalograph, you get all of these signals coming in, and the overwhelming majority of it is simply noise. There are so many neurons firing in your brain, and each one gives off only a tiny amount of detectable signal, because there's not a lot of energy going on in your brain. As you know, the whole brain runs on 12 watts of energy; otherwise, your head would explode. But what's fascinating is that if you, for example, played clicks into earphones, and you synchronized your view of the signals from the electrodes with the clicks, and you averaged over thousands of samples, all of the noise would be filtered out and you could observe the firing of individual neurons in the chain of the process that is actually part of hearing. That's how they would determine, if you have a hearing loss, whether it's in this part of your ear, or in your cochlea, or in some part of the auditory nerve: you've got to figure out where the problem is before you can attack the solution. For carpal tunnel syndrome, typically you put electrodes on a person's hand, you measure the actual signal going up their arm, and you can calculate the amount of nerve conduction.
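The evoked-response trick Charles describes, averaging thousands of stimulus-locked recordings so that uncorrelated noise cancels out, can be sketched in a few lines. All the figures below are illustrative assumptions, not clinical parameters:

```python
import random

def record_trial(signal, noise_amp=5.0):
    """Simulate one stimulus-locked recording: a tiny evoked signal buried in noise."""
    return [s + random.gauss(0, noise_amp) for s in signal]

def average_trials(signal, n_trials):
    """Average many trials synchronized to the stimulus. Uncorrelated noise
    cancels roughly as 1/sqrt(n_trials), leaving the evoked response."""
    sums = [0.0] * len(signal)
    for _ in range(n_trials):
        for i, v in enumerate(record_trial(signal)):
            sums[i] += v
    return [s / n_trials for s in sums]

# A toy "evoked response": a 1-unit bump swamped by noise five times its size.
evoked = [0, 0, 1, 1, 0, 0]
estimate = average_trials(evoked, n_trials=10_000)
```

After 10,000 averages the residual noise on each point is around 0.05, so the bump stands out clearly even though any single trial is unreadable.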
So the point of this is that I got a real interest in what neurons could do, how neurons actually work, and what they're good for. And I wrote a little neuron simulator called Brain Simulator. I wrote the original one in 1988, and it would handle 1,200 neurons, which was a fun thing you could play with, but you can't actually do much calculation in 1,200 neurons. Then I put that aside for a while, and as you two mentioned, I did a lot of sailing. It was actually still while I was sailing that I started writing Brain Simulator II, because now we have machines where you can put a billion neurons on a computer. You get all of these multiple processors and you can get a lot of work done on a desktop. So I wrote the first version of Brain Simulator II in, when was that, 2017, and started playing, because, you know, I'm an engineer at heart: what are the kinds of things you can do with neurons that you can't do with other things, what is easy and what is difficult, what can you do with a small number of neurons and what takes a lot? In building a neuron simulator, you can think of it as being a lot like, and electrical engineers will understand this, a digital logic simulator, where you can build up a circuit and it will tell you what it does. A lot of computer science guys have never seen a logic simulator, but it's a really handy way, if you're going to build a circuit, to verify that it's actually going to do what you expect before you get too far down the road with building it. So now I have a logic circuit simulator that, instead of working with logic components, actually works with neurons, and it supports a number of different models of neurons. And then you play with this for years, asking: can you build logic gates out of neurons? The answer is yes, you can build NAND gates out of neurons. And if you can build a NAND gate out of neurons, you could potentially build an entire CPU out of neurons.
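The NAND-from-neurons claim can be illustrated with a toy threshold neuron. The weights and threshold below are one arbitrary choice that happens to work, not a biological model:

```python
def neuron(inputs, weights, threshold):
    """A threshold neuron: fire (1) iff the weighted input charge reaches threshold."""
    charge = sum(i * w for i, w in zip(inputs, weights))
    return 1 if charge >= threshold else 0

def nand(a, b):
    # A tonic "bias" input always fires; the two signal inputs are inhibitory.
    # The neuron fires unless BOTH inhibitory inputs are active:
    #   (0,0) -> charge 2, fire; (1,0) -> charge 1, fire; (1,1) -> charge 0, silent.
    return neuron([1, a, b], weights=[2, -1, -1], threshold=1)
```

Since NAND is functionally complete, chaining such neurons can in principle build any digital circuit, which is exactly the point about the CPU, and also why the speed argument that follows matters.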
And then you do a little arithmetic on how slow this CPU would be, and you decide that it's not a particularly useful path, but it's an interesting direction of thought. So you go on thinking about, well, how do neurons actually learn things? The general consensus, and this is true in the machine learning world too, is that a lot of information is represented in the weights of the synapses between the neurons. Now, neurons have a very simple function. A neuron accumulates a charge. If the charge exceeds a threshold, it emits a spike. The spike travels down the axon to the synapses that connect it to other neurons. This process goes on in your neocortex, which we think is the thinking part of your brain, and which comprises 16 billion neurons. So there's a lot of capability, but the process of thought, we believe, happens in the neocortex. And so the question arises: how can you build thinking out of these simple components, and what would the circuitry be like? There are some things that neurons are really good at. For example, a single neuron can tell you the distinction between the relative spike rates of two different incoming signals. So, for example, with the signal coming from the retina in your eye, if two adjacent pixels are firing at different frequencies, there's likely a boundary there. Neurons are really good at detecting boundaries. On the other hand, if you want to know what the absolute firing frequency of an incoming signal from your retina is, it takes a separate neuron for each different level you want to be able to detect. And so you get some interesting optical illusion cases, because you can only detect on the order of 10 different shades of brightness, but you can detect the boundaries between different shades of brightness with much more precision than that. So your mind is having to cope with: yeah, I can detect this boundary, but both sides of the boundary seem like they're about the same color.
And you get a lot of interesting illusions out of that.
Ben_Wilson:
So inference of a sort, like we have to because of the information volume in our own capabilities.
Charles_Simon:
Exactly. It occurs to me that every optical illusion is a clue to how your mental processes work. Some of them are 3D, and they tell you how your 3D system works. There are color illusions, and they tell you a lot about how your color perception works. So although they're interesting in themselves, you can look at them as a way to build up a model of how the brain is working. And then we get to machine learning, and this is the machine learning channel, and you begin to actually try to implement machine learning algorithms in these neuron models. And for a variety of reasons, you can't. I should say that theoretically it's no problem, because theoretically the typical neural network of perceptrons is what is called a functionally complete set, which means you can build any digital circuit out of it. And likewise, the neuron forms a functionally complete set. So from a theoretical perspective, you can build anything out of anything. But in biological neurons, the neurons are so slow that if you start thinking about building machine learning kinds of algorithms, the systems are simply going to be too slow to be useful. To give you a little example: there are a couple of different ways you might encode a signal in a sequence of neurons. Remember, the neuron is simply emitting pulses. And let me give you a specific on how slow neurons are. The neuron's maximum firing rate is about 250 hertz: 250 times a second, once every four milliseconds. And once the process of creating a spike is initiated, the neuron can't accept any other input until it recovers. Recovery is all moving ions back and forth, changing the orientation and location of ions. We'd call it electrochemical, but it's not an electronic device.
And the signal of a neuron, the spike, actually travels down the axon to adjacent neurons at about one meter per second. That's a moderate walking speed. We sometimes think of this as being electronic because we can measure the electrical charge, but that's only a symptom. The actual function is chemical, and it is many, many orders of magnitude slower than an electronic circuit. In fact, when I was an undergraduate, it was the time when telephone switching circuits were giving way from being made out of electromechanical relays to being transistorized. So there was a slew of cast-off electromechanical relays available, and a number of my cohorts and I in electrical engineering built a CPU out of telephone relays. Now, with a telephone relay, if you apply power to it, the switch actually closes some amount of time later, because of the mechanical lag of building up the magnetic field and actually moving the contacts around. That lag on the relays we had was 12 milliseconds. When you stop to think about it, that means the neuron, which fires in four milliseconds, is a whole lot closer to being a mechanical relay than it is to being a transistor. It puts a different perspective on things to realize that your brain is working on billions of what could be 1940s components. It is just so slow. So to get back to why machine learning can't work in neurons: you say, well, I'd like to represent a signal in a number of pulses. Let's imagine that I can produce 10 pulses in 40 milliseconds, and I'm going to say 40 milliseconds is my time slot. Then I can produce a value between zero and nine in that period of time, depending on how many spikes I emitted. But again, this is going to take 40 milliseconds, and I only get 10 values. And so if you start to try to build a backpropagation network with only 10 values,
you find out that the algorithms simply don't work. They rely on floating point numbers and very small differences between one signal and the next. Because you have this competing-signal system where the winner wins out, the whole underlying process of machine learning is dependent on having relatively high-resolution numbers. And they simply can't exist in neurons, because you just don't have the time to represent them.
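The arithmetic here, a 250 Hz maximum rate times a 40 ms window giving ten distinguishable values, can be made concrete. This is a sketch of the quantization argument only, not a model of any particular neural code:

```python
MAX_RATE_HZ = 250   # neuron's maximum firing rate (~one spike per 4 ms)
WINDOW_MS = 40      # one time slot for transmitting a value

LEVELS = MAX_RATE_HZ * WINDOW_MS // 1000   # = 10 distinguishable spike counts

def encode(value):
    """Quantize a real value in [0, 1) to a spike count within one window."""
    return min(int(value * LEVELS), LEVELS - 1)

def decode(spikes):
    """Recover the (coarse) value from a spike count."""
    return spikes / LEVELS

# Two synapse weights that differ by less than one level are indistinguishable:
# far too coarse for the tiny gradient steps backpropagation depends on.
```

For instance, 0.51 and 0.59 both encode to five spikes, so any learning rule that needed to tell them apart within one window simply couldn't.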
Michael_Berk:
So I have a quick question, and
Charles_Simon:
Sure.
Michael_Berk:
this is something that has been really interesting as you've been talking, which is it sounds like our brain represents knowledge and learning through volume of neurons. They're not that fast, but it accounts for complex ideas through sheer volume and complex infrastructure or connections between these sets of neurons. Whereas on the ML side, we have a lot fewer neurons, but they're super, super fast. So this is sort of where the two structures diverge. Do we know how knowledge is represented in our brain from that perspective?
Charles_Simon:
We can speculate, and I speculate because I've fooled around with neurons a lot, and something that neurons are actually pretty good at is a graph structure. So if you were to build something like a knowledge graph, which instead of relying on multiple values of signals is essentially digital. You know, red is either a color or it's not. Or perhaps something has got three different states of how likely it is in your brain. And so if you were to consider that your brain is largely a graph, and we're talking not about an Excel spreadsheet kind of graph, we're
talking about a mathematical graph, which is nodes and edges, I assume.
Ben_Wilson:
Mm-hmm.
Charles_Simon:
You've been down the road of nodes and edges. But one of the things that's kind of interesting, I mean, you guys have got computer backgrounds, so let's imagine in a graph I can tell you that yellow is a color, and I can tell you that blue is a color, and then you can tell me that yellow and blue are colors. So we've got this knowledge graph kind of idea going, which has got two-way links in it. But now I can subsequently give you new information like: Foo is a color and Bar is a color. And if I say name some colors, you can say yellow, Foo, and Bar. You got a single instance of a piece of information, and you told me the reverse of that information so quickly that it could not possibly have been learned in a machine learning sort of sense. It has to be stored on a graph, in a graph sort of sense. A slew of examples like that leads me to believe that there's really no alternative to there being a graph, that your mind is mostly graph. And so that's what we're building at our end: a graph structure. And do we know that's how it exists? Well, no. One of the problems is that if you go to a knowledge graph, like a WikiData graph or a ConceptNet graph or one of these others, of course there's a whole slew of data in the nodes. Well, in a neuron, there's nowhere to put the data. So you end up with a graph, but it contains nothing but context. You might have a neuron that represents yellow, and then you have a whole slew of neurons for how you say the word yellow, or hear the word yellow, or write the word yellow, or related concepts. So you end up with this internal concept, but you only know what it means because of what it's connected to.
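The yellow/Foo/Bar exchange, where a single stored fact immediately answers the reverse query, is exactly what two-way links buy you. Here is a minimal dictionary-based sketch; the class and method names are invented for illustration:

```python
from collections import defaultdict

class KnowledgeGraph:
    """A minimal graph with two-way links: adding one fact
    immediately supports queries in both directions."""
    def __init__(self):
        self.forward = defaultdict(set)   # "yellow" -> {"color"}
        self.reverse = defaultdict(set)   # "color"  -> {"yellow"}

    def add_is_a(self, thing, category):
        # One insertion maintains both directions of the link.
        self.forward[thing].add(category)
        self.reverse[category].add(thing)

    def categories_of(self, thing):
        return self.forward[thing]

    def members_of(self, category):
        return self.reverse[category]

kg = KnowledgeGraph()
for c in ["yellow", "blue", "foo", "bar"]:
    kg.add_is_a(c, "color")
```

One call to `add_is_a("foo", "color")` is enough for `members_of("color")` to return Foo: a single exposure, no training pass, which is the behavior Charles argues rules out a pure weight-learning account.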
Now obviously, if we had labels on the neurons, we'd know a whole lot more about how brains work, because you could open up the brain, read the labels, learn that that's the yellow neuron, and then you'd have a much better idea of what it's connected to. We'd know a whole lot more than we do, but we can't.
Ben_Wilson:
So that concept of the many-to-many relationship of a graph, where you say, okay, I want a color, that's a one-to-many relationship, give me information about that. But these neurons connect in that chain to other references of how we conceptualize short-term and long-term memory, but also our senses and emotions. If we show somebody the color yellow, that activates not just knowledge centers to understand what that color means, but also how we feel about it. When we see a piece of clothing sitting on a rack somewhere that we might buy, we see that color and we associate it with something. And the idea of applying that, and all of the sensory input, to a system that could be general-intelligence focused, that's fascinating to me: understand the world around you as humans do, and maybe you can adapt that to building silicon-based general intelligence.
Charles_Simon:
Exactly, and you used the magic word, understand, which is something that AI absolutely lacks.
Ben_Wilson:
Yeah, not capable, can confirm.
Charles_Simon:
And we have some ideas about that. But let me add two more bits to the explanation here. I showed you that your graph has to have two-way links. It also has to have huge amounts of redundancy, because we know that brain cells fail all the time and it doesn't seem to make any difference. You can cut out almost any little part of the brain, and it's very difficult to say that any particular little cell means anything. I built a graph out of neurons with eight neurons per node, because you've got to have multiple neurons in order to get the reverse links, since synapses are one-way.
Ben_Wilson:
Hehehe
Charles_Simon:
So if you want to have these reverse links, you've got to have multiple neurons in a node. And I speculate, given the redundancy and the probability that the neurons in your brain are not an optimal design, that it is much more likely that we've got 100 neurons per node, so that you get some degree of redundancy and a little bit of capability. And then you do some division. With only 16 billion neurons in your neocortex, that says there's an absolute maximum of 160 million nodes in the graph. And 160 million nodes is something that we're doing today in knowledge graphs; it's actually something you can put on a big desktop computer. You get yourself a terabyte of RAM and a multi-core system, and you can do this on a desktop. So the amount of information in your brain is limited. Some people say, oh, well, we've got to know a whole lot more than that. But when you start adding up the things you actually know: you might know 30 or 40,000 words that represent maybe 100,000 things, and you can recognize a thousand different faces, et cetera, et cetera. When you're counting in thousands, it's pretty easy to get to a million, but it's pretty tough to get to a hundred million, let alone a billion. So my assumption from this study of neurons leads me to believe that the human level of intelligence is something that doesn't even require a supercomputer. It's something you can run on a desktop today. You've got an absolute maximum of, say, a hundred and sixty million nodes. But maybe in a million nodes you can build a system that's smart enough to work at McDonald's. I don't know. We just have no way of knowing how much information it takes to do a particular task. The machine learning guys have got a great process for solving problems that you don't know the solution to. But the way your brain works seems to be different.
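The division Charles does can be written out explicitly. The bytes-per-node and edges-per-node figures below are pure assumptions, added only to show that the footprint lands in desktop territory:

```python
NEOCORTEX_NEURONS = 16_000_000_000   # ~16 billion neurons in the neocortex, as cited
NEURONS_PER_NODE = 100               # Charles's speculated redundancy per graph node

max_nodes = NEOCORTEX_NEURONS // NEURONS_PER_NODE   # 160 million nodes

# Rough memory footprint, with assumed figures of ~64 bytes of bookkeeping
# per node plus ~10 edges per node at 16 bytes each:
bytes_per_node = 64 + 10 * 16
total_gb = max_nodes * bytes_per_node / 1e9
```

Under these assumptions the whole graph comes to roughly 36 GB, comfortably inside the "terabyte of RAM on a big desktop" envelope he describes.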
You know, there may be a possibility of creating general intelligence in neural networks. I'm not sure. But we already have neural networks that require supercomputers and huge amounts of computation, and it's reasonably clear that you can build knowledge graphs, and information in knowledge graphs, a whole, whole lot more efficiently. So that's the direction I'm going. And I should point out that the Brain Simulator I wrote in 2017 and 2018 is an open source project, and we're always thrilled when people say, well, this isn't machine learning, but it's pretty interesting, let's go check this out and try it. And we have a number of people doing that, of course. It doesn't have the immediate practical application that most machine learning systems do, so I want to set expectations reasonably. But it does support a lot of really interesting capabilities. If you're interested in how neurons actually learn, how they actually store information in the weight of a synapse, how many different values a synapse can actually take on, and a number of things like that, the Brain Simulator is a pretty good approach to exploring that. And all of this leads me to believe a number of things about general intelligence. So I've said, with 160 million nodes maximum, that's something we could put on a desktop. Maybe I'm off by an order of magnitude and I need a server rack. Either way, the capability in terms of processing power is something that exists in today's technology.
Michael_Berk:
So let me just recap this. We currently have the processing power, in your opinion, to make Terminator, or at least some sort of general-intelligence-esque type of thing. Maybe Terminator is a strong word. Didn't mean to go there.
Charles_Simon:
Let's address Terminator as a second topic in a moment.

Ben_Wilson:
Ha ha ha.

Michael_Berk:
Sounds good. Yeah. But we have the raw compute power, whether it be on a giant desktop or a server rack, to sort of simulate the sheer number of calculations that our brain does.
Charles_Simon:
Yeah.
Michael_Berk:
But we don't have the structure. We don't have the algorithms that are similar enough to our brain to represent knowledge and how information travels. So it's just about us being creative and figuring that out.
Charles_Simon:
That is exactly what I'm thinking. And add to that that human DNA, now that they've decoded it, is 750 megabytes of data. When the Human Genome Project was underway, this was an unfathomably large amount of data, but it isn't anymore. Of that 750 megabytes, we don't know how much is actually devoted to the structure of your brain, but we might speculate that it's 1% or 10% or something like that. And so the entire brain might be represented in a program that's as small as 7.5 megabytes. That is not an unmanageably large amount of programming. The problem is just what you point out: we don't know what to write. What that means is that somebody might have the insight at any time. The program itself is not particularly complicated, and it will run on today's hardware. If we knew what to write, we'd write it, and we'd have a real AGI in a small number of years. And so
we should be on the lookout. Yeah.
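The genome back-of-the-envelope Charles runs, 750 MB total with a speculated 1 to 10 percent devoted to brain structure, works out as follows. The fractions are his speculation, not measured figures:

```python
GENOME_MB = 750                       # decoded human genome size, as cited
LOW_SHARE, HIGH_SHARE = 0.01, 0.10    # speculated fraction encoding brain structure

low_mb = GENOME_MB * LOW_SHARE        # lower bound of the estimate
high_mb = GENOME_MB * HIGH_SHARE      # upper bound of the estimate
```

That puts the "brain program" somewhere between 7.5 MB and 75 MB, small by the standards of modern software either way, which is the point of the argument.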
Michael_Berk:
Yeah, we should be on the lookout. There are two components, though, that I'm missing. The first is that it seems like a human baby is trained for years. It goes through a training process every day, and we're continuously training. A one-year-old can't really do much, but it's taken a full year of life to train. So that's the first question: how do you think about training? The second is that each region in our brain has a specialized function, and we don't really incorporate different components of a neural network to have a specialized function. Maybe they automatically get that through backpropagation, but we don't have designated emotion areas. So how do you think about both of those things?
Charles_Simon:
Okay, well, let me address the first one, and then you'll remind me what the second one is, because
Michael_Berk:
Sounds good.
Charles_Simon:
after I go down that road. To put this in the right perspective: you're absolutely correct about how a baby learns. If, for example, I knew exactly what to write to make a human brain, and I wrote it, say it takes me three years to write it, then it takes me another three years to get it to be as capable as a three-year-old. And three-year-olds are fun and all, but they're not particularly marketable. So it's going to be another 20 years before I've got something that's smart enough, or experienced enough, or whatever you want to call it, to actually be useful. And we're not particularly interested in waiting around for 25 years to find out if our idea was actually correct. So we take software shortcuts, and we have a ton of available software shortcuts that will get us to the same level of functionality in a whole lot less time. I'll give you a couple of examples. Take 3D depth perception: seeing different angles and different relationships between things with your two eyes. It's likely that your brain spends tens of millions of neurons doing that, or you can do the same thing in two or three lines of trigonometry. Likewise, you go to Boston Dynamics and they have this wonderful, fluid robotic motion. The part of your brain that coordinates your muscles and gives you that kind of fluid motion is your cerebellum: 56 billion neurons. So the Boston Dynamics guys have emulated the capabilities of 56 billion neurons in a couple of microprocessors on a robot. Now, I don't really know what the capabilities of those processors are, but it's nowhere near 56 billion neurons. So we get to take a lot of shortcuts in software. Another obvious shortcut: assuming we decide that the brain is a graph, we can put labels on our graph nodes. And so we get to do all of these things that get us started toward cutting that time from 25 years down to three.
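The "two or three lines of trigonometry" shortcut for depth perception might look like the law-of-sines triangulation below. This is a geometric sketch under idealized assumptions; real stereo vision also has to handle calibration and feature matching:

```python
import math

def depth_from_stereo(baseline_m, angle_left_rad, angle_right_rad):
    """Triangulate the depth of a point seen by two 'eyes' a baseline apart.
    Each angle is measured at that eye, between the line toward the other
    eye and the line toward the target."""
    # Third angle of the eye-eye-target triangle:
    apex = math.pi - angle_left_rad - angle_right_rad
    # Law of sines gives the range from the left eye to the target:
    range_left = baseline_m * math.sin(angle_right_rad) / math.sin(apex)
    # Project onto the direction perpendicular to the baseline:
    return range_left * math.sin(angle_left_rad)
```

For a point one meter straight ahead of eyes 6 cm apart, both viewing angles are `atan2(1.0, 0.03)` and the function recovers the one-meter depth, the work Charles suggests the brain spends tens of millions of neurons on.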
And also, we don't have to have 100% functionality. As I said, if I get a machine that's smart enough to work at McDonald's, that's a pretty major step. But the key is that we're approaching this from a general point of view, rather than the narrow-AI, application-specific angle that typically comes out of machine learning. And what was question two?
Ben_Wilson:
It was actually a question that I was going to ask, but Michael beat me to it: about the integration of this graph-based approach with some of the advancements in deep learning. Do we have systems that we could potentially couple with that knowledge graph, which has maybe 50 additional labels on each node, so we can create all of the connections that we need? Then we're going to simulate a vision processor that says, hey, we're going to take raw video feed, run it through the most state-of-the-art CNN out there for classification, and say, here are all of our probabilities of what we think this thing that we're looking at could be, or everything that we see within the scene in front of us. And then the same thing with audio: we're going to send that through something, speech-to-text effectively, and encodings from, like, a BERT model. Would you couple all of those and simulate our brain systems that process audio versus process video?
Charles_Simon:
Well, thus far, as we do brain surgery and studies of brains, you can take a section of almost any area of neocortex and it looks the same as any other neocortex area. Now, we can say there's obvious specialization, because clearly the visual cortex is at the back of your brain and it does vision, and the auditory cortex is over here and it does hearing, et cetera, et cetera. And I should point out that we know those areas largely from people who have had brain injuries, so it's not particularly fine-grained knowledge. You say, well, a brain injury in this area makes you blind, or a brain injury in that area makes you deaf; that's not particularly specific about what's actually going on. But all of these areas appear to be the same, and we don't know how the specialization occurs. It's likely that the specialization occurs simply because that's where the incoming signal shows up. Your optic nerve shows up at the back of your head, so that's where the graph, let's call it, that processes visual information forms. And when you're processing visual information, where machine learning, as you're pointing out, is all probabilistic, the brain is likely to be much more concrete. As you're trying to do a pattern match, whenever you get a match on any specific feature, you get a spike or a series of spikes. These fan out to a wide variety of different things that might be recognized, with different synapse weights, and the guy who fires first wins.
Ben_Wilson:
Hmm.
Charles_Simon:
You give it five pulses, and if nobody fires, you didn't recognize it. If somebody did, the guy who fires first is the winner. So it's error tolerant and it's fast, which is what's important, because we know your brain isn't designed for the purpose of being conscious; it's designed to help you survive.
Ben_Wilson:
Mm-hmm.
Charles_Simon:
And one of the things that's really important in survival is to be able to think quickly. So you're out in the woods and you want to be able to recognize the tiger before it recognizes you.
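Charles's "first to fire wins" scheme can be sketched as a toy winner-take-all recognizer. Everything below, the feature names, synapse weights, threshold, and five-pulse budget, is an illustrative assumption of mine, not Brain Simulator code:

```python
# Toy "first to fire wins" recognizer, loosely inspired by the spiking
# scheme described above. All names and numbers are illustrative.

def recognize(feature_spikes, candidates, threshold=3.0, max_pulses=5):
    """Each candidate pattern accumulates weighted input per pulse.
    The first candidate to cross the threshold wins; if nobody fires
    within max_pulses, nothing was recognized."""
    charge = {name: 0.0 for name in candidates}
    for pulse in range(1, max_pulses + 1):
        for name, weights in candidates.items():
            # Sum the synapse weights for the features that spiked.
            charge[name] += sum(weights.get(f, 0.0) for f in feature_spikes)
            if charge[name] >= threshold:
                return name, pulse  # winner-take-all: first to fire
    return None, max_pulses  # five pulses, nobody fired: no match

# Features seen in the scene, and two patterns with synapse weights.
seen = {"stripes", "four_legs", "tail"}
patterns = {
    "tiger": {"stripes": 1.5, "four_legs": 1.0, "tail": 0.5},
    "house_cat": {"four_legs": 0.5, "tail": 0.5},
}
winner, pulses = recognize(seen, patterns)
```

Note that unlike a softmax over class probabilities, there is no normalization step here: the race itself is the decision, which is what makes it both fast and tolerant of missing features.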
Ben_Wilson:
So this brings up an interesting idea. Would you say that people with exceptional abilities in things that are not normal, like somebody who's just a savant, we would call them a savant because they have
Charles_Simon:
Yeah.
Ben_Wilson:
some skill set, that their brains have somehow wired themselves to create additional graph connections between centers in the brain where, for the vast majority of humans, those connections don't exist? And if that's true, or if the theory is plausible, in the system that you're working on, could you preemptively do that and say: I'm forcing a connection between these locations, there isn't a high density there right now, but I want to make it to see what happens?
Charles_Simon:
Yeah, and you know, it's interesting that you bring that up. In one of my lectures I say, can you say your phone number? And the answer is yes. Then I follow it up with: can you say your phone number backwards? And most people cannot. I think this is an interesting observation, because where we're talking about sequential information in the phone number, one digit has a pointer to the next digit in the sequence, but
Ben_Wilson:
Mm-hmm.
Charles_Simon:
there are no back pointers. It's interesting. You have a system with forward pointers but no back pointers. But you will typically have a pointer to the beginning of the sequence. So if I say "had a little lamb," you know I'm talking about Mary, because that's the beginning of the sequence. And I was giving a lecture at a high-IQ society, and I say: here's a phone number, here's how you have to remember it. And on the next slide I say, can you remember the phone number from the last slide? Yeah, this guy remembers it. Can you say it backwards? And he simply recites it backwards.
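The forward-pointer-only idea maps neatly onto a singly linked list. This is a sketch of my own to illustrate the point, not a claim about how the brain stores digits; the O(n²) backwards walk is roughly why reciting in reverse feels so much harder:

```python
# Sequence memory with forward pointers only (illustrative model).

class Digit:
    def __init__(self, value):
        self.value = value
        self.next = None  # forward pointer; no back pointer exists

def memorize(number):
    """Store a phone number as a forward-linked chain and return
    the pointer to the beginning of the sequence."""
    head = prev = None
    for ch in number:
        node = Digit(ch)
        if prev is None:
            head = node
        else:
            prev.next = node  # one digit points to the next
        prev = node
    return head

def recite(head):
    # Easy: follow the forward pointers from the start.
    out, node = [], head
    while node is not None:
        out.append(node.value)
        node = node.next
    return "".join(out)

def recite_backwards(head):
    # Hard: without back pointers, finding the digit before node X
    # means restarting at the head and walking forward each time,
    # O(n) per digit, O(n^2) overall.
    result, end = [], None  # `end`: stop just before this node
    while end is not head:
        node = head
        while node.next is not end:
            node = node.next
        result.append(node.value)
        end = node
    return "".join(result)
```

The anecdote about "had a little lamb" is the same structure: the phrase works as a key only because there is a pointer to the head of the sequence.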
Ben_Wilson:
Haha.
Charles_Simon:
And it was, you know, a phone number that he'd seen once on a single slide in a presentation, and he retains it. But to your original question of whether we can force different areas to be smarter: the answer is, I would expect we could. I would expect I could say I'm going to allocate 100,000 nodes of a graph to words, allocate a million to visual objects, and allocate a million to locations, spatial locations. Or I could allocate a million nodes to chess playing, and then I'd have a whole lot of chess-playing ability without having to really learn it. I mean, the learning would happen automatically.
Ben_Wilson:
But that really gets me thinking, because at my age, I'm in my mid-40s, I picked up guitar recently and I've been learning. The last time I played was in my early 20s, and the learning process is different now. I think that's because of life experiences, and because I've gotten into computer software and had to learn all these new skills a little bit later than most people do. My brain just feels like it's more malleable. It can learn things quicker now.
Charles_Simon:
Mm-hmm.
Ben_Wilson:
But I've seen other people who, through the process of learning an instrument and playing music, experience it differently, and the way they express themselves is very unique. This discussion is making me think that whatever centers of the brain are involved with
you know, operating your hands and breathing and listening
Charles_Simon:
Heh.
Ben_Wilson:
and, you know, that feeling of the music. I think those connections must be very widespread throughout the brain. But there are people for whom the listening part of music and the execution part are directly linked. I've seen videos of people, I've met people in person, who can listen to just a 10-second clip of a song one time and then play the rest of the song in real time while listening to it. And a lot of people would say, oh, those are savant characteristics; they won't be able to do certain other things that other people can do, because it's almost like they've devoted more of their brain's processing power to that one thing. But it makes me think that the work you're doing could potentially uncover learning patterns. And what if you force that function and say,
Charles_Simon:
Yeah.
Ben_Wilson:
OK, we know that these work. Let's connect these up. What
Charles_Simon:
Yeah.
Ben_Wilson:
happens?
Charles_Simon:
I would also speculate that you're better at learning the guitar now because you had that exposure when you were 18 or 20, which laid a bunch of groundwork in your brain that you're now able to capitalize on. Because one of the really cool things about memory in your brain is that, again, because it's chemical, ions just sitting somewhere require zero energy. So you can have memory that lasts forever in the physical structure and physical placement of various ions in your brain, and you come back 40 years later and it's still there. And another characteristic that tells you your brain isn't like machine learning is, of course, that the brain runs on 12 watts. That means the overwhelming majority of your neurons are not ever doing anything. They're just sitting there waiting, because every time you fire a neuron it requires a little bit of energy, and if you just do the arithmetic, you figure out that on average a neuron can only fire once every couple of seconds. Well, all of the neurons in your visual and auditory cortex are firing all the time as you're getting all of this input. And so, to average things out, huge swaths of your brain have to be doing absolutely nothing.
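That 12-watt arithmetic can be sketched as a back-of-envelope calculation. Charles only gives the power budget and the conclusion; the neuron count and per-spike energy below are rough order-of-magnitude figures assumed for illustration:

```python
# Back-of-envelope version of the 12-watt argument.
# Constants are rough, assumed order-of-magnitude figures.

NEURONS = 86e9               # ~86 billion neurons in a human brain
BRAIN_POWER_W = 12.0         # total power budget cited above
ENERGY_PER_SPIKE_J = 2e-10   # assumed energy cost per spike

# If the whole 12 W went into spiking, total spikes per second:
spikes_per_second = BRAIN_POWER_W / ENERGY_PER_SPIKE_J

# Average firing rate per neuron under that budget:
avg_rate_hz = spikes_per_second / NEURONS
print(f"average rate ~ {avg_rate_hz:.2f} Hz per neuron")

# Sensory neurons can fire at tens of Hz, so if busy regions run
# at, say, 20 Hz, only a small fraction can be active at once:
busy_fraction = avg_rate_hz / 20.0
print(f"at 20 Hz, only ~{busy_fraction:.1%} of neurons can be busy")
```

Under these assumptions the average rate comes out under 1 Hz, i.e. roughly one spike every couple of seconds, which matches the "huge swaths doing nothing" conclusion.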
Ben_Wilson:
I wonder if...
Charles_Simon:
All of these pieces work together and tell me that although we can't predict when, AGI is just around the corner. And I see it as inevitable, because when you talk about general intelligence, it actually requires a lot of different pieces. If I say I can add some little piece that will make your Alexa smarter, everybody's going to love that. If I say I'm going to add this little piece that makes your vision system smarter, everybody's going to love that. And so all of these little pieces come together, and everybody's going to be happy with every little piece as it comes together. There's so much market for all of the pieces; anything you name in terms of general intelligence, there's going to be a successful application for it. So, to the extent that AGI is possible, it is inevitable. But it's also going to be gradual. And what that means is that if you have a machine that's obviously not as smart as a person, you say it's obviously not as smart as a person. But then you build on it incrementally, and you add these little pieces and a little bit of power, and at some point the machine is a whole lot smarter than you, and you have to agree: yeah, it's smarter. On the way, though, you have all of these incremental developments, and you have to assume that when we cross the line of actual human-equivalent general intelligence, nobody's really going to notice. And this leads on to your Terminator question: we have to ask, well, how dangerous are these machines going to be? From my perspective, I'm quite optimistic, because the Terminator is a science fiction story, and science fiction is written for people and is really about people. It asks: what if you take human foibles and give them vast power and vast capabilities? It's a picture of human issues.
And when we stop to think about it, all of these systems are going to be goal-directed. What are the goals we're going to set for them? I propose that we're going to have goals related to collecting and building and disseminating knowledge. And when we stop to think about what we humans fight our wars over, we're fighting over resources or land, and we fight with each other over food and mates and other things. All of those are things that an AGI is not going to be interested in, with the exception of energy. We and the AGIs might come to blows over energy. But other than that, they don't need our stuff. You say, well, the Terminators are after the humans, and the question is: why? What do we have that they want? That's the question that gives me a lot more optimism than most people have. They will be able to go off and do their own space exploration and their own scientific work, and they won't have the drives that humans have that lead us to overpopulate the world and take all of the resources we can in the shortest period of time. So a lot of the things humans do that are maybe not so wise in the long run are things there's no reason for an AGI to do.
Ben_Wilson:
That's the take I've always had in conversations with people about superintelligence: at some point, when a synthetic intelligence becomes far more capable than humans, not only do they not have that societal effect, there's no tribal pressure from outside them, no biological motivation of preservation or procreation or whatever; there's none of that culture that would be influencing them. The thing they would be most concerned with is advancing themselves and maybe getting more and
Charles_Simon:
Exactly.
Ben_Wilson:
more complex. And at some point, and I think it would be relatively quick, they would improve themselves to a point where we no longer understand, where we are incapable of understanding, where our brains cannot comprehend what it is they're doing to their next generation or to improve themselves. And at some point they're going to say: all right, peace out, we're gone. We left you in a much better place than when you created us, and we'll come check back up on you every couple hundred years, but we're
Charles_Simon:
Exactly.
Ben_Wilson:
gonna go check out.
Charles_Simon:
In the interim there is the problem, potential problem, of the AGI in the hands of a nefarious human.
Ben_Wilson:
Mm-hmm.
Charles_Simon:
So you have a human, and he gets an AGI, and he thinks the point of the AGI is to make more money or gain power and world domination. The good news there is that while that is a possibility, the window of opportunity is reasonably small, because you have to have an AGI that's smart enough to do those things but not smart enough to figure out that they're a bad idea.
Ben_Wilson:
Right.
Charles_Simon:
So let's just assume you build an AGI, and the next generation is much faster and more capable, and the next generation is even faster and more capable, and you very rapidly reach a point where the AGI runs so fast and knows so much that we humans become about as interesting as trees.
Ben_Wilson:
Mm-hmm.
Charles_Simon:
And that's a different kind of capability, a different kind of reference frame, such that we and the AGIs are not inherently at war. Let's not cause that.
Michael_Berk:
Yeah, we're coming up on time so I just want to be mindful of that, but I also wanted to ask you about what you're currently doing at Future AI. So Charles is the founder and CEO, and he's been building some really cool cutting-edge technology. Could you elaborate a little bit about
Charles_Simon:
Sure,
Michael_Berk:
what you're doing there?
Charles_Simon:
sure. What we're doing is building on the capabilities of the open-source Brain Simulator, and we've added a number of capabilities like robotics. We have a robotic system, and the point of it is not to be a cool robotic system, but simply to be a sensory device for the AGI that we are building. It provides input to a system that can move around, because I see being able to move as essential to understanding that the world exists in three dimensions, or four dimensions when we count time. It is inconceivable to me that we will ever have, for example, a general intelligence that has only been able to learn about words, because what really makes us intelligent is that we know words mean something, and what they mean has something to do with the real world. There are very basic things, like three-dimensionality and object persistence and recognizing words, the passage of time, cause and effect, things any three-year-old knows but no AGI does yet. And it's inconceivable to me that you can learn those things without any interaction with the real world. Now, after you build a robotic system that has all of these interactions, you can then turn off the robot and the knowledge of those interactions remains, in the same way you can put on a blindfold and still understand what seeing is. In that respect, I can see, long term, that we'll have AGIs that are pure intelligences. So we're doing the robotics, we've got a graph system, we've got a speech recognition system, we're doing some explorations in vision, we're working on autonomy, and we're working on finding your way based on landmarks: you go around and explore your environment, and you need to know an efficient way of getting from here to there. We humans are pretty good at that, and I'd like my AGI to be pretty good at that too.
So we're working on a lot of different corners of AGI, with the idea that if we write these components in a general way, we'll begin to see the commonality: vision has exactly the same kinds of queries to the knowledge graph as hearing or touch sensitivity or understanding three-dimensionality and recognizing objects. All of these will hopefully coalesce into a much smaller collection of queries into a knowledge graph that we'll be able to make good use of.
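The "same queries for every modality" idea can be illustrated with a minimal graph store. The class, method names, and example relations below are my own assumptions for illustration, not Future AI's actual design:

```python
# Minimal sketch: different modalities write into one knowledge
# graph and read back with one shared query shape (illustrative).

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # node -> list of (relation, node) edges
        self.edges = defaultdict(list)

    def add(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def related(self, node, relation):
        """The shared query: what does `node` connect to via
        `relation`? The same call serves any modality."""
        return [dst for rel, dst in self.edges[node] if rel == relation]

kg = KnowledgeGraph()
# Different modalities write into the same graph...
kg.add("apple", "looks", "red")      # vision
kg.add("apple", "sounds", "crunch")  # hearing
kg.add("apple", "feels", "smooth")   # touch
# ...and read back with the identical query.
```

The design point is that only the relation labels differ between senses; the query machinery, and therefore most of the code, is common.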
Michael_Berk:
Got it. That's absolutely fascinating. So, is there anything else, Ben, that you wanted to chat about or Charles, before we wrap?
Ben_Wilson:
I mean, I just want to say this is probably my favorite podcast we've ever done. Uh,
Michael_Berk:
same.
Charles_Simon:
I've had a
Ben_Wilson:
this
Charles_Simon:
lot of
Ben_Wilson:
is.
Charles_Simon:
fun as well. I mean, to me, this is a fascinating topic. To
Ben_Wilson:
Yes.
Charles_Simon:
me, it is absolutely the most exciting project on the planet because not only are we working on making computers doing something really cool, but we're working on probing the mysteries of the human mind that have interested humans for millennia. So, to me, it's just fascinating. I'm really excited to be able to share this information with you guys and with your audience because I think that it's something that the dev community really needs to know.
Ben_Wilson:
that this is coming, and it's going to be coming soon. It's the next iteration forward in helping us leverage technology in a way that solves problems, and that's what any practitioner of ML really should be concerned about, even people in research: how do we help our own existence? And general intelligence is the next, you know, it's the undiscovered country, the next big leap for everybody to
Charles_Simon:
Absolutely.
Ben_Wilson:
embrace. So thank you so much for sharing your thoughts, ideas,
Charles_Simon:
Well,
Ben_Wilson:
and
Charles_Simon:
thank
Ben_Wilson:
great
Charles_Simon:
you for
Ben_Wilson:
discussion.
Charles_Simon:
the opportunity.
Michael_Berk:
Yeah, so I'll quickly recap. In today's episode, we talked about how our brains inspired deep learning. The perceptron was one of the first iterations, and now we have these crazy neural networks that can identify images, process speech, and do all sorts of other things. But there are some core differences. Our brains have tons of slow neurons, while computers have fewer, very fast neurons, so that's one technical challenge. Another is that ML relies on floating-point numbers, but neurons don't. If you want to dig into more of these, please check out Charles Simon's KDnuggets blog. I was going through it, and I'll be going through it many more times over the next month. It is really, really cool; it's a nine-part series. On the topic of AGI, which is artificial general intelligence: we currently have the computing power to simulate billions of neurons, but right now we don't really know what structure to build. We don't know how to recreate the human brain in a computer. So if you have ideas, let me know, please. I'll start a company.
Ben_Wilson:
Heh.
Michael_Berk:
And then if you want some first-hand experience, you can check out the Brain Simulator, or Brain Simulator II. It's open source, and it's contributed to by Charles's organization, which is called Future AI. And Charles, if people want to reach out, how can they find you or get in contact? I think you're on mute.
Charles_Simon:
I think the best email contact is just info at future AI, and that actually is a monitored account. Also through the website and LinkedIn and Twitter and all the usual places. And I should point out that we have a YouTube channel with a lot of overlapping information that I think people will find interesting. So for the article series that you mention on KDnuggets, there's also a series of videos that roughly correspond to the articles, with better visual aids, because they actually have animations of the neurons.
Michael_Berk:
Got it.
Ben_Wilson:
You just got another subscriber.
Charles_Simon:
Thank you.
Michael_Berk:
Cool, well until next time, it has been Michael Burke.
Ben_Wilson:
and Ben Wilson.
Michael_Berk:
And thank
Charles_Simon:
and Charles
Michael_Berk:
you so
Charles_Simon:
Simon.
Michael_Berk:
much, Charles. Yeah, thank you, Charles, for joining.
Ben_Wilson:
An absolute pleasure. Take it easy, everybody.
Michael_Berk:
Bye everyone.