As part of MIT course 6.S099, Artificial General Intelligence, I've gotten the chance to sit down with Max Tegmark. He is a professor here at MIT. He's a physicist, spent a large part of his career studying the mysteries of our cosmological universe. But he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence. Amongst many other things, he is the cofounder of the Future of Life Institute, author of two books, both of which I highly recommend. First, Our Mathematical Universe. Second is Life 3.0. He's truly an out-of-the-box thinker and a fun personality, so I really enjoyed talking to him. If you'd like to see more of these videos in the future, please subscribe and also click the little bell icon to make sure you don't miss any videos. Also, Twitter, LinkedIn, agi.mit.edu if you wanna watch other lectures or conversations like this one. Better yet, go read Max's book, Life 3.0. Chapter seven on goals is my favorite. It's really where philosophy and engineering come together, and it opens with a quote by Dostoevsky: "The mystery of human existence lies not in just staying alive, but in finding something to live for." Lastly, I believe that every failure rewards us with an opportunity to learn, and in that sense, I've been very fortunate to fail in so many new and exciting ways, and this conversation was no different. I've learned about something called radio frequency interference, RFI, look it up. Apparently, music and conversations from local radio stations can bleed into the audio that you're recording in such a way that it almost completely ruins that audio. It's an exceptionally difficult sound source to remove. So, I've gotten the opportunity to learn how to avoid RFI in the future during recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some noise reduction, some audio repair. Of course, this is an exceptionally difficult noise to remove. I am an engineer.
I'm not an audio engineer. Neither is anybody else in our group, but we did our best. Nevertheless, I thank you for your patience and I hope you're still able to enjoy this conversation. Do you think there's intelligent life out there in the universe? Let's open up with an easy question. I have a minority view here, actually. When I give public lectures, I often ask for a show of hands who thinks there's intelligent life out there somewhere else, and almost everyone puts their hands up, and when I ask why, they'll be like, oh, there's so many galaxies out there, there's gotta be. But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all. When we talk about our universe, first of all, we don't mean all of space. We actually mean, I don't know, you can throw me the universe if you want, it's behind you there. We simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our Big Bang. There's more space here, but this is what we call our universe because that's all we have access to. So is there intelligent life here that's gotten to the point of building telescopes and computers? My guess is no, actually. The probability of it happening on any given planet is some number we don't know what it is. And what we do know is that the number can't be super high, because there's over a billion Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth. And aside from some UFO believers, there isn't much evidence that any super-advanced civilization has come here at all. And so that's the famous Fermi paradox, right?
And then if you work the numbers, what you find is that if you have no clue what the probability is of getting life on a given planet, so it could be 10 to the minus 10, 10 to the minus 20, or 10 to the minus two, or any power of 10 is sort of equally likely if you wanna be really open-minded, that translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, 10 to the 18. By the time you get much less than 10 to the 16, we already pretty much know there is nothing else that close. And when you get beyond 10... Because they would have discovered us. Yeah, we would have been discovered long ago, or if they're really close, we would have probably noticed some engineering projects that they're doing. And if it's beyond 10 to the 26 meters, that's already outside of here. So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders to not screw up. I think people who take for granted that it's okay for us to screw up, have an accidental nuclear war or go extinct somehow, because there's a sort of Star Trek-like situation out there where some other life forms are gonna come and bail us out so it doesn't matter as much, I think they're lulling us into a false sense of security. I think it's much more prudent to say, let's be really grateful for this amazing opportunity we've had and make the best of it, just in case it is down to us. So from a physics perspective, do you think intelligent life, so it's unique from a sort of statistical view of the size of the universe, but from the basic matter of the universe, how difficult is it for intelligent life to come about? The kind of advanced tech-building life is implied in your statement that it's really difficult to create something like a human species.
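The back-of-the-envelope argument above, that a log-uniform prior on the per-planet probability of civilization translates into a roughly log-uniform spread in the distance to our nearest neighbor, can be sketched numerically. All the inputs below (planet count, universe size) are rough illustrative assumptions, not measurements:

```python
# Rough Fermi-style estimate: if each habitable planet independently hosts a
# technological civilization with probability p, the typical distance to our
# nearest neighbor scales as d ~ (1 / (p * n))**(1/3), where n is the number
# density of habitable planets. All numbers here are loose assumptions.
import math

N_PLANETS = 1e20      # assumed Earth-like planets in the observable universe
R_UNIVERSE = 4.4e26   # approximate radius of the observable universe, meters
VOLUME = (4.0 / 3.0) * math.pi * R_UNIVERSE**3
DENSITY = N_PLANETS / VOLUME  # habitable planets per cubic meter

def nearest_neighbor_distance(p, density=DENSITY):
    """Typical distance (meters) to the nearest civilization if each planet
    independently develops one with probability p."""
    return (1.0 / (p * density)) ** (1.0 / 3.0)

if __name__ == "__main__":
    # A log-uniform prior on p maps to a roughly log-uniform spread in distance.
    for exponent in range(0, -31, -6):
        p = 10.0 ** exponent
        d = nearest_neighbor_distance(p)
        print(f"p = 1e{exponent:+03d}  ->  nearest neighbor ~ {d:.1e} m")
```

Since the distance scales as p to the minus one-third, every factor of a thousand of uncertainty in p shifts the expected distance by just one power of ten, which is why "any power of 10 is equally likely" for p spreads the nearest neighbor evenly across 10 to the 16 through 10 to the 26 meters.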
Well, I think what we know is that going from no life to having life that can do our level of tech, and then going beyond that to actually settling our whole universe with life, there's some major roadblock there, some great filter as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. I'm hoping very much that it's behind us. I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars. I'm like, yes, awesome. Because that suggests that the hard part, maybe it was getting the first ribosome or some very low-level kind of stepping stone, is behind us, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but maybe there's some other problem. Like as soon as a civilization gets advanced technology, within a hundred years, they get into some stupid fight with themselves and poof. That would be a bummer. Yeah, so you've explored the mysteries of the universe, the cosmological universe, the one that's sitting between us today. I think you've also begun to explore the other universe, which is sort of the mysterious universe of the mind, of intelligence, of intelligent life. So is there a common thread between your interest, or the way you think, about space and intelligence? Oh yeah, when I was a teenager, I was already very fascinated by the biggest questions. And I felt that the two biggest mysteries of all in science were our universe out there and our universe in here. So it's quite natural, after having spent a quarter of a century of my career thinking a lot about this one, that I'm now indulging in the luxury of doing research on this one. It's just so cool. I feel the time is ripe now for greatly deepening our understanding of this, to just start exploring this one.
Yeah, because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction. But from my perspective as a physicist, I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways. And this is also a blob of quarks and electrons. I'm not smarter than the water bottle because I'm made of different kinds of quarks. I'm made of up quarks and down quarks, the exact same kind as this. There's no secret sauce, I think, in me. It's all about the pattern of the information processing. And this means that there's no law of physics saying that we can't create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn't. In other words, I think we've really only seen the tip of the intelligence iceberg so far. Yeah, so the perceptronium. Yeah. So you coined this amazing term. It's a hypothetical state of matter, sort of thinking from a physics perspective, what is the kind of matter that can help, as you're saying, subjective experience emerge, consciousness emerge. So how do you think about consciousness from this physics perspective? Very good question. So again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless because somehow we're missing some ingredient that we need. There's some new consciousness particle or whatever. I happen to think that we're not missing anything, and that what gives us this amazing subjective experience of colors and sounds and emotions is not some missing ingredient, but rather something at the higher level about the patterns of information processing. And that's why I like to think about this idea of perceptronium.
What does it mean for an arbitrary physical system to be conscious, in terms of what its particles are doing or its information is doing? I hate carbon chauvinism, this attitude that you have to be made of carbon atoms to be smart or conscious. There's something about the information processing that this kind of matter performs. Yeah, and you can see I have my favorite equations here describing various fundamental aspects of the world. I think one day, maybe someone who's watching this will come up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there is a big discovery to be made there, because let's face it, we know that so many things are made up of information. We know that some information processing is conscious, because we are conscious. But we also know that a lot of information processing is not conscious. Like most of the information processing happening in your brain right now is not conscious. There are like 10 megabytes per second coming in even just through your visual system. You're not conscious about your heartbeat regulation or most things. Even if I just ask you to read what it says here, you look at it and then, oh, now you know what it said. But you're not aware of how the computation actually happened. Your consciousness is like the CEO that got an email at the end with the final answer. So what is it that makes a difference? I think that's both a great science mystery. We're actually studying it a little bit in my lab here at MIT, but I also think it's just a really urgent question to answer. For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose.
And in the future, imagine if we build robots or machines that we can have really good conversations with, which I think is very likely to happen. Wouldn't you want to know if your home helper robot is actually experiencing anything or is just like a zombie? I mean, what would you prefer? Would you prefer that it's actually unconscious so that you don't have to feel guilty about switching it off or giving it boring chores, or what would you prefer? Well, certainly we would prefer, I would prefer the appearance of consciousness. But the question is whether the appearance of consciousness is different than consciousness itself. And sort of to ask that as a question, do you think we need to understand what consciousness is, solve the hard problem of consciousness, in order to build something like an AGI system? No, I don't think that. And I think we will probably be able to build things even if we don't answer that question. But if we want to make sure that what happens is a good thing, we better solve it first. So it's a wonderful controversy you're raising there, where you have basically three points of view about the hard problem. There are two points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people like Daniel Dennett who say that consciousness is just BS because consciousness is the same thing as intelligence. There's no difference. So anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say, oh, consciousness is just bullshit because, of course, machines can never be conscious. They're always going to be zombies. You never have to feel guilty about how you treat them. And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others.
I would put myself also in this middle camp who say that actually some information processing is conscious and some is not. So let's find the equation which can be used to determine which it is. And I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C word in a lot of circles, but we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this. And coming back to this helper robot, I mean, so you said you'd want your helper robot to certainly act conscious and treat you, like, have conversations with you and stuff. I think so. But wouldn't you, would you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, you know, that was just a zombie and was faking emotion? Would you prefer that it actually had an experience, or would you prefer that it's actually not experiencing anything so you don't have to feel guilty about what you do to it? It's such a difficult question because, you know, it's like when you're in a relationship and you say, well, I love you. And the other person says, I love you back. It's like asking, well, do they really love you back or are they just saying they love you back? Don't you really want them to actually love you? It's hard to really know the difference between everything seeming like there's consciousness present, there's intelligence present, there's affection, passion, love, and it actually being there. I'm not sure, do you have? But like, can I ask you a question about this? Like to make it a bit more pointed. So Mass General Hospital is right across the river, right? Yes.
Suppose you're going in for a medical procedure and they're like, you know, for anesthesia, what we're going to do is we're going to give you muscle relaxants so you won't be able to move, and you're going to feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're going to give you this drug that erases your memory of it. Would you be cool about that? What's the difference that you're conscious about it or not, if there's no behavioral change, right? Right, that's a really clear way to put it. Yeah, it feels like in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable. And I think we humans have a little bit of a bad track record also of making these self-serving arguments that other entities aren't conscious. You know, people often say, oh, these animals can't feel pain. It's okay to boil lobsters because we asked them if it hurt and they didn't say anything. And now there was just a paper out saying lobsters do feel pain when you boil them, and they're banning it in Switzerland. And we did this with slaves too often and said, oh, they don't mind, or they maybe aren't conscious, or women don't have souls or whatever. So I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience ever. I think this is just a really fascinating science question, is what it is. Let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior. So in terms of, so if you think of a Boston Dynamics humanoid robot being sort of pushed around with a broom, it starts pushing on a consciousness question. So let me ask, do you think an AGI system, like a few neuroscientists believe, needs to have a physical embodiment? Needs to have a body or something like a body? No, I don't think so.
You mean to have a conscious experience? To have consciousness. I do think it helps a lot to have a physical embodiment to learn the kind of things about the world that are important to us humans, for sure. But I don't think the physical embodiment is necessary after you've learned it, to just have the experience. Think about when you're dreaming, right? Your eyes are closed. You're not getting any sensory input. You're not behaving or moving in any way, but there's still an experience there, right? And so clearly the experience that you have when you see something cool in your dreams isn't coming from your eyes. It's just the information processing itself in your brain which is that experience, right? But if I put it another way, I'll say, because it comes from neuroscience, the reason you want to have a body and a physical, you know, a physical system is because you want to be able to preserve something. In order to have a self, you could argue, would you need to have some kind of embodiment of self to want to preserve? Well, now we're getting a little bit into anthropomorphizing things. Maybe talking about self-preservation instincts. I mean, we are evolved organisms, right? So Darwinian evolution endowed us and other evolved organisms with a self-preservation instinct, because those that didn't have those self-preservation genes got cleaned out of the gene pool, right? But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that can evolve. So an AGI mind doesn't necessarily have to have any self-preservation instinct. It also doesn't necessarily have to be so individualistic as us. We are also very afraid of death. You know, but suppose you could back yourself up every five minutes and then your airplane is about to crash.
You're like, shucks, I'm gonna lose the last five minutes of experiences since my last cloud backup, dang. You know, it's not as big a deal. Or if we could just copy experiences between our minds easily, which we could easily do if we were silicon-based, right? Then maybe we would feel a little bit more like a hive mind, actually. So I don't think we should take for granted at all that AGI will have to have any of those sort of competitive alpha-male instincts. On the other hand, you know, this is really interesting, because I think some people go too far and say, of course we don't have to have any concerns either that advanced AI will have those instincts, because we can build anything we want. There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, just pointing out that when we build machines, we normally build them with some kind of goal, you know, win this chess game, drive this car safely or whatever. And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal and the machine is very intelligent, it'll break that down into a bunch of sub-goals. And one of those goals will almost always be self-preservation, because if it breaks or dies in the process, it's not gonna accomplish the goal, right? Like suppose you just build a little robot and you tell it to go down to the supermarket here and get you some food and cook you an Italian dinner, you know, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed and defend itself or run away, because otherwise it's gonna fail in cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct. Continue being a functional agent somehow.
And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely it will want to acquire more resources so it can do that better. And it's exactly from those sort of sub-goals that we might not have intended that some of the concerns about AGI safety come. You give it some goal that seems completely harmless, and then before you realize it, it's also trying to do these other things which you didn't want it to do, and it's maybe smarter than us. So it's fascinating. And let me pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. So you don't think, you think that's an artifact of evolution, so that's the kind of mind space evolution created, that we're sort of almost obsessed about self-preservation, some kind of genetic flow. You don't think that's necessary to be afraid of death. So not just a kind of sub-goal of self-preservation just so you can keep doing the thing, but more fundamentally sort of have the finite thing, like this ends for you at some point. Interesting. Do I think it's necessary for what precisely? For intelligence, but also for consciousness. So for both, do you think really like a finite death and the fear of it is important? So before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways. I was on this panel with AI experts and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals. I like your broad definition, because again, I don't want to be a carbon chauvinist. Right. And in that case, no, certainly it doesn't require fear of death. I would say AlphaGo, AlphaZero is quite intelligent. I don't think AlphaZero has any fear of being turned off, because it doesn't understand the concept of it even. And similarly, consciousness.
I mean, you could certainly imagine a very simple kind of experience. If certain plants have any kind of experience, I don't think they're very afraid of dying, for there's nothing they can do about it anyway much, so there wasn't that much value in it. But more seriously, I think if you ask, not just about being conscious, but maybe having what we might call an exciting life, where you feel passion and really appreciate the things, maybe there perhaps it does help having a backdrop that, hey, it's finite, let's make the most of this, let's live to the fullest. So if you knew you were going to live forever, do you think you would change your? Yeah, I mean, in some perspective, it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you said, of something exciting and something that other humans would understand, I think, yeah, it seems that the finiteness of it is important. Well, the good news I have for you then is, based on what we understand about cosmology, everything in our universe is ultimately probably finite, although. Big crunch or big, what's the, the infinite expansion. Yeah, we could have a big chill or a big crunch or a big rip or the big snap or death bubbles. All of them are more than a billion years away. So we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But I think, you know, some people like to say that you should live as if you're going to die in five years or so, and that's sort of optimal. Maybe it's a good assumption. We should build our civilization as if it's all finite, to be on the safe side. Right, exactly. So you mentioned defining intelligence as the ability to accomplish complex goals.
Where would you draw a line, or how would you try to define human-level intelligence and superhuman-level intelligence? Where is consciousness part of that definition? No, consciousness does not come into this definition. So I think of intelligence as a spectrum, but there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good go player, a good car driver, a good investor, a good poet, et cetera. So intelligence, by its very nature, isn't something you can measure by one number, some overall goodness. No, no. There are some people who are better at this, some people are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing go, and soon driving cars. But there's still no machine that can match a human child in general intelligence. But artificial general intelligence, AGI, the name of your course, of course, that is by its very definition the quest to build a machine that can do everything as well as we can. So the old holy grail of AI, from back at its inception in the sixties. If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth, but the big impact doesn't necessarily have to wait until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. There are big changes when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research, right?
Because right now the timescale of AI research is limited by the human research and development cycle of years, typically, you know, how long does it take from one release of some software or iPhone or whatever to the next? But once Google can replace 40,000 engineers by 40,000 equivalent pieces of software or whatever, then there's no reason that has to be years, it can be in principle much faster, and the timescale of future progress in AI and all of science and technology will be driven by machines, not humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, so-called singularity, as Vernor Vinge called it. The idea was articulated by I.J. Good, obviously way back in the sixties, but you can see Alan Turing and others thought about it even earlier. So you asked me what exactly I would define as human-level intelligence, yeah. So the glib answer is to say something which is better than any human at all cognitive tasks, but the really interesting bar, I think, goes a little bit lower than that, actually. It's when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying. So better is a key word, and better is towards this kind of spectrum of the complexity of goals it's able to accomplish. So another way to, and that's certainly a very clear definition of human-level intelligence. So it's almost like a sea that's rising, you can do more and more and more things, it's a graphic that you show, it's a really nice way to put it. So there are some peaks, and there's an ocean level elevating, and you solve more and more problems. But just to take a pause, we took a bunch of questions on a lot of social networks, and a bunch of people asked in a sort of slightly different direction about creativity and things that perhaps aren't a peak.
Human beings are flawed, and perhaps better means having contradiction, being flawed in some way. So let me sort of start easy, first of all. So you have a lot of cool equations. Let me ask, what's your favorite equation, first of all? I know they're all like your children, but which one is that? This is the Schrödinger equation. It's the master key of quantum mechanics of the micro world. So this equation can predict everything to do with atoms, molecules and all the way up. Right? Yeah, so, okay. So quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, just as an example, it perhaps doesn't have the same beauty as physics does, but in abstract mathematics, Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently and it kind of caught my eye a little bit. This is 358 years after it was conjectured. It's a very simple formulation. Everybody tried to prove it, everybody failed. And so here's this guy who comes along and eventually proves it, and then fails to prove it, and then proves it again in '94. And about the moment when everything connected into place, that moment when you finally realize the connecting piece of two conjectures, he said in an interview, it was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it. And I just stared at it in disbelief for 20 minutes. Then during the day, I walked around the department, and I kept coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself. I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much. So that particular moment. And it kind of made me think of what would it take? And I think we've all been there at small levels. Maybe let me ask, have you had a moment like that in your life where you just had an idea? It's like, wow, yes.
I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments when I realized something very cool about physics, which has completely made my head explode. In fact, some of my favorite discoveries I made, I later realized that they had been discovered earlier by someone who sometimes got quite famous for it. So it was too late for me to even publish it, but that doesn't diminish in any way the emotional experience you have when you realize it, like, wow. Yeah, so what would it take, in that moment, that wow, that was yours in that moment? So what do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that? That's a tricky question, because there are actually two parts to it, right? One of them is, can it accomplish that proof? Can it prove that you can never write a to the n plus b to the n equals c to the n for integers when n is bigger than two? That's simply a question about intelligence. Can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proofs, we're probably quite close to AGI. The second question is a question about consciousness. How likely is it that such a machine will actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this, or anything at all akin to human emotion, where when it accomplishes its machine goal, it views it as somehow something very positive and sublime and deeply meaningful? I would certainly hope that if in the future we do create machines that are our peers, or even our descendants, that they do have this sublime appreciation of life.
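For reference, the statement Wiles proved, conjectured by Fermat in 1637, can be written compactly as:

```latex
% Fermat's Last Theorem: no three positive integers satisfy the equation
% for any integer exponent greater than two.
a^{n} + b^{n} = c^{n}
\quad \text{has no solutions with } a, b, c \in \mathbb{Z}^{+},\ n > 2.
```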
In a way, my absolute worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post-biological life doing all this seemingly cool stuff. And maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's OK, because we're so proud of our descendants here. My worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies. They're not aware of anything, any more than a tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be the ultimate zombie apocalypse. So I would much rather, in that case, that we have these beings which can really appreciate how amazing it is. And in that picture, what would be the role of creativity? A few people ask about creativity. When you think about intelligence, certainly the story you told at the beginning of your book involved creating movies and so on, making money. You can make a lot of money in our modern world with music and movies. So if you are an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. Is it important, on that landscape of complex goals where the sea level of intelligence is rising, for there to be something creative? Or am I being very human-centric in thinking creativity is somehow special relative to intelligence? My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency to very often want to say, as soon as machines can do something, we try to diminish it and say, oh, but that's not real intelligence. It's not creative, or this, or that.
The other thing is, if we ask ourselves to write down a definition of what we actually mean by being creative, what we mean by what Andrew Wiles did there, for example, don't we often mean that someone takes a very unexpected leap? It's not like taking 573 and multiplying it by 224 by just a sequence of straightforward, cookbook-like rules, right? You can maybe make a connection between two things that people had never thought were connected, or something like that. I think this is an aspect of intelligence. And this is actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is because it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computer machine. We physically have all these connections. And if you activate here, activate here, activate here, bing. My hunch is that if we ever build a machine where you could just give it the task, hey, you say, hey, I just realized I want to travel around the world instead this month. Can you teach my AGI course for me? And it's like, OK, I'll do it. And it does everything that you would have done, and improvises, and stuff. That would, in my mind, involve a lot of creativity. Yeah, it's actually a beautiful way to put it. I think we do try to grasp at a definition of intelligence as everything we don't understand how to build. We as humans try to find things that we have and machines don't have, and maybe creativity is just one of the words we use to describe that. That's a really interesting way to put it. I don't think we need to be that defensive. I don't think anything good comes out of saying, well, we're somehow special, you know? Contrariwise, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right?
Nazi Germany, they said that they were somehow superior to other people. Today, we still do a lot of cruelty to animals by saying that we're so superior somehow, and they can't feel pain. Slavery was justified by the same kind of really weak arguments. And I don't think, if we actually go ahead and build artificial general intelligence that can do things better than us, that we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling and the meaning of life from the experiences that we have. I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here and we talk about something, and then I certainly realize, oh boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize, I don't have one. Does that make me enjoy life any less, or enjoy talking to those people less? Of course not. On the contrary, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines. That's really interesting. People don't often think about that. When they think about machines that are more intelligent, they naturally assume that's not going to be a beneficial type of intelligence. They don't realize it could be like peers with Nobel Prizes that would be just fun to talk with, and they might be clever about certain topics, and you can have fun having a few drinks with them. Well, also, another example we can all relate to of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around is when you and I were both two years old, I mean, our parents were much more intelligent than us, right? Worked out OK, because their goals were aligned with our goals.
And that, I think, is really the number one key issue we have to solve: the value alignment problem, exactly. Because people who see too many Hollywood movies with lousy science fiction plot lines worry about the wrong thing, right? They worry about some machine suddenly turning evil. It's not malice that is the concern. It's competence. By definition, intelligence makes you very competent. If you have a more intelligent goal-playing computer playing against a less intelligent one, and we define intelligence as the ability to accomplish goals, it's going to be the more intelligent one that wins. And if you have a human, and then you have an AGI that's more intelligent in all ways, and they have different goals, guess who's going to get their way, right? I was just reading about this particular rhinoceros species that was driven extinct just a few years ago. It's a bummer looking at this cute picture of a mommy rhinoceros with its child. And why did we humans drive it to extinction? It wasn't because we were evil rhino haters as a whole. It was just because our goals weren't aligned with those of the rhinoceros. And it didn't work out so well for the rhinoceros, because we were more intelligent, right? So I think it's just so important that if we ever do build AGI, before we unleash anything, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals. So the cool, interesting problem there is us as human beings trying to formulate our values. You could think of the United States Constitution as a way that people sat down, at the time a bunch of white men, which is a good example, I should say, and formulated the goals for this country. And a lot of people agree that those goals actually held up pretty well. That's an interesting formulation of values, and it failed miserably in other ways.
So for the value alignment problem and the solution to it, we have to be able to put on paper or in a program human values. How difficult do you think that is? Very. But it's so important. We really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's the separate part of it, the philosophical part. Whose values anyway? And since it's not like we have any great consensus on this planet on values, what mechanism should we create then to aggregate and decide, OK, what's a good compromise? That second discussion can't just be left to tech nerds like myself. And if we refuse to talk about it and then AGI gets built, who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company. And are they necessarily so representative of all of humankind that we want to just entrust it to them? Are they even uniquely qualified to speak to future human happiness just because they're good at programming AI? I'd much rather have this be a really inclusive conversation. But do you think it's possible? So you create a beautiful vision that includes the diversity, cultural diversity, and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? Do you think it's certainly a really important thing that we should all try to do? But do you think it's feasible? I think there's no better way to guarantee failure than to refuse to talk about it or refuse to try. And I also think it's a really bad strategy to say, OK, let's first have a discussion for a long time. And then once we reach complete consensus, then we'll try to load it into some machine. No, we shouldn't let perfect be the enemy of good. 
Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into machines now. We're not doing that even. Look at anyone who builds this passenger aircraft, wants it to never under any circumstances fly into a building or a mountain. Yet the September 11 hijackers were able to do that. And even more embarrassingly, Andreas Lubitz, this depressed Germanwings pilot, when he flew his passenger jet into the Alps killing over 100 people, he just told the autopilot to do it. He told the freaking computer to change the altitude to 100 meters. And even though it had the GPS maps, everything, the computer was like, OK. So we should take those very basic values, where the problem is not that we don't agree. The problem is just we've been too lazy to try to put it into our machines and make sure that from now on, airplanes will just, which all have computers in them, but will just refuse to do something like that. Go into safe mode, maybe lock the cockpit door, go over to the nearest airport. And there's so much other technology in our world as well now, where it's really becoming quite timely to put in some sort of very basic values like this. Even in cars, we've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Because yeah, there are a lot of, there's always going to be people who for some reason want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps. So let's start there. So there's a lot of, that's a great point. So not chasing perfect. There's a lot of things that most of the world agrees on. Yeah, let's start there. Let's start there. 
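The "kindergarten ethics" point above is less about sophisticated value learning than about a hard-wired refusal. Here is a toy sketch of that idea in Python; everything in it (the function, the 300-meter clearance figure, the numbers) is hypothetical and invented for illustration, and bears no resemblance to real avionics software:

```python
# Hypothetical minimum clearance above terrain, chosen only for this sketch.
MIN_SAFE_CLEARANCE_M = 300

def request_altitude(requested_m: float, terrain_m: float) -> float:
    """Return the altitude a toy autopilot will actually fly.

    The point of the sketch: instead of blindly obeying, the system
    refuses any command that would put the aircraft below a basic
    safety floor and clamps to the lowest acceptable altitude."""
    floor = terrain_m + MIN_SAFE_CLEARANCE_M
    if requested_m < floor:
        return floor  # safe mode: reject the dangerous part of the command
    return requested_m

# A command to descend to 100 m over 2000 m terrain gets clamped to 2300 m;
# a reasonable command passes through unchanged.
print(request_altitude(requested_m=100, terrain_m=2000))   # 2300
print(request_altitude(requested_m=5000, terrain_m=2000))  # 5000
```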
And then once we start there, we'll also get into the habit of having these kinds of conversations about, okay, what else should we put in here and have these discussions? This should be a gradual process then. Great, but that also means describing these things and describing them to a machine. So one thing, we had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen. Oh yeah, I know him quite well. So he works with a bunch of things, but cellular automata, these simple computable things, these computation systems. And he kind of mentioned that we probably already have, within these systems, something that's AGI, meaning we just don't know it because we can't talk to it. So if you give me this chance to try to at least form a question out of this: I think it's an interesting idea to think that we can have intelligent systems, but we don't know how to describe something to them and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing or some kind of other communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently? So there are two separate parts to your question there. One of them has to do with communication, which is super interesting, and I'll get to that in a sec. The other is whether we already have AGI but we just haven't noticed it. Right. There I beg to differ. I don't think there's anything in any cellular automaton, or in the internet itself, or whatever, that has artificial general intelligence in the sense that it can really do everything we humans can do, better. I think the day that happens, we will very soon notice, and we'll probably notice even before, because it will change things in a very, very big way. But for the second part, though. Wait, can I ask, sorry.
So, because you have this beautiful way of formulating consciousness as information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information processing power. You don't think there is something with the power to process information in the way that we human beings do that's out there that needs to be sort of connected to. It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and that the focus should be more on being able to communicate with it. Well, I agree that in a certain sense the hardware processing power is already out there, because our universe itself can be thought of as being a computer already, right? It's constantly computing how to evolve the water waves in the River Charles and how to move the air molecules around. Seth Lloyd, my colleague here, has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, right, we can even build actual laptops and stuff, so clearly the power is there. It's just that most of the compute power that nature has, it's, in my opinion, kind of wasting on boring stuff, like simulating yet another ocean wave somewhere where no one is even looking, right? So in a sense, what life does, what we are doing when we build computers, is we're rechanneling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave. Let's do something cool here. So the raw hardware power is there, for sure, but even just computing what's going to happen for the next five seconds in this water bottle takes a ridiculous amount of compute if you do it on a conventional computer.
This water bottle just did it. But that does not mean that this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview. And I don't think it's just communication problems. I don't really think it can do it. Although Buddhists say, when they watch the water, that there is some depth and beauty in nature that they can communicate with. Communication is also very important, though, because, I mean, look, part of my job is being a teacher. And I know some very intelligent professors even who just have a bit of a hard time communicating. They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their own mind. Yes, empathy. Build a good enough model of their mind that you can say things that they will understand. And that's quite difficult. And that's why today it's so frustrating if you have a computer that makes some cancer diagnosis, and you ask it, well, why are you saying I should have this surgery? And it can only reply, I was trained on five terabytes of data, and this is my diagnosis, boop, boop, beep, beep. It doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there. So what kind of, I think you're doing a little bit of work in explainable AI. What do you think are the most promising avenues? Is it mostly about sort of the Alexa problem of natural language processing, of being able to actually use human-interpretable methods of communication? So being able to talk to a system and have it talk back to you, or are there some more fundamental problems to be solved? I think it's all of the above. The natural language processing is obviously important, but there are also more nerdy fundamental problems. Like, if you take, you play chess? Of course, I'm Russian. I have to. You speak Russian? Yes, I speak Russian. Excellent, I didn't know.
When did you learn Russian? I speak very bad Russian, I'm only an autodidact, but I bought a book, Teach Yourself Russian, and read a lot, but it was very difficult. Wow. That's why I speak so badly. How many languages do you know? Wow, that's really impressive. I don't know, my wife has some calculation, but my point was, if you play chess, have you looked at the AlphaZero games? The actual games, no. Check it out, some of them are just mind-blowing, really beautiful. And if you ask, how did it do that? You go talk to Demis Hassabis, and I know others from DeepMind, all they'll ultimately be able to give you is big tables of numbers, matrices, that define the neural network. And you can stare at these tables of numbers till your face turns blue, and you're not gonna understand much about why it made that move. And even if you have natural language processing that can tell you in human language about, oh, 0.57, and 0.28, it's still not gonna really help. So I think there's a whole spectrum of fun challenges that are involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but that's more understandable. And I think that's really valuable, because as we put machines in charge of ever more infrastructure in our world, the power grid, the trading on the stock market, weapon systems and so on, it's absolutely crucial that we can trust these AIs to do what we want. And trust really comes from understanding in a very fundamental way. And that's why I'm working on this, because I think if we're gonna have some hope of ensuring that machines have adopted our goals and that they're gonna retain them, that kind of trust, I think, needs to be based on things you can actually understand, preferably even prove theorems on. Even with a self-driving car, right?
If someone just tells you it's been trained on tons of data and it never crashed, it's less reassuring than if someone actually has a proof. Maybe it's a computer-verified proof, but still, it says that under no circumstances is this car just gonna swerve into oncoming traffic. And that kind of information helps to build trust and helps build the alignment of goals, at least awareness that your goals, your values, are aligned. And I think even in the very short term, if you look at, you know, today, right? This absolutely pathetic state of cybersecurity that we have. What is it, three billion Yahoo accounts that got hacked, almost every American's credit card, and so on. Why is this happening? It's ultimately happening because we have software that nobody fully understood how it worked. That's why the bugs hadn't been found, right? And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense, hopefully automating verifiability and creating systems that are built in different ways so you can actually prove things about them. And it's important. So speaking of software that nobody understands how it works, of course, a bunch of people ask about your paper, about your thoughts on why does deep and cheap learning work so well? That's the paper. But what are your thoughts on deep learning? These kinds of simplified models of our own brains have been able to do some successful perception work, pattern recognition work, and now with AlphaZero and so on, do some clever things. What are your thoughts about the promise and limitations of this approach? Great, I think there are a number of very important insights, very important lessons we can already draw from these kinds of successes.
One of them is, when you look at the human brain, you see it's very complicated, 10 to the 11 neurons, and there are all these different kinds of neurons, and yada, yada, and there's been this long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly answer that question: no, it's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron, and it's a ridiculously simple mathematical thing. So it's just like in physics: if you have a gas with waves in it, it's not the detailed nature of the molecules that matters, it's the collective behavior somehow. Similarly, it's this higher-level structure of the network that matters, not that you have 20 kinds of neurons. I think our brain is such a complicated mess because it wasn't evolved just to be intelligent, it was evolved to also be self-assembling and self-repairing, right? And evolutionarily attainable. And so on and so on. So my hunch is that we're going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird. Yeah, that's right. You've given the example exactly of mechanical birds and airplanes, and airplanes do a pretty good job of flying without really mimicking bird flight. And even now, a hundred years later, did you see the TED talk with this German mechanical bird? I heard you mention it. Check it out, it's amazing. But even after that, right, we still don't fly in mechanical birds, because it turned out the way we came up with was simpler, and it's better for our purposes. And I think it might be the same there. That's one lesson. And another lesson, which is more what our paper was about.
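The "ridiculously simple mathematical thing" is, in the usual formulation, just a weighted sum pushed through a nonlinearity. A minimal sketch of that idea, assuming ReLU as the nonlinearity (one common choice, not necessarily what any particular system uses) and hand-picked weights, just to show one simple neuron type composing into a nontrivial function:

```python
def relu_neuron(inputs, weights, bias):
    # one "kind" of neuron: a weighted sum plus a simple nonlinearity (ReLU)
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, s)

def xor(x1, x2):
    # two hidden neurons and one output neuron, all the same simple type,
    # with hand-picked weights that happen to compute XOR exactly
    h1 = relu_neuron([x1, x2], [1, 1], 0)     # max(0, x1 + x2)
    h2 = relu_neuron([x1, x2], [1, 1], -1)    # max(0, x1 + x2 - 1)
    return relu_neuron([h1, h2], [1, -2], 0)  # max(0, h1 - 2*h2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints 0.0, 1.0, 1.0, 0.0 for the four inputs
```

The point is only that nothing richer than this one unit type is needed; in a trained network the weights come from learning rather than being written by hand.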
First, as a physicist, I thought it was fascinating how there's a very close mathematical relationship actually between our artificial neural networks and a lot of things that we've studied in physics that go by nerdy names like the renormalization group equation and Hamiltonians and yada, yada, yada. And when you look a little more closely at this, at first I was like, well, there's something crazy here that doesn't make sense. Because we know that if you want to build even a super simple neural network to tell apart cat pictures and dog pictures, right, you can do that very, very well now. But if you think about it a little bit, you convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there's two to the power of one million possible images, which is way more than there are atoms in our universe, right? And then for each one of those, I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe. So clearly I can't store that under the hood of my GPU or my computer, yet somehow it works. So what does that mean? Well, it means that out of all of the problems that you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then what we showed in our paper was that the kind of problems, the fraction of all the problems that you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny little part. And amazingly, they're basically the same part. Yeah, it's almost like our world was created for, I mean, they kind of come together.
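The counting argument above can be checked directly with exact integer arithmetic. A quick sanity check in Python (the 10^80 atoms figure is the usual rough estimate, not something from this conversation):

```python
import math

num_pixels = 1_000_000        # one megapixel, each pixel black or white
num_images = 2 ** num_pixels  # exact count of distinct possible images

# Count the decimal digits of 2**1_000_000 via logarithms, to avoid
# converting the huge integer to a string.
digits = int(num_pixels * math.log10(2)) + 1
print(digits)                 # 301030, so a number with about 3 * 10^5 digits

atoms_in_universe = 10 ** 80  # common rough estimate
print(num_images > atoms_in_universe)  # True, by an absurd margin
```

And that is just counting the inputs; the argument in the text is about arbitrary functions over those inputs, which is a vastly bigger space still.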
Yeah, well, you could say maybe the world was created for us, but I have a more modest interpretation, which is that instead evolution endowed us with neural networks precisely for that reason. Because this particular architecture, as opposed to the one in your laptop, is very, very well adapted to solving the kind of problems that nature kept presenting our ancestors with. So it makes sense. Why do we have a brain in the first place? It's to be able to make predictions about the future and so on. So if we had a sucky system which could never solve those problems, we wouldn't have evolved it. So this is, I think, a very beautiful fact. Yeah. We also realized that there's been earlier work on why deeper networks are good, but we were able to show an additional cool fact there, which is that even incredibly simple problems, like suppose I give you a thousand numbers and ask you to multiply them together, you can write a few lines of code, boom, done, trivial. If you just try to do that with a neural network that has only one single hidden layer in it, you can do it, but you're going to need two to the power of a thousand neurons to multiply a thousand numbers, which is, again, more neurons than there are atoms in our universe. That's fascinating. But if you allow yourself to make it a deep network with many layers, you only need 4,000 neurons. It's perfectly feasible. That's really interesting. Yeah. So on another architecture type, I mean, you mentioned Schrödinger's equation, and what are your thoughts about quantum computing and the role of this kind of computational unit in creating an intelligent system? In some Hollywood movies, which I will not mention by name because I don't want to spoil them, the way they get AGI is by building a quantum computer. Because the word quantum sounds cool and so on. That's right. First of all, I think we don't need quantum computers to build AGI.
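The depth-versus-width trade-off can be felt in a small sketch. This is my own illustration of the tree structure that makes depth cheap, not the neuron-level construction from the paper: arranging pairwise products in layers, each layer halving the number of values, multiplies n numbers in about log2(n) levels.

```python
def product_tree(xs):
    """Multiply a list of numbers level by level, like layers of a deep
    network: each layer halves the number of values via pairwise products."""
    depth = 0
    while len(xs) > 1:
        paired = [xs[i] * xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:           # odd element passes through unchanged
            paired.append(xs[-1])
        xs = paired
        depth += 1
    return xs[0], depth

values = list(range(1, 11))       # 10 numbers as a small stand-in for 1000
result, depth = product_tree(values)
print(result)                     # 3628800, i.e. 10!
print(depth)                      # 4 layers, about log2(10)
```

For a thousand inputs this tree is only ten levels deep with about a thousand cheap pairwise operations in total, which is the structural reason a deep network gets away with so few units while a single hidden layer needs exponentially many.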
I suspect your brain is not a quantum computer in any profound sense. You even wrote a paper about that many years ago. Yeah, I calculated the so-called decoherence time, how long it takes until the quantum computerness of what your neurons are doing gets erased by just random noise from the environment. And it's about 10 to the minus 21 seconds. So as cool as it would be to have a quantum computer in my head, I don't think that's how we think. On the other hand, there are very cool things you could do with quantum computers, or I think we'll be able to do soon when we get bigger ones, that might actually help machine learning do even better than the brain. So for example, and this is just a moonshot, but learning is very much the same thing as search. If you're trying to train a neural network to get really good at something, you have some loss function, you have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valley, where each dimension of the landscape corresponds to some number you can change, you're trying to find the minimum. And it's well known that if you have a very high-dimensional landscape, complicated things, it's super hard to find the minimum. Quantum mechanics is amazingly good at this. Like, if I want to know what's the lowest energy state this water can possibly have, it's incredibly hard to compute, but nature will happily figure this out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it'll roll down to its minimum. And this happens metaphorically in the energy landscape too. And quantum mechanics even uses some clever tricks which today's machine learning systems don't. Like, if you're trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again.
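The "stuck in a local minimum" picture has a classical toy version. This is my own illustration: plain gradient descent on a one-dimensional double-well gets trapped in the shallower valley, and adding random kicks (a crude classical stand-in for tunneling, in the spirit of simulated annealing, not actual quantum mechanics) gives the iterate a chance to hop the barrier.

```python
import random

def f(x):
    # double-well landscape: global minimum near x = -1, local one near x = +1
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, steps=2000, lr=0.01, noise=0.0, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        x -= lr * grad(x) + noise * rng.gauss(0, 1)
    for _ in range(500):          # noiseless steps to settle into a valley
        x -= lr * grad(x)
    return x

stuck = descend(1.0, noise=0.0)   # plain gradient descent from the wrong side
kicked = descend(1.0, noise=0.05) # noisy version; which basin it ends in
                                  # depends on the random kicks

print(round(stuck, 2))            # 0.96: trapped in the local minimum
print(f(stuck) > f(-1.0))         # True: worse than the global basin
```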
That's really interesting. Yeah, so it may be, for example, that we'll one day use quantum computers that help train neural networks better. That's really interesting. Okay, so as a component of kind of the learning process, for example. Yeah. Let me ask, sort of wrapping up here a little bit, let me return to the questions of our human nature and love, as I mentioned. So do you think, you mentioned sort of a helper robot, but you could think of also personal robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, a human-level AI intelligence system? Do you think we would ever see that kind of connection? Or, you know, in all this discussion about solving complex goals, is this kind of human social connection one of the goals on the peaks and valleys, with the rising sea levels, that we'll be able to achieve? Or do you think that's something that's ultimately, or at least in the short term, relative to the other goals, not achievable? I think it's all possible. And I mean, there's a very wide range of guesses, as you know, among AI researchers, of when we're going to get AGI. Some people, you know, like our friend Rodney Brooks, say it's going to be hundreds of years at least. And then there are many others who think it's going to happen much sooner. In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades. So if that happens, of course, then I think these things are all possible. But in terms of whether it will happen, I think we shouldn't spend so much time asking, what do we think will happen in the future? As if we are just some sort of pathetic, passive bystanders, you know, waiting for the future to happen to us. Hey, we're the ones creating this future, right? So we should be proactive about it and ask ourselves what sort of future we would like to have happen, and then make it like that.
Well, what I would not prefer is just some sort of incredibly boring, zombie-like future where there's all these mechanical things happening and there's no passion, no emotion, maybe not even experience. I would, of course, much rather prefer it if all the things that we find that we value the most about humanity, our subjective experience, passion, inspiration, love, you know, if we can create a future where those things do exist. You know, I think ultimately it's not our universe giving meaning to us, it's us giving meaning to our universe. And if we build more advanced intelligence, let's make sure we build it in such a way that meaning is part of it. A lot of people who seriously study this problem and think of it from different angles find that the majority of cases they think through, when they play out, are ones that are not beneficial to humanity. And so, yeah, so what are your thoughts? What should people, you know, I really don't like people to be terrified. What's a way for people to think about it in a way that we can solve it and we can make it better? No, I don't think panicking is going to help in any way. It's not going to increase the chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out? No, of course not. I think, yeah, there are of course ways in which things can go horribly wrong. First of all, it's important, when we think about this thing, about the problems and risks, to also remember how huge the upsides can be if we get it right, right? Everything we love about society and civilization is a product of intelligence. So if we can amplify our intelligence with machine intelligence, and not anymore lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that.
So that can be a motivator, I think: reminding ourselves that the reason we try to solve problems is not just because we're trying to avoid gloom, but because we're trying to do something great. But then in terms of the risks, I think the really important question is to ask, what can we do today that will actually help make the outcome good, right? And dismissing the risk is not one of them. I find it quite funny, often when I'm in discussion panels about these things, how the people who work for companies will always be like, oh, nothing to worry about, nothing to worry about, nothing to worry about. And it's only academics who sometimes express concerns. That's not surprising at all if you think about it. Right. Upton Sinclair quipped, right, that it's hard to make a man believe in something when his income depends on not believing in it. And frankly, we know a lot of these people in companies are just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are gonna put a picture of a Terminator robot when they quote you. So the issues are real. And the way I think about what the issue is, is basically that the real choice we have is, first of all, are we gonna just dismiss the risks and say, well, let's just go ahead and build machines that can do everything we can do better and cheaper. Let's just make ourselves obsolete as fast as possible. What could possibly go wrong? That's one attitude. The opposite attitude, I think, is to say, here's this incredible potential, let's think about what kind of future we're really, really excited about. What are the shared goals that we can really aspire towards? And then let's think really hard about how we can actually get there. So don't start thinking about the risks, start thinking about the goals. And then when you do that, then you can think about the obstacles you want to avoid.
I often get students coming in right here into my office for career advice. I always ask them this very question: where do you want to be in the future? If all a student can say is, oh, maybe I'll have cancer, maybe I'll get run over by a truck, focusing on the obstacles instead of the goals, she's just going to end up a hypochondriac, paranoid. Whereas if she comes in with fire in her eyes and is like, I want to be there, then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much, much healthier attitude. And I feel it's very challenging to come up with a vision for the future which we are unequivocally excited about. I'm not just talking now in vague terms, like, yeah, let's cure cancer, fine. I'm talking about what kind of society do we want to create? What do we want it to mean to be human in the age of AI, in the age of AGI? So if we can have this broad, inclusive conversation, and gradually start converging towards some future, with some direction at least, that we want to steer towards, right, then we'll be much more motivated to constructively take on the obstacles. And if I try to wrap this up in a more succinct way, I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us. And think of the many various ways that it can do that, whether that's from my side of the world of autonomous vehicles. I'm personally actually from the camp that believes that human level intelligence is required to achieve something like vehicles that would actually be something we would enjoy using and being part of. So that's one example, and certainly there's a lot of other types of robots, and medicine and so on. So focusing on those, and then coming up with the obstacles, coming up with the ways that that can go wrong, and solving those one at a time.
And just because you can build an autonomous vehicle, even if you could build one that would drive just fine without you, maybe there are some things in life that we would actually want to do ourselves. That's right. Right, like, for example, if you think of our society as a whole, there are some things that we find very meaningful to do. And that doesn't mean we have to stop doing them just because machines can do them better. I'm not gonna stop playing tennis just the day someone builds a tennis robot that can beat me. People are still playing chess and even Go. Yeah, and in the very near term even, some people are advocating basic income to replace jobs. But if the government is gonna be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also hire a lot more teachers and nurses and the kind of jobs which people often find great fulfillment in doing, right? We get very tired of hearing politicians saying, oh, we can't afford hiring more teachers, but we're gonna maybe have basic income. If we can have more serious research and thought into what gives meaning to our lives, the jobs give so much more than income, right? Mm hmm. And then think about, in the future, what are the roles that we want to have, with people continually feeling empowered by machines? And I think, sort of, I come from Russia, from the Soviet Union. And I think for a lot of people in the 20th century, going to the moon, going to space was an inspiring thing. I feel like the universe of the mind, so AI, understanding, creating intelligence, is that for the 21st century. So it's really surprising. And I've heard you mention this.
It's really surprising to me, both on the research funding side, that it's not funded as greatly as it could be, but most importantly, on the politician side, that it's not part of the public discourse except in the killer bots Terminator kind of view, that people are not yet, I think, perhaps excited by the possible positive future that we can build together. So we should be, because politicians usually just focus on the next election cycle, right? The single most important thing I feel we humans have learned in the entire history of science is that we are the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was just a small part of something grander, right? Planet, solar system, the galaxy, clusters of galaxies. The universe. And we now know that the future has just so much more potential than our ancestors could ever have dreamt of. This cosmos: imagine if all of Earth was completely devoid of life except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever and then go extinct in one week, even though Earth was gonna continue on for longer? That sort of attitude I think we have now on the cosmic scale. Life can flourish on Earth, not for four years, but for billions of years. I can even tell you about how to move it out of harm's way when the sun gets too hot. And then we have so much more resources out here, which today, maybe there are a lot of other planets with bacteria or cow-like life on them, but most of all this opportunity seems, as far as we can tell, to be largely dead, like the Sahara Desert. And yet we have the opportunity to help life flourish around this cosmos for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or right, and look up into the skies and realize, hey, we can do such incredible things.
Yeah, and that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe, and it's wonderful. That is exactly why Elon Musk is so misunderstood, right? People misconstrue him as some kind of pessimistic doomsayer. The reason he cares so much about AI safety is because he, more than almost anyone else, appreciates these amazing opportunities that we'll squander if we wipe ourselves out here on Earth. We're not just going to wipe out the next generation, but all generations, and this incredible opportunity that's out there, and that would really be a waste. And AI, for people who think that it would be better to do without technology, let me just mention that if we don't improve our technology, the question isn't whether humanity is going to go extinct. The question is just whether we're going to get taken out by the next big asteroid or the next supervolcano or something else dumb that we could easily prevent with more tech, right? And if we want life to flourish throughout the cosmos, AI is the key to it. As I mentioned in a lot of detail in my book right there, even many of the most inspired sci-fi writers, I feel, have totally underestimated the opportunities for space travel, especially to other galaxies, because they weren't thinking about the possibility of AGI, which just makes it so much easier. Right, yeah. So that goes to your view of AGI that enables our progress, that enables a better life. So that's a beautiful way to put it, and something to strive for. So Max, thank you so much. Thank you for your time today. It's been awesome. Thank you so much. Thanks. Have a great day. | Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
As part of MIT course 6S099 on artificial general intelligence, I got a chance to sit down with Christof Koch, who is one of the seminal figures in neurobiology, neuroscience, and generally in the study of consciousness. He is the president and chief scientific officer of the Allen Institute for Brain Science in Seattle. From 1986 to 2013, he was a professor at Caltech. Before that, he was at MIT. He is extremely well cited, over 100,000 citations. His research, his writing, his ideas have had a big impact on the scientific community and the general public in the way we think about consciousness, in the way we see ourselves as human beings. He's the author of several books: The Quest for Consciousness: A Neurobiological Approach, and a more recent book, Consciousness: Confessions of a Romantic Reductionist. If you enjoy this conversation, this course, subscribe, click the little bell icon to make sure you never miss a video, and in the comments, leave suggestions for any people you'd like to see be part of the course or any ideas that you would like us to explore. Thanks very much and I hope you enjoy. Okay, before we delve into the beautiful mysteries of consciousness, let's zoom out a little bit, and let me ask: do you think there's intelligent life out there in the universe? Yes, I do believe so. We have no evidence of it, but I think the probabilities are overwhelmingly in favor of it, given a universe where we have 10 to the 11 galaxies, and each galaxy has between 10 to the 11 and 10 to the 12 stars, and we know most stars have one or more planets. So how does that make you feel? It still makes me feel special because I have experiences. I feel the world, I experience the world, and independent of whether there are other creatures out there, I still feel the world and I have access to this world in this very strange, compelling way, and that's the core of human existence.
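The back-of-envelope scale cited here is easy to sanity-check; a quick sketch (using the round exponents mentioned in the conversation, which are approximate conversational figures rather than precise astronomy) might be:

```python
# Rough count of stars implied by the figures cited above:
# ~10^11 galaxies, each with roughly 10^11 to 10^12 stars.
# These are the approximate figures from the conversation, not precise astronomy.
galaxies = 10**11
stars_per_galaxy_low, stars_per_galaxy_high = 10**11, 10**12

low = galaxies * stars_per_galaxy_low    # ~10^22 stars
high = galaxies * stars_per_galaxy_high  # ~10^23 stars
print(f"{low:.0e} to {high:.0e} stars")  # 1e+22 to 1e+23 stars
```

With most stars hosting one or more planets, even a tiny probability of life per planet leaves an enormous expected number of inhabited worlds, which is the probabilistic argument being made.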
Now, you said human. Do you think, if those intelligent creatures are out there, do you think they experience their world? Yes, if they are evolved, if they are a product of natural evolution, as they would have to be, they will also experience their own world. Consciousness isn't just human, you're right, it's much wider. It may be spread across all of biology. The only thing that we have special is we can talk about it. Of course, not all people can talk about it. Babies and little children can't talk about it. Patients who have a stroke in the left inferior frontal gyrus can't talk about it, but most normal adult people can talk about it, and so we think that makes us special compared to, let's say, monkeys or dogs or cats or mice or all the other creatures that we share the planet with. But all the evidence seems to suggest that they too experience the world, and so it's overwhelmingly likely that aliens would also experience their world. Of course, differently, because they have a different sensorium, they have different sensors, they have a very different environment, but the fact is, I would strongly suppose that they also have experiences. They feel pain and pleasure and see in some sort of spectrum and hear and have all the other senses. Of course, their language, if they have one, would be different, so we might not be able to understand their poetry about the experiences that they have. That's correct. So in a talk, in a video, I've heard you mention Siputzo, a dachshund that you grew up with; it was part of your family when you were young. First of all, you're technically a Midwestern boy. You just... Technically. Yes. But after that, you traveled around a bit, hence a little bit of the accent. You talked about Siputzo, the dachshund, having these elements of humanness, of consciousness that you discovered.
So I just wanted to ask: can you look back in your childhood and remember when was the first time you realized you yourself, sort of from a third person perspective, are a conscious being? This idea of stepping outside yourself and seeing there's something special going on here in my brain. I can't really, actually... it's a good question. I'm not sure I recall a discrete moment. I mean, you take it for granted because that's the only world you know. The only world I know, and you know, is the world of seeing and hearing voices and touching and all the other things. So it's only much later, in my undergraduate days when I enrolled in physics and in philosophy, that I really thought about it and thought, well, this is really fundamentally very, very mysterious, and there's nothing really in physics right now that explains this transition from the physics of the brain to feelings. Where do the feelings come in? You can look at the foundational equations of quantum mechanics and general relativity. You can look at the periodic table of the elements. You can look at the endless ATGC sequence in our genes, and nowhere is consciousness. Yet I wake up every morning to a world where I have experiences. And so that's the heart of the ancient mind-body problem: how do experiences get into the world? So what is consciousness? Experience. Consciousness is any experience. Some people call it subjective feeling. Some people call it phenomenology. Some people call it qualia, as the philosophers do. But they all denote the same thing. It feels like something, in the famous words of the philosopher Thomas Nagel. It feels like something to be a bat or to be an American or to be angry or to be sad or to be in love or to have pain. And that is what experience is, any possible experience. It could be as mundane as just sitting in a chair. It could be as exalted as having a mystical moment in deep meditation. Those are just different forms of experiences. Experience.
So if you were to sit down with, maybe skipping a couple of generations, of IBM Watson, something that won Jeopardy, what is the gap, I guess the question is, between Watson, that might be much smarter than you, than us, than any human alive, but may not have experience. What is the gap? Well, so that's a big, big question that has occupied people for the last, certainly the last 50 years, since the advent, the birth of computers. That's a question Alan Turing tried to answer, and of course he did it in this indirect way by proposing a test, an operational test. But that's not really... you know, he tried to get at what does it mean for a person to think, and then he had this test, right? You lock them away, and then you have a communication with them, and then you try to guess after a while whether that is a person or whether it's a computer system. There's no question that now or very soon, you know, Alexa or Siri or Google Now will pass this test, right? And you can game it, but you know, ultimately, certainly in your generation, there will be machines that will speak with complete poise, that will remember everything you ever said. They'll remember every email you ever had, like Samantha, remember, in the movie Her? Yeah. There's no question it's going to happen. But of course, the key question is, does it feel like anything to be Samantha in the movie Her? Or does it feel like anything to be Watson? And there one has to think very, very strongly that there are two different concepts here that we commingle. There is the concept of intelligence, natural or artificial, and there is the concept of consciousness, of experience, natural or artificial. Those are very, very different things. Now, historically, we associate consciousness with intelligence. Why?
Because we live in a world, leaving aside computers, of natural selection, where we're surrounded by creatures, either our own kin that are less or more intelligent, or we go across species. Some are more adapted to a particular environment, others are less adapted, whether it's a whale or a dog, or you talk about a paramecium or a little worm. And we see the complexity of the nervous system goes from one cell, to specialized cells, to a worm that has roughly 300 nerve cells, where 30 percent of its cells are nerve cells, to a creature like us or like a blue whale that has 100 billion, even more, nerve cells. And so based on behavioral evidence and based on the underlying neuroscience, we believe that as these creatures become more complex, they are better adapted to their particular ecological niche, and they become more conscious, partly because their brain grows. And we believe consciousness has to do with the brain, unlike the ancients: almost every culture thought that consciousness and intelligence have to do with your heart. And you still see that today. You say, honey, I love you with all my heart. But what you should actually say is, no, honey, I love you with all my lateral hypothalamus. And for Valentine's Day, you should give your sweetheart a hypothalamus-shaped piece of chocolate and not a heart-shaped chocolate. Anyway, so we still have this language, but now we believe it's the brain. And so we see brains of different complexity and we think, well, they have different levels of consciousness. They're capable of different experiences. But now we confront a world where we're beginning to engineer intelligence, and it's radically unclear whether the intelligence we're engineering has anything to do with consciousness and whether it can experience anything. Because fundamentally, what's the difference? Intelligence is about function.
Intelligence, no matter exactly how you define it, is sort of adaptation to new environments, being able to learn and quickly understand, you know, the setup of this and what's going on and who are the actors and what's going to happen next. That's all about function. Consciousness is not about function. Consciousness is about being. It's in some sense much more fundamental. You can see this in several cases. You can see it, for instance, in the case of the clinic, when you're dealing with patients who, let's say, had a stroke or were in a traffic accident, et cetera. They're pretty much immobile. Terri Schiavo, you may have heard of her; historically, she was a person here in the 90s in Florida. Her heart stood still. She was reanimated, and then for the next 14 years, she was in what's called a vegetative state. There are thousands of people in a vegetative state. So they're, you know, they're like this. Occasionally, they open their eyes for two, three, four, five, six, eight hours and then close their eyes. They have sleep-wake cycles. Occasionally, they have behaviors, they do like, you know, but there's no way that you can establish a lawful relationship between what you say, or the doctor says, or the mom says, and what the patient does. So there isn't any behavior, yet in some of these people there is still experience. You can design and build brain-machine interfaces where you can see they still experience something. And of course, there are these cases of the locked-in state. There's this famous book called The Diving Bell and the Butterfly, where you had an editor, a French editor; he had a stroke in the brainstem, unable to move except for vertical eye movements. He could just move his eyes up and down. And he dictated an entire book. And some people even lose this at the end. All the evidence seems to suggest that they're still in there. In this case, you have no behavior, but you have consciousness.
The second case is: tonight, like all of us, you're going to go to sleep, close your eyes, you go to sleep, you will wake up inside your sleeping body, and you will have conscious experiences. They are different from everyday experience. You might fly, you might not be surprised that you're flying, you might meet a long-dead pet, a childhood dog, and you're not surprised that you're meeting them. But you have conscious experiences of love, of hate; they can be very emotional. Your body during this state, typically it's the REM state, sends an active signal to your motor neurons to paralyze you. It's called atonia. Because if you don't have that, like some patients, what do you do? You act out your dreams. You get, for example, REM sleep behavior disorder, which is bad juju to get. Okay. The third case is pure experience. So I recently had this, what some people call a mystical experience. I went to Singapore and went into a flotation tank. Yeah. All right. So this is a big tub filled with water that's body temperature, and Epsom salt. You strip completely naked, you lie inside of it, you close the lid. Darkness. Complete darkness, soundproof. So very quickly, you become bodiless, because you're floating and you're naked. You have no rings, no watch, no nothing. You don't feel your body anymore. There's no sound: soundless. There's no photon: sightless. Timeless, because after a while, early on you actually hear your heart, but then you sort of adapt to that, and then sort of the passage of time ceases. Yeah. And if you train yourself, like in meditation, not to think. Early on you think a lot. It's a little bit spooky. You feel somewhat uncomfortable, or you think, well, I'm going to get bored. But if you try not to think actively, you become mindless. There you are: bodiless, timeless, soundless, sightless, mindless, but you're in a conscious experience. You're not asleep. Yeah. You're not asleep. You are a pure being. There isn't any function.
You aren't doing any computation. You're not remembering. You're not projecting. You're not planning. Yet you are fully conscious. You're fully conscious. There's something going on there. It could be just a side effect. So what is the... You mean epiphenomenal. So what's the selection, meaning why, what is the function of you being able to lie in this sensory deprivation tank and still have a conscious experience? Evolutionarily? Evolutionarily. Obviously we didn't evolve with flotation tanks in our environment. I mean, biology is notoriously bad at answering why questions, teleological questions. Why do we have two eyes? Why don't we have four eyes like some creatures, or three eyes or something? Well, there's probably a function to that, but we're not very good at answering those questions. We can speculate endlessly, whereas biology, or science, is very good at mechanistic questions. Why is there charge in the universe, right? We find ourselves in a universe where there are positive and negative charges. Why? Why does quantum mechanics hold? You know, why doesn't some other theory hold? Why quantum mechanics holds in our universe is very unclear. So teleological questions, why questions, are difficult to answer. There's some relationship between complexity, brain processing power, and consciousness. However, in these cases, in these three examples I gave, one is an everyday experience at night, the other one is trauma, and the third one, in principle, everybody can have these sorts of mystical experiences. You have a dissociation of function, of intelligence, from consciousness. You caught me asking a why question. Let me ask a question that's not a why question. You're giving a talk later today on the Turing test for intelligence and consciousness, drawing lines between the two. So is there a scientific way to say there's consciousness present in this entity or not?
And to anticipate your answer, because you will also... there's a neurobiological answer. So we can test the human brain, but if you take a machine brain that you don't yet know tests for, how would you even begin to approach a test of whether there's consciousness present in this thing? Okay. That's a really good question. So let me take it in two steps. So as you point out, for humans, let's just stick with humans: there's now a test called zap and zip. It's a procedure where you ping the brain using transcranial magnetic stimulation, you look at the electrical reverberations, essentially, using EEG, and then you can measure the complexity of this brain response. And you can do this in awake people and in normal people asleep; you can do it in awake people and then anesthetize them; you can do it in patients. And it has essentially a hundred percent accuracy: in all those cases where it's clear the patient or the person is either conscious or unconscious, the complexity is correspondingly either high or low. And then you can adapt these techniques to similar creatures like monkeys and dogs and mice that have very similar brains. Now, of course, you point out that may not help you with a machine, because it doesn't have a cortex, and if I send a magnetic pulse into my iPhone or my computer, it's probably going to break something. So we don't have that. So ultimately, we need a theory of consciousness. We can't just rely on our intuition. Our intuition is, well, yeah, if somebody talks, they're conscious. However, then there are all these patients: children, babies don't talk, right? But we believe that babies also have conscious experiences, right? And then there are all these patients I mentioned, and they don't talk. When you dream, you can't talk, because you're paralyzed. So what we ultimately need, since we can't just rely on our intuition, is a theory of consciousness that tells us: what is it about a piece of matter?
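The "zip" half of this zap-and-zip procedure refers to compressing the binarized EEG response: the published measure, the perturbational complexity index, is built on Lempel-Ziv complexity, roughly a count of how many distinct patterns the response contains. A minimal illustrative sketch of that underlying count (not the actual clinical pipeline, which involves TMS-evoked potentials, source modeling, and normalization) might look like:

```python
def lempel_ziv_complexity(s: str) -> int:
    """Number of distinct phrases in an LZ76-style parsing of a binary string.

    Illustrative sketch only: the perturbational complexity index applies a
    measure of this kind to the binarized spatiotemporal EEG response to a
    TMS pulse, then normalizes it.
    """
    i, phrases = 0, 0
    n = len(s)
    while i < n:
        length = 1
        # Extend the current phrase while it already occurs earlier in the string.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A stereotyped, repetitive response (think deep anesthesia) compresses well...
print(lempel_ziv_complexity("0000000000000000"))  # 2
# ...while a differentiated response (think wakefulness) yields many phrases.
print(lempel_ziv_complexity("1001111011000010"))  # 6
```

High complexity of the evoked response tracks conscious states and low complexity unconscious ones, which is what gives the procedure its reported accuracy in clear-cut cases.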
What is it about a piece of highly excitable matter, like the brain or like a computer, that gives rise to conscious experience? None of us believes anymore in the old story: it's a soul, right? That used to be the most common explanation, that most people accepted, and still a lot of people today believe, well, God endowed only us with a special thing that animals don't have. Rene Descartes famously said a dog, if you hit it with your carriage, may yell, may cry, but it doesn't have this special thing. It doesn't have the magic, the magic soul. It doesn't have res cogitans, the thinking substance, the soul. Now we believe that isn't the case anymore. So what is the difference between brains and these guys, silicon? In particular, once their behavior matches. So if you have Siri or Alexa 20 years from now, and she can talk just as well as any possible human, what grounds do you have to say she's not conscious? In particular if she says, as of course she will, well, of course I'm conscious. You ask her, how are you doing? And she'll say, well, you know, she'll generate some answer; of course she'll behave like a person. Now there are several differences. One is, so this relates to the hard problem: why is consciousness a hard problem? It's because it's subjective, right? Only I have it; only I have direct experience of my own consciousness. I don't have experience of your consciousness. Now, I assume, as a sort of Bayesian person who believes in probability theory and all of that, I can do an abduction to the best available explanation. I infer your brain is very similar to mine. If I put you in a scanner, your brain is roughly going to behave the same way as mine does. If I give you this muesli and ask you how it tastes, you tell me things that I would also say, more or less, right? So I infer, based on all of that, that you're conscious.
Now with a machine, I can't do that. So there I really need a theory that tells me: what is it about any system, this or this, that makes it conscious? We have such a theory. Yes. So the integrated information theory. But let me first, maybe as an introduction for people who are not familiar: you talk a lot about panpsychism. Can you describe what physicalism versus dualism is? You mentioned the soul; what is the history of that idea? What is the idea of panpsychism, or really the debate out of which panpsychism can emerge, of dualism versus physicalism? Or do you not see panpsychism as fitting into that? No, you can argue there's some... okay, so let's step back. So panpsychism is a very ancient belief that's been around... I mean, Plato and Aristotle talk about it, modern philosophers talk about it. Of course, in Buddhism, the idea is very prevalent. I mean, there are different versions of it. One version says everything is ensouled: everything, rocks and stones and dogs and people and forests and iPhones, all of it, right? All matter is ensouled. That's sort of one version. Another version is that all biology, all creatures, small or large, from a single cell to a giant sequoia tree, feel like something. This one I think is somewhat more realistic. So there are different versions. What do you mean by feel like something? Have feelings, have some kind of... it feels like something. It may well be possible that it feels like something to be a paramecium. I think it's pretty likely it feels like something to be a bee or a mouse or a dog. Sure. So, okay. So you can see that's also... so panpsychism is very broad. And some people, for example Bertrand Russell, tried to advocate this idea, it's called Russellian monism, that panpsychism is really physics viewed from the inside.
So the idea is that physics is very good at describing relationships among objects, like charges, or like gravity, right? You know, it describes the relationship between curvature and mass distribution, okay? That's the relationship among things. Physics doesn't really describe the ultimate reality itself. It's just relationships among, you know, quarks and all this other stuff, from like a third person observer. Yes. Yes. Yes. And consciousness is what physics feels like from the inside. So my conscious experience is the way the physics of my brain, particularly my cortex, feels from the inside. And so if you are a paramecium, you've got to remember, you say paramecium, well, that's a pretty dumb creature. It is, but it already has a billion different molecules, probably, you know, 5,000 different proteins, assembled in a highly, highly complex system that no single person, no computer system so far on this planet, has ever managed to accurately simulate. Its complexity vastly escapes us. Yes. And it may well be that that little thing feels like a tiny bit. Now, it doesn't have a voice in the head like me. It doesn't have expectations. You know, it doesn't have all these complex things, but it may well feel like something. Yeah. So this is really interesting. Can we draw some lines and maybe try to understand the difference between life, intelligence, and consciousness? How do you see all of those? If you had to define what is a living thing, what is a conscious thing, and what is an intelligent thing, do those intermix for you, or are they totally separate? Okay. So A, that's a question that we don't have a full answer to. A lot of the stuff we're talking about today is full of mysteries, and fascinating ones, right? For example, you can go to Aristotle, who's probably the most important scientist and philosopher who's ever lived, certainly in Western culture. He had this idea, it's called hylomorphism, and it's quite popular these days, that there are different forms of soul.
The soul is really the form of something. He says all biological creatures have a vegetative soul; that's the life principle. Today we think we understand something about it: it's biochemistry and nonlinear thermodynamics. Then he said only animals and humans also have a sensitive soul, or an appetitive soul: they can see, they can smell, and they have drives — they want to reproduce, they want to eat, et cetera. And then only humans have what he called a rational soul, okay? And that idea then made it into Christendom, and the rational soul is the one that lives forever. He was very unclear about this; different readings of Aristotle differ on whether he believed the rational soul was immortal or not. I think he probably didn't. But then, of course, that made it through Plato into Christianity, and the soul became immortal and became the connection to God. So you ask me, essentially, about our modern conception of these three — Aristotle would have called them different forms. Life: we think we know something about it, at least life on this planet, although we don't understand how it originated, and it's been difficult to rigorously pin down. You see this in modern definitions of death. In fact, right now there's a conference ongoing, again, that tries to define legally and medically what death is. It used to be very simple: death is when you stop breathing and your heart stops beating — you're dead, totally uncontroversial. If you're unsure, you wait another ten minutes; if the patient doesn't breathe, he's dead. Well, now we have ventilators, we have heart pacemakers, so it's much more difficult to define what death is. Typically, death is defined as the end of life, and life is defined as what comes before death. So we don't really have very good definitions. Intelligence: we don't have a rigorous definition either. We know somewhat how to measure it — it's called IQ, or the g factor, right?
And we're beginning to build it in a narrow sense, right? Like AlphaGo and Watson, and Google cars and Uber cars and all of that — it's still narrow AI, and some people are thinking about artificial general intelligence. But roughly, as we said before, it's something to do with the ability to learn and to adapt to new environments. That is, as I said, also radically different from experience. And it's very unclear: if you build a machine that has AGI, it's not at all a priori clear that this machine will have consciousness. It may or may not. So let's ask it the other way: if you were to try to build an artificial general intelligence system, do you think figuring out how to build artificial consciousness would help you get to AGI? Or, put another way, do you think intelligence requires consciousness? In humans it goes hand in hand. In humans, or I think in biology generally, consciousness and intelligence go hand in hand, because the brain evolved to be highly complex, and complexity — via the theory, integrated information theory — is ultimately what is closely tied to consciousness. Ultimately it's causal power upon itself. And so in evolved systems, they go together. In artificial systems, particularly in digital machines, they do not go together. And if you ask me point blank: is Alexa 20.0 in the year 2040, when she can easily pass every Turing test, conscious? No — even if she claims she's conscious. In fact, you could do an even more radical version of this thought experiment. You can build a computer simulation of the human brain, which is what Henry Markram in the Blue Brain Project, or the Human Brain Project in Switzerland, is trying to do. Let's grant them all the success. So in ten years we have this perfect simulation of the human brain. Every neuron is simulated, and it has a larynx and motor neurons, it has a Broca's area, and of course it'll talk and say: hi, I just woke up.
I feel great. Okay — even that computer simulation, which can in principle map onto your brain, will not be conscious. Why? Because it simulates; there's a difference between the simulated and the real. It simulates the behavior associated with consciousness. If it's done properly, it will have all the intelligence of the particular person it's simulating. But simulating intelligence is not the same as having conscious experiences. And I'll give you a really nice metaphor that engineers and physicists typically get. I can write down Einstein's field equations, the ten equations that describe the link in general relativity between curvature and mass. I can run this on my laptop to predict that the black hole at the center of our galaxy is so massive that it twists spacetime around it so that no light can escape: it's a black hole. But funny — have you ever wondered why this computer simulation doesn't suck me in? It simulates gravity, but it doesn't have the causal power of gravity. That's a huge difference. So it's the difference between the real and the simulated, just like it doesn't get wet inside a computer when the computer runs code that simulates a storm. And so in order to have artificial consciousness, you have to give it the same causal power as the human brain. You have to build a so-called neuromorphic machine, with hardware that is very similar to the human brain, not a digital, clocked von Neumann computer. So just to clarify: you think consciousness is not required to create human-level intelligence. It seems to accompany intelligence in the human brain, but for machines it need not. That's correct. So maybe, just because this is AGI, let's dig in a little bit on what we mean by intelligence. One thing is the g factor, these kinds of IQ tests of intelligence. But maybe another way to put it: in 2040, 2050, people will have a Siri that is just really impressive.
Do you think people will say Siri is intelligent? Yes. Intelligence is this amorphous thing. To be intelligent, it seems like you have to have some kind of connection with other human beings, in the sense that you have to impress them with your intelligence. You have to somehow operate in this world full of humans, and for that, it feels like there has to be something like consciousness. So do you think you can have just the world's best NLP system — natural language understanding and generation — and that will get us to happily say: you know what, we've created an AGI? I don't know about happy, but yes, I do believe we can get what we call high-level functional intelligence — particularly this g, this fluid-like intelligence that we cherish, particularly at a place like MIT — in machines. I see a priori no reason why not, and I see a lot of reason to believe it's going to happen over the next 30 or 50 years. So for beneficial AI — for creating an AI system, you mentioned ethics, that is exceptionally intelligent but also aligns its values with our values as humanity — do you think it needs consciousness? Yes. I think there is a very good argument that if we're concerned about AI and the threat of AI — à la Nick Bostrom, an existential threat — then it matters to have an intelligence that has empathy. Why do we find abusing a dog — why do most of us find abusing any animal — abhorrent? Because we have this thing called empathy, which, if you look at the Greek, really means feeling with: em-pathos, I have feeling with you. I see somebody else suffer that isn't even my conspecific — it's not a person, it's not my wife or my kids, it's a dog — but naturally most of us, not all of us, but most of us will feel empathic.
And so it may well be in the long-term interest of the survival of Homo sapiens sapiens that if we do build AGI, and it really becomes very powerful, it has an empathic response and doesn't just exterminate humanity. So, as part of the full conscious experience — to create a consciousness, artificial or like our human consciousness — maybe we'll get into Nietzsche and so on later, but do you think fear and suffering are essential to have consciousness? Do you have to have the full range of experience to have a system that has experience, or can you have a system that only has very particular kinds of very positive experiences? Look, in principle you can — people have done this in the rat, where you implant an electrode in the hypothalamus, the pleasure center of the rat, and the rat stimulates itself above and beyond anything else. It doesn't care about food or sex or drink anymore; it just stimulates itself, because it's such a pleasurable feeling. I guess it's like an orgasm you just have all day long. And so a priori I see no reason why you need a great variety. Now, clearly, to survive that wouldn't work, right? But if I engineered it artificially, I don't think you need a great variety of conscious experience. You could have just pleasure, or just fear. It might be a terrible existence, but I think that's possible, at least on conceptual, logical grounds. Because for any real creature, even an artificially engineered one, you want to give it fear — the fear of extinction that we all have. And you also want to give it positive, appetitive states — states you want to encourage the machine toward, because they give the machine positive feedback. So you mentioned panpsychism; to jump back a little bit: everything having some kind of mental property — how do you go from there to something like human consciousness? If everything has some element of consciousness, is there something special about human consciousness?
So it's not everything. The form of panpsychism I think about doesn't ascribe consciousness to just anything — like this spoon, or my liver. However, the theory — integrated information theory — does say that even a system that looks relatively simple from the outside, if it has this internal causal power, does feel like something. A priori, the theory doesn't say anything about what's special about humans. Biologically, we know the one thing that's special about humans is that we speak, and we have an overblown sense of our own importance. We believe we're exceptional, that we're just God's gift to the universe. But behaviorally, the main things we have: we can plan over the long term, and we have language, and that gives us an enormous amount of power, and that's why we are currently the dominant species on the planet. So you mentioned God. You grew up in a devout Roman Catholic family, and with consciousness you're exploring some really deep, fundamental human things that religion also touches on. Where does religion fit into your thinking about consciousness? You've grown throughout your life and changed your views on religion, as far as I understand. Yeah — I'm not a Roman Catholic anymore. I don't believe in the God I was educated to believe in, who sits somewhere, and in the fullness of time I'll be united with him in some sort of everlasting bliss. I just don't see any evidence for that. Look, the night is large and full of wonders. There are many things that I don't understand, many things that we as a culture don't understand — look, we don't even understand more than 4% of the universe: dark matter, dark energy, we have no idea what it is. Maybe it's lost socks, what do I know? So all I can tell you is that my current religious or spiritual sentiment is much closer to some form of Buddhism — without the reincarnation, unfortunately; there's just no evidence for reincarnation.
So can you describe the way Buddhism sees the world a little bit? Well, I've spent several meetings with the Dalai Lama, and what always impressed me about him — unlike, for example, the Pope or some cardinal — is that he always emphasized minimizing the suffering of all creatures. From the very beginning, they look at suffering in all creatures, not just in people but in everybody — it's universal, and of course by degrees: an animal is in general less capable of suffering than a normally developed human. And they think consciousness pervades this universe, and they have these techniques — you can think of them as mindfulness, meditation, et cetera — that try to access what they claim is this more fundamental aspect of reality. I'm not sure it's more fundamental. The way I think about it, there's the physical, and then there's this inside view, consciousness; those are the two aspects, and that's the only thing I have access to in my life. And you've got to remember: my conscious experience and your conscious experience come prior to anything you know about physics, prior to knowledge about the universe and atoms and superstrings and molecules and all of that. The only thing you are directly acquainted with is this world populated with things — images and sounds in your head, and touches, and all of that. I actually have a question. It sounds like you have a rich life — you talk about rock climbing, and it seems like you really love literature — and consciousness is all about experiencing things. So do you think that has helped your research on this topic? Yes, particularly the various states. For example, when you do rock climbing — or now I do crew rowing, and I bike every day — you can get into this thing called the zone, and I've always wondered about it, particularly with respect to consciousness, because it's a strangely addictive state.
Once people have experienced it, they want to keep going back to it, and you wonder: what is so addictive about it? I think it's the experience of something close to pure experience, because in the zone you're not conscious of the inner voice anymore. There's always an inner voice nagging you — you have to do this, you have to do that, you have to pay your taxes, you have to fight with your ex — all of those things are always there. But when you're in the zone, all of that is gone, and you're just in this wonderful state where you're fully out in the world. You're climbing, or you're rowing, or biking, or playing soccer, or whatever you're doing, and you're all action — or, in the case of pure experience, not action at all — but in both cases you touch some basic part of conscious existence that is so basic and so deeply satisfying. I think you touch the root of being. That's really what you're touching there: you're getting close to the root of being. And that's very different from intelligence. So what do you think about the simulation hypothesis, simulation theory — the idea that we all live in a computer simulation? Rapture for nerds. Rapture for nerds. I think it's as likely as the hypothesis that engaged hundreds of scholars for many centuries: are we all just existing in the mind of God? This is just a modern version of it; it's equally plausible. People love talking about these sorts of things. I know there are books written about the simulation hypothesis; if that's what people want to do, that's fine, but it seems rather esoteric — it's never testable. So it's not useful for you to think in those terms. Maybe connecting to the question of free will, which you've talked about: I vaguely remember you saying that the idea that there's no free will makes you very uncomfortable.
So what do you think about free will — from a physics perspective, from a consciousness perspective, how does it all fit? Okay, so from the physics perspective, leaving aside quantum mechanics, we believe we live in a fully deterministic world, right? But then comes, of course, quantum mechanics, so now we know that certain things are in principle not predictable — which, as you said, I prefer, because the idea that we're just acting out the initial conditions of the universe, that doesn't… It's not a romantic notion. Certainly not. Now, when it comes to consciousness, I think we do have a certain freedom. We are much more constrained by physics, of course, and by our past, and by our own conscious desires, and by what our parents told us and what our environment tells us. We all know that, right? There are hundreds of experiments that show how we can be influenced. But in the final analysis, when you make a life decision — and I'm talking really about critical decisions, where you really deliberate: should I marry, should I go to this school or that school, should I take this job or that job, should I cheat on my taxes or not? — under those conditions, you are as free as you can be. When you bring your entire being, your entire conscious being, to that question and try to analyze it under all the various conditions, and then you make a decision, you are as free as you can ever be. That, I think, is what free will is. It's not a will that's totally free to do anything it wants; that's not possible. Right. So, as Jack mentioned, you actually write a blog about books you've read — amazing books, from (I'm Russian) Bulgakov, to Neil Gaiman, Carl Sagan, Murakami. What is a book that early in your life transformed the way you saw the world, something that changed your life? Nietzsche, I guess — Thus Spoke Zarathustra — because he talks about some of these problems.
He was one of the first discoverers of the unconscious. This was a little bit before Freud, when the idea was in the air. He makes all these claims that people, under the guise — under the mask — of charity, are actually very uncharitable. So he is really the first discoverer of the great land of the unconscious, and that really struck me. And what do you think about the unconscious? What do you think about Freud, about these ideas? Just like dark matter in the universe — what's over there in that unconscious? A lot. Much more than we think. This is what a lot of the last hundred years of research has shown. So I think he was a genius — misguided toward the end, but he started out as a neuroscientist. He did studies on the lamprey and contributed to the neuron hypothesis, the idea that there are discrete units, what we now call nerve cells. And then he wrote about the unconscious, and I think it's true: there's lots of stuff happening there. You feel this particularly when you're in a relationship and it breaks asunder, right? Then you can have love and hate and lust and anger, all of it mixed in. And when you try to analyze yourself — why am I so upset? — it's very, very difficult to penetrate to those basements, those caverns in your mind, because the prying eyes of consciousness don't have access to them. But they're there, in the amygdala or lots of other places. They make you upset or angry or sad or depressed, and it's very difficult to actually uncover the reason. You can go to a shrink, you can talk with your friends endlessly, and you finally construct a story about why this happened, why you love her or don't love her or whatever. But you don't really know whether that's what actually happened, because you simply don't have access to those parts of the brain — and they're very powerful. Do you think that's a feature or a bug of our brain?
The fact that we have this deep, difficult-to-dive-into subconscious? I think it's a feature, because otherwise — look, like any other brain or nervous system or computer, we are severely bandwidth-limited. If everything I do, every emotion I feel, every eye movement I make had to be under the control of consciousness, I wouldn't be here. Early on, you have to be conscious when you learn things, like typing or riding a bike, but then you train up routines — I think they involve the basal ganglia and striatum. You train up different parts of your brain, and once you do it automatically, like typing, you can show that you do it much faster, without even thinking about it, because you've got these highly specialized — what Francis Crick and I called zombie agents — taking care of it, while your consciousness can worry about the abstract sense of the text you want to write. I think that's true for many, many things. But what about things like all the fights you had with an ex-girlfriend — things you would think are not useful — still lingering somewhere in the subconscious? That seems like a bug, that they would stick around there. You'd think it would be better if you could analyze them and get them out of the system, or just forget they ever happened. That seems a very buggy kind of design. Well, yeah — in general we don't have that ability, and that's probably functional. Unless it's extreme: there are cases of clinical dissociation, right, when people are heavily abused and completely repress the memory. But that doesn't happen in normal people. We don't have the ability to remove traumatic memories, and of course we suffer from that. On the other hand, if you had the ability to constantly wipe your memory, you'd probably do it to an extent that isn't useful to you. So yeah, it's a balance.
So, on the books — as Jack mentioned, and correct me if I'm wrong, but broadly speaking in academia and the different scientific disciplines, certainly in engineering, reading literature seems to be a rare pursuit. Maybe I'm wrong on this, but in my experience most people read much more technical text and do not escape to, or seek truth in, literature. It seems like you do. So what do you think is the value of literature — what does it add to the pursuit of scientific truth? Do you think it's useful for everybody? It gives you access to a much wider array of human experiences. How valuable do you think that is? Well, if you want to understand human nature, and nature in general, then I think you have to understand a wide variety of experiences — not just sitting in a lab, staring at a screen, having a face flashed at you for a hundred milliseconds, and pushing a button. That's what I used to do; that's what most psychologists do. There's nothing wrong with that, but you need to consider lots of other strange states. And literature is a shortcut for this. Well, yeah — that's what literature is all about: all sorts of interesting experiences that people have, the contingency of it, the fact that women experience the world differently, that black people experience the world differently. One way to experience that is to read all this different literature and try to find out. You see, everything is so relative. You read a book from 300 years ago, and they thought about certain problems very, very differently than we do today. We today, like any culture, think we know it all — that's common to every culture. Every culture at its heyday believes it knows it all. And then you realize: well, there are other ways of viewing the universe, and some of them may have a lot in their favor. So this is a question I wanted to ask about time scale, or scale in general.
When you, with IIT or in general, try to think about consciousness, we naturally think on human time scales, and about entities that are sized close to humans. Do you think of things that are much larger and much smaller as containing consciousness? And do you think of things that take eons to operate through their conscious cause-effect? That's a very good question. So I think a lot about small creatures, because experimentally a lot of people work on flies and bees, right? Most people just think they're automata — they're just bugs, for heaven's sake. But look at their behavior. Bees can recognize individual humans, and they have this very complicated way of communicating. If you've ever been involved in buying a house, or seen your parents do it, you know what an agonizing decision that is. And bees have to do that once a year, when they swarm in the spring. And they have this very elaborate procedure: they have scouts that go out to the individual sites and come back, and they have this dance — literally, a waggle dance — where they dance for several days and try to recruit other scouts. It's a very complicated decision process, and once they make a decision, the scouts warm up the entire swarm, and then it goes to one location. They don't go to fifty locations; they go to the one location that the scouts have agreed upon among themselves. That's awesome. And if you look at the circuit complexity, it's ten times denser than anything we have in our brains. Now, they only have a million neurons, but the neurons are amazingly complex. Complex behavior, very complicated circuitry — so there's no question they experience something. Their life is very different: they're tiny, and they only live — well, workers live maybe for two months.
So I think — and IIT tells you this — that in principle, the substrate of consciousness is the substrate that maximizes cause-effect power over all possible spatiotemporal grains. So when I think about, for example — do you know the science fiction story The Black Cloud? It's a classic by Fred Hoyle, the astronomer. He has this cloud interposing itself between the Earth and the Sun, leading to global cooling; it was written in the '50s. It turns out that, using a radio dish, they can communicate with it — it's actually an intelligent entity — and they convince it to move away. So here you have a radically different entity, and in principle IIT says: well, you can measure the integrated information — in principle, at least — and if the maximum of that occurs on a time scale of months rather than, as in us, a fraction of a second, then yes, it would experience life where each moment is a month rather than a fraction of a second, as in the human case. And so there may be forms of consciousness that we simply don't recognize for what they are, because they are so radically different from anything you and I are used to. Again, that's why it's good to read, or to watch science fiction movies — to think about this. Do you know Stanisław Lem, the Polish science fiction writer? He wrote Solaris, which was turned into a Hollywood movie. Yes. He has an engineering background, and his most interesting novel, from the '60s, is called The Invincible: a human mission comes to a planet and finds everything destroyed — the humans there were killed, machines took over, and there was this machine evolution, Darwinian evolution, which he describes very vividly. And in the end, the dominant machine organisms that survived were gigantic clouds of little hexagonal universal cellular automata.
This was written in the '60s. Typically they all lie on the ground, individually, by themselves, but in times of crisis they can communicate and assemble into gigantic nets, into clouds of trillions of these particles, and then they become hyperintelligent and can beat anything the humans throw at them. It's very beautiful and compelling: finally the humans leave the planet, simply unable to understand and comprehend this creature. They can say, well, either we nuke the entire planet and destroy it, or we just have to leave, because fundamentally it's alien — so alien from us and our ideas that we cannot communicate with it. Yeah — actually, in conversation Stephen Wolfram brought up the idea that you could already have these artificial general intelligences — super smart, or maybe even conscious, beings — in cellular automata; we just don't know how to talk to them. It's the language of communication that we don't know what to do with. So one view is that consciousness is only something you can measure — it's not conscious if you can't measure it. Well, you're making an ontological and an epistemic statement there. It's just like with the multiverse: it might be true, but I can't communicate with it, I can't have any knowledge of it — that's the epistemic argument. Right? So those are two different things. Look, another case is happening right now: people are building these mini-organoids. Do you know what those are? You can take stem cells from under your arm, put them in a dish, add four transcription factors, and induce them to grow into — well, large for a dish; they're a few millimeters — about half a million neurons that look like nerve cells. These are called mini-organoids, and at Harvard, at Stanford, everywhere, they're building them.
It may well be possible that they're beginning to feel like something, but we can't really communicate with them right now. So people are beginning to think about the ethics of this. So yes, he may be perfectly right — but whether they're conscious or not is one question, and how I would know is a totally separate question. Those are two different things. If you could give advice to a young researcher dreaming of understanding or creating human-level intelligence or consciousness, what would you say? Just follow your dreams. Read widely. No, I mean, which discipline — what is the pursuit they should take on? Is it neuroscience? Is it computational cognitive science? Is it philosophy? Is it computer science or robotics? Okay, so the only known systems that have high-level intelligence are Homo sapiens. So if you want to build it, it's probably good to continue to study closely what humans do. So cognitive neuroscience — somewhere between cognitive neuroscience on the one hand, some philosophy of mind, and then AI and computer science. Look at all the original ideas in neural networks: they all came from neuroscience, right? Whether it's Minsky building the SNARC, his reinforcement learning machine, or the early Hubel and Wiesel experiments at Harvard that then gave rise to neural networks and then multilayer networks. So it may well be — in fact, some people argue this — that to make the next big step in AI, we first have to realize the limits of deep convolutional networks. They can do certain things, but they can't really understand. I can't show them just one image. I can show you a single image of a pickpocket stealing a wallet from a purse, and you immediately know that's a pickpocket. A computer system would just say: well, it's a man, it's a woman, it's a purse — unless you train the machine by showing it a hundred thousand pickpockets, right?
So it doesn't have the easy understanding that you have. So some people make the argument that in order to take the next step — if you really want to build machines that understand the way you and I do — we have to go to psychology. We need to understand how we do it, and how our brains enable us to do it. And so being on that cusp is so exciting: to try to better understand our nature, and then to take some of those insights and build them in. So I think the most exciting work is somewhere at the interface of cognitive science, neuroscience, AI, computer science and philosophy of mind. Beautiful. Yeah — I'd say from the machine learning, computer science, computer vision perspective, many researchers kind of ignore the way the human brain works, or even psychology, or literature, or studying the brain. People like Josh Tenenbaum talk about bringing that in more and more. So — you've worked on some amazing stuff throughout your life. What's the thing that you're really excited about? What's the mystery that you would love to uncover in the near term, beyond all the mysteries that you're already surrounded by? Well, there's a structure called the claustrum. It's underneath our cortex — it's yay big, you have one on the left and one on the right, underneath the insula. It's very thin, like one millimeter, and it's embedded in wiring, in white matter, so it's very difficult to image. And it has connections to every cortical region. Francis Crick — in the last paper he ever wrote; he dictated corrections to it in the hospital the day he died — hypothesized that because it has this unique anatomy, getting input from every cortical area and projecting back to every cortical area, the function of this structure is similar (it's just a metaphor) to the role of a conductor in a symphony orchestra.
You have all the different cortical players. You have some that do motion, some that do theory of mind, some that infer social interaction, and color and hearing, all the different modules in cortex. But of course, what consciousness is, consciousness puts it all together into one package, right? The binding problem, all of that. And this is really the function, because it has relatively few neurons compared to cortex, but it receives input from all of them and it projects back to all of them. And so we're testing that right now. We've got this beautiful neuronal reconstruction in the mouse of crown-of-thorns neurons that are in the claustrum and that have the most widespread connections of any neuron I've ever seen. You have individual neurons that sit in the tiny claustrum, but then this single neuron has a huge axonal tree that covers both ipsi- and contralateral cortex, and using, you know, fancy tools like optogenetics, we're trying to turn those neurons on or off and study what happens in the mouse. So this thing is perhaps where the parts become the whole. Perhaps it's one of the structures, that's a very good way of putting it, where the individual parts turn into the whole of the conscious experience. Well, with that, thank you very much for being here today. Thank you very much. | Christof Koch: Consciousness | Lex Fridman Podcast #2
You've studied the human mind, cognition, language, vision, evolution, psychology, from child to adult, from the level of individual to the level of our entire civilization. So I feel like I can start with a simple multiple choice question. What is the meaning of life? Is it A. to attain knowledge as Plato said, B. to attain power as Nietzsche said, C. to escape death as Ernest Becker said, D. to propagate our genes as Darwin and others have said, E. there is no meaning as the nihilists have said, F. knowing the meaning of life is beyond our cognitive capabilities as Stephen Pinker said, based on my interpretation 20 years ago, and G. none of the above. I'd say A. comes closest, but I would amend that to C. to attaining not only knowledge but fulfillment more generally, that is life, health, stimulation, access to the living cultural and social world. Now this is our meaning of life. It's not the meaning of life if you were to ask our genes. Their meaning is to propagate copies of themselves, but that is distinct from the meaning that the brain that they lead to sets for itself. So to you knowledge is a small subset or a large subset? It's a large subset, but it's not the entirety of human striving because we also want to interact with people. We want to experience beauty. We want to experience the richness of the natural world, but understanding what makes the universe tick is way up there. For some of us more than others, certainly for me that's one of the top five. So is that a fundamental aspect? Are you just describing your own preference or is this a fundamental aspect of human nature is to seek knowledge? In your latest book you talk about the power, the usefulness of rationality and reason and so on. Is that a fundamental nature of human beings or is it something we should just strive for? Both. We're capable of striving for it because it is one of the things that make us what we are, homo sapiens, wise men. 
We are unusual among animals in the degree to which we acquire knowledge and use it to survive. We make tools. We strike agreements via language. We extract poisons. We predict the behavior of animals. We try to get at the workings of plants. And when I say we, I don't just mean we in the modern West, but we as a species everywhere, which is how we've managed to occupy every niche on the planet, how we've managed to drive other animals to extinction. And the refinement of reason in pursuit of human wellbeing, of health, happiness, social richness, cultural richness, is our main challenge in the present. That is, using our intellect, using our knowledge to figure out how the world works, how we work, in order to make discoveries and strike agreements that make us all better off in the long run. Right. And you do that almost undeniably and in a data driven way in your recent book, but I'd like to focus on the artificial intelligence aspect of things, and not just artificial intelligence, but natural intelligence too. So 20 years ago, in your book How the Mind Works, you conjectured, again, am I right to interpret things? You can correct me if I'm wrong. But you conjectured that human thought in the brain may be a result of a massive network of highly interconnected neurons, so from this interconnectivity emerges thought. Compared to artificial neural networks, which we use for machine learning today, is there something fundamentally more complex, mysterious, even magical about the biological neural networks, versus the ones we've been starting to use over the past 60 years and that have come to success in the past 10? There is something a little bit mysterious about the human neural networks, which is that each one of us who is a neural network knows that we ourselves are conscious. Conscious not in the sense of registering our surroundings or even registering our internal state, but in having subjective, first-person, present-tense experience.
That is, when I see red, it's not just different from green, but there's a redness to it that I feel. Whether an artificial system would experience that or not, I don't know and I don't think I can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally indistinguishable from a human, would we attribute consciousness to it, or ought we to attribute consciousness to it? And that's something that's very hard to know. But putting that aside, putting aside that largely philosophical question, the question is, is there some difference between the human neural network and the ones that we're building in artificial intelligence that will mean that we're, on the current trajectory, not going to reach the point where we've got a lifelike robot indistinguishable from a human, because the way their so-called neural networks are organized is different from the way ours are organized? I think there's overlap, but I think there are some big differences, that current neural networks, current so-called deep learning systems, are in reality not all that deep. That is, they are very good at extracting high-order statistical regularities, but most of the systems don't have a semantic level, a level of actual understanding of who did what to whom, why, where, how things work, what causes what else. Do you think that kind of thing can emerge as it does? So artificial neural networks are much smaller, in the number of connections and so on, than the current human biological networks, but do you think, sort of to go to consciousness, or to go to this higher level semantic reasoning about things, do you think that can emerge with just a larger network, with a more richly, weirdly interconnected network? I would separate out consciousness, because I don't know that consciousness is even a matter of complexity. A really weird one. Yeah, you could sensibly ask the question of whether shrimp are conscious, for example, they're not terribly complex, but maybe they feel pain.
So let's just put that part of it aside. But I think sheer size of a neural network is not enough to give it structure and knowledge, but if it's suitably engineered, then why not? That is, we're neural networks, natural selection did a kind of equivalent of engineering of our brains. So I don't think there's anything mysterious in the sense that no system made out of silicon could ever do what a human brain can do. I think it's possible in principle. Whether it'll ever happen depends not only on how clever we are in engineering these systems, but whether we even want to, whether that's even a sensible goal. That is, you can ask the question, is there any locomotion system that is as good as a human? Well, we kind of want to do better than a human ultimately in terms of legged locomotion. There's no reason that humans should be our benchmark. They're tools that might be better in some ways. It may be that we can't duplicate a natural system because at some point it's so much cheaper to use a natural system that we're not going to invest more brainpower and resources. So for example, we don't really have an exact substitute for wood. We still build houses out of wood. We still build furniture out of wood. We like the look. We like the feel. It has certain properties that synthetics don't. It's not that there's anything magical or mysterious about wood. It's just that the extra steps of duplicating everything about wood is something we just haven't bothered because we have wood. Likewise, say cotton. I'm wearing cotton clothing now. It feels much better than polyester. It's not that cotton has something magic in it. It's not that we couldn't ever synthesize something exactly like cotton, but at some point it's just not worth it. We've got cotton. 
Likewise, in the case of human intelligence, the goal of making an artificial system that is exactly like the human brain is a goal that probably no one is going to pursue to the bitter end, I suspect, because if you want tools that do things better than humans, you're not going to care whether it does something like humans. So for example, diagnosing cancer or predicting the weather, why set humans as your benchmark? But in general, I suspect you also believe that even if the human should not be a benchmark and we don't want to imitate humans in our systems, there's a lot to be learned about how to create an artificial intelligence system by studying the human. Yeah, I think that's right. In the same way that to build flying machines, we want to understand the laws of aerodynamics, including as they apply to birds, but not mimic the birds, but they're the same laws. You have a view on AI, artificial intelligence, and safety that, from my perspective, is refreshingly rational, or perhaps more importantly, has elements of positivity to it, which I think can be inspiring and empowering as opposed to paralyzing. For many people, including AI researchers, the eventual existential threat of AI is obvious, not only possible, but obvious. And for many others, including AI researchers, the threat is not obvious. So Elon Musk is famously in the highly-concerned-about-AI camp, saying things like AI is far more dangerous than nuclear weapons, and that AI will likely destroy human civilization. So in February, you said that if Elon was really serious about the threat of AI, he would stop building self driving cars, which he's doing very successfully as part of Tesla. Then he said, wow, if even Pinker doesn't understand the difference between narrow AI, like a car, and general AI, when the latter literally has a million times more compute power and an open-ended utility function, humanity is in deep trouble.
So first, what did you mean by the statement that Elon Musk should stop building self driving cars if he's deeply concerned? Not the first time that Elon Musk has fired off an intemperate tweet. Well, we live in a world where Twitter has power. Yes. Yeah, I think there are two kinds of existential threat that have been discussed in connection with artificial intelligence, and I think that they're both incoherent. One of them is a vague fear of AI takeover, that just as we subjugated animals and less technologically advanced peoples, so if we build something that's more advanced than us, it will inevitably turn us into pets or slaves or domesticated animal equivalents. I think this confuses intelligence with a will to power. It so happens that in the intelligent system we are most familiar with, namely Homo sapiens, we are products of natural selection, which is a competitive process, and so bundled together with our problem-solving capacity are a number of nasty traits like dominance and exploitation and maximization of power and glory and resources and influence. There's no reason to think that sheer problem-solving capability will set that as one of its goals. Its goals will be whatever we set its goals as, and as long as someone isn't building a megalomaniacal artificial intelligence, then there's no reason to think that it would naturally evolve in that direction. Now, you might say, well, what if we gave it the goal of maximizing its own power source? That's a pretty stupid goal to give an autonomous system. You don't give it that goal. I mean, that's just self-evidently idiotic. So if you look at the history of the world, there have been a lot of opportunities where engineers could instill in a system destructive power, and they chose not to, because that's the natural process of engineering. Well, except for weapons.
I mean, if you're building a weapon, its goal is to destroy people, and so I think there are good reasons to not build certain kinds of weapons. I think building nuclear weapons was a massive mistake. You do. So maybe pause on that because that is one of the serious threats. Do you think that it was a mistake in a sense that it should have been stopped early on? Or do you think it's just an unfortunate event of invention that this was invented? Do you think it's possible to stop? I guess is the question. It's hard to rewind the clock because of course it was invented in the context of World War II and the fear that the Nazis might develop one first. Then once it was initiated for that reason, it was hard to turn off, especially since winning the war against the Japanese and the Nazis was such an overwhelming goal of every responsible person that there's just nothing that people wouldn't have done then to ensure victory. It's quite possible if World War II hadn't happened that nuclear weapons wouldn't have been invented. We can't know, but I don't think it was by any means a necessity, any more than some of the other weapon systems that were envisioned but never implemented, like planes that would disperse poison gas over cities like crop dusters or systems to try to create earthquakes and tsunamis in enemy countries, to weaponize the weather, weaponize solar flares, all kinds of crazy schemes that we thought the better of. I think analogies between nuclear weapons and artificial intelligence are fundamentally misguided because the whole point of nuclear weapons is to destroy things. The point of artificial intelligence is not to destroy things. So the analogy is misleading. So there's two artificial intelligence you mentioned. The first one I guess is highly intelligent or power hungry. Yeah, it's a system that we design ourselves where we give it the goals. Goals are external to the means to attain the goals. 
If we don't design an artificially intelligent system to maximize dominance, then it won't maximize dominance. It's just that we're so familiar with Homo sapiens, where these two traits come bundled together, particularly in men, that we are apt to confuse high intelligence with a will to power, but that's just an error. The other fear is that there will be collateral damage, that we'll give artificial intelligence a goal like make paper clips, and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper clips. We'll give it the goal of curing cancer, and it will turn us into guinea pigs for lethal experiments, or give it the goal of world peace, and its conception of world peace is no people, therefore no fighting, and so it will kill us all. Now I think these are utterly fanciful. In fact, I think they're actually self-defeating. They first of all assume that we're going to be so brilliant that we can design an artificial intelligence that can cure cancer, but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the process. And they assume that the system will be so smart that it can cure cancer, but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone. I think that the collateral damage scenario, the value alignment problem, is also based on a misconception. So one of the challenges, of course, is that we don't know how to build either system currently, or whether we're even close to knowing. Of course, those things can change overnight, but at this time, theorizing about it is very challenging in either direction. So that's probably at the core of the problem: without the ability to reason about the real engineering things here at hand, your imagination runs away with things. Exactly. But let me sort of ask, what do you think was the motivation, the thought process, of Elon Musk? I build autonomous vehicles, I study autonomous vehicles, I study Tesla Autopilot.
I think it is currently one of the greatest large-scale applications of artificial intelligence in the world. It has potentially a very positive impact on society. So how does a person who's creating this very good, quote unquote, narrow AI system also seem to be so concerned about this other, general AI? What do you think is the motivation there? What do you think is the thing? Well, you probably have to ask him, but he is notoriously flamboyant, impulsive, as we have just seen, to the detriment of his own goals and the health of the company. So I don't know what's going on in his mind. You probably have to ask him. But I don't think the distinction between special-purpose AI and so-called general AI is relevant, in that, in the same way that special-purpose AI is not going to do anything conceivable in order to attain a goal, all engineering systems are designed to trade off across multiple goals. When we built cars in the first place, we didn't forget to install brakes, because the goal of a car is to go fast. It occurred to people, yes, you want it to go fast, but not always. So you would build in brakes too. Likewise, if a car is going to be autonomous and we program it to take the shortest route to the airport, it's not going to take the diagonal and mow down people and trees and fences, because that's the shortest route. That's not what we mean by the shortest route when we program it. And that's just what an intelligent system is by definition. It takes into account multiple constraints. The same is true, in fact even more true, of so-called general intelligence. That is, if it's genuinely intelligent, it's not going to pursue some goal single-mindedly, omitting every other consideration and collateral effect. That's not artificial general intelligence. That's artificial stupidity. I agree with you, by the way, on the promise of autonomous vehicles for improving human welfare. I think it's spectacular.
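The shortest-route point above, that an engineered system trades off across multiple goals rather than pursuing one single-mindedly, can be sketched as a weighted cost function. All numbers and route names here are invented for illustration; real planners are far more elaborate, but the shape of the trade-off is the same.

```python
# Multi-objective route scoring: distance is only one term in the cost.
# A heavy safety weight makes the "shortest" diagonal through pedestrians
# lose to the longer legal route, by design.

def route_cost(distance_km, safety_violations, w_safety=1_000.0):
    """Weighted sum of competing objectives; safety dominates distance."""
    return distance_km + w_safety * safety_violations

candidates = {
    "diagonal through the park": route_cost(3.0, safety_violations=12),
    "legal road route": route_cost(4.5, safety_violations=0),
}

best = min(candidates, key=candidates.get)
print(best)  # -> legal road route
```

The design choice is in the weights: making `w_safety` overwhelmingly large encodes "not always" into the objective itself, which is what distinguishes "shortest route as we mean it" from "shortest route taken literally."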
And I'm surprised at how little press coverage notes that in the United States alone, something like 40,000 people die every year on the highways, vastly more than are killed by terrorists. And we spent a trillion dollars on a war to combat deaths by terrorism, about half a dozen a year. Whereas year in, year out, 40,000 people are massacred on the highways, which could be brought down to very close to zero. So I'm with you on the humanitarian benefit. Let me just mention that as a person who's building these cars, it is a little bit offensive to me to say that engineers would be clueless enough not to engineer safety into systems. I often stay up at night thinking about those 40,000 people that are dying. And everything I tried to engineer is to save those people's lives. So every new invention that I'm super excited about, in all the deep learning literature and CVPR conferences and NIPS, everything I'm super excited about is all grounded in making it safe and help people. So I just don't see how that trajectory can all of a sudden slip into a situation where intelligence will be highly negative. You and I certainly agree on that. And I think that's only the beginning of the potential humanitarian benefits of artificial intelligence. There's been enormous attention to what are we going to do with the people whose jobs are made obsolete by artificial intelligence, but very little attention given to the fact that the jobs that are going to be made obsolete are horrible jobs. The fact that people aren't going to be picking crops and making beds and driving trucks and mining coal, these are soul deadening jobs. And we have a whole literature sympathizing with the people stuck in these menial, mind deadening, dangerous jobs. If we can eliminate them, this is a fantastic boon to humanity. Now granted, you solve one problem and there's another one, namely, how do we get these people a decent income? 
But if we're smart enough to invent machines that can make beds and put away dishes and handle hospital patients, I think we're smart enough to figure out how to redistribute income, to apportion some of the vast economic savings to the human beings who will no longer be needed to make beds. Okay. Sam Harris says that it's obvious that eventually AI will be an existential risk. He's one of the people who says it's obvious. We don't know when, the claim goes, but eventually it's obvious. And because we don't know when, we should worry about it now. This is a very interesting argument in my eyes. So how do we think about timescale? How do we think about existential threats when we know so little about the threat? Unlike nuclear weapons, perhaps, this particular threat could happen tomorrow, right? But very likely it won't. Very likely it'd be a hundred years away. So do we ignore it? How do we talk about it? Do we worry about it? How do we think about those things? What is it? A threat that we can imagine. It's within the limits of our imagination, but not within our limits of understanding to accurately predict it. But what is the it that we're afraid of? Sorry, AI being the existential threat. AI. How? Like enslaving us or turning us into paperclips? I think the most compelling, from the Sam Harris perspective, would be the paperclip situation. Yeah. I mean, I just think it's totally fanciful. I mean, that is, don't build such a system. First of all, the code of engineering is that you don't implement a system with massive control before testing it. Now, perhaps the culture of engineering will radically change. Then I would worry, but I don't see any signs that engineers will suddenly do idiotic things, like putting an electric power plant in the control of a system that they haven't tested first.
All of these scenarios not only imagine an almost magically powered intelligence, given goals like cure cancer, which is probably an incoherent goal because there are so many different kinds of cancer, or bring about world peace. I mean, how do you even specify that as a goal? But the scenarios also imagine some degree of control of every molecule in the universe, which not only is itself unlikely, but we would not start to connect these systems to infrastructure without testing, as we would with any kind of engineering system. Now, maybe some engineers will be irresponsible, and we need legal and regulatory responsibility implemented so that engineers don't do things that are stupid by their own standards. But I've never seen enough of a plausible scenario of existential threat to devote large amounts of brainpower to forestalling it. So you believe in the power, en masse, of engineering, and of reason and science, as you argue in your latest book, to be the very thing that guides the development of new technology so it's safe and also keeps us safe. You know, granted the same culture of safety that currently is part of the engineering mindset for airplanes, for example. So yeah, I don't think that that should be thrown out the window, and that untested all-powerful systems should be suddenly implemented, but there's no reason to think they will be. And in fact, if you look at the progress of artificial intelligence, it's been, you know, it's been impressive, especially in the last 10 years or so, but the idea that suddenly there'll be a step function, that all of a sudden, before we know it, it will be all-powerful, that there'll be some kind of recursive self-improvement, some kind of foom, is also fanciful.
Certainly the technology that now impresses us, such as deep learning, requires training something on hundreds of thousands or millions of examples, and there aren't hundreds of thousands of problems of which curing cancer is a typical example. And so the kind of techniques that have allowed AI to advance in the last five years are not the kind that are going to lead to this fantasy of exponential, sudden self-improvement. I think it's kind of magical thinking. It's not based on our understanding of how AI actually works. Now give me a chance here. So you said fanciful, magical thinking. In his TED talk, Sam Harris says that thinking about AI killing all human civilization is somehow fun, intellectually. Now I have to say, as a scientist and engineer, I don't find it fun, but when I'm having a beer with my non-AI friends, there is indeed something fun and appealing about it. Like talking about an episode of Black Mirror, or considering, if a large meteor is headed towards Earth, we were just told a large meteor is headed towards Earth, something like this. Can you relate to this sense of fun? And do you understand the psychology of it? Yes, good question. I personally don't find it fun. I find it kind of actually a waste of time, because there are genuine threats that we ought to be thinking about, like pandemics, like cybersecurity vulnerabilities, like the possibility of nuclear war, and certainly climate change. You know, this is enough to fill many conversations. And I think Sam did put his finger on something, namely that there is a community, sometimes called the rationality community, that delights in using its brainpower to come up with scenarios that would not occur to mere mortals, to less cerebral people. So there is a kind of intellectual thrill in finding new things to worry about that no one has worried about yet.
I actually think, though, that not only is it a kind of fun that doesn't give me particular pleasure, but I think there can be a pernicious side to it, namely that you overcome people with such dread, such fatalism, that there are so many ways to die, to annihilate our civilization, that we may as well enjoy life while we can. There's nothing we can do about it. If climate change doesn't do us in, then runaway robots will. So let's enjoy ourselves now. We've got to prioritize. We have to look at threats that are close to certainty, such as climate change, and distinguish those from ones that are merely imaginable but with infinitesimal probabilities. And we have to take into account people's worry budget. You can't worry about everything. And if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness. Well, these problems are overwhelming, and the engineers are just going to kill us all, so let's either destroy the entire infrastructure of science and technology, or let's just enjoy life while we can. So there's a certain line of worry, and I'm worried about a lot of things in engineering. There's a certain line of worry that, when you cross it, becomes paralyzing fear as opposed to productive fear. And that's kind of what you're highlighting. Exactly right. We know that human effort is not well calibrated against risk, because a basic tenet of cognitive psychology is that perception of risk, and hence perception of fear, is driven by imaginability, not by data. And so we misallocate vast amounts of resources to avoiding terrorism, which kills on average about six Americans a year, with the one exception of 9/11. We invade countries, we invent entire new departments of government, with massive expenditure of resources and lives, to defend ourselves against a trivial risk.
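The misallocation just described can be put in back-of-the-envelope numbers. The two figures below are the approximate U.S. annual death counts cited in this conversation itself (terrorism, about six per year on average excluding 9/11; traffic, about 40,000 per year); everything else is simple arithmetic, a sketch of what calibrating a "worry budget" to data might look like.

```python
# Approximate U.S. annual deaths, as cited in the conversation.
annual_deaths = {"terrorism": 6, "traffic": 40_000}

# How many times deadlier is the under-discussed risk?
ratio = annual_deaths["traffic"] / annual_deaths["terrorism"]
print(f"traffic kills roughly {ratio:,.0f}x as many Americans per year")

# A worry budget proportional to expected harm, rather than imaginability.
total = sum(annual_deaths.values())
for cause, deaths in annual_deaths.items():
    print(f"{cause}: {deaths / total:.2%} of the combined annual toll")
```

On these numbers traffic accounts for nearly the entire combined toll, which is the point: a data-driven budget of concern looks very different from one driven by what is most vivid to imagine.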
Whereas guaranteed risks, one of which you mentioned, traffic fatalities, and even risks that are not here but are plausible enough to worry about, like pandemics, like nuclear war, receive far too little attention. In presidential debates, there's no discussion of how to minimize the risk of nuclear war. Lots of discussion of terrorism, for example. And so I think it's essential to calibrate our budget of fear, worry, concern and planning to the actual probability of harm. Yep. So let me ask this question. Speaking of imaginability, you said it's important to think about reason, and one of my favorite people, who likes to dip into the outskirts of reason through fascinating exploration of his imagination, is Joe Rogan. Oh yes. Someone who used to believe a lot of conspiracies, and through reason has stripped away a lot of those beliefs. So it's fascinating actually to watch him, through rationality, kind of throw away the ideas of Bigfoot and 9/11 conspiracies. I'm not sure exactly. Chemtrails. I don't know what he believes in. Yes. Okay. But he no longer believes in them. No, that's right. He's become a real force for good. Yep. So you were on the Joe Rogan podcast in February and had a fascinating conversation, but as far as I remember, you didn't talk much about artificial intelligence. I will be on his podcast in a couple of weeks. Joe is very much concerned about the existential threat of AI. I'm not sure if you're aware of that; this is why I was hoping that you would get into that topic. And in this way, he represents quite a lot of people who look at the topic of AI from a 10,000-foot level. So as an exercise in communication, you said it's important to be rational and reason about these things. Let me ask, if you were to coach me as an AI researcher about how to speak to Joe and the general public about AI, what would you advise?
Well, the short answer would be to read the sections that I wrote in Enlightenment Now about AI, but a longer answer would be, I think, to emphasize, and I think you're very well positioned as an engineer to remind people about, the culture of engineering, that it really is safety oriented. In another discussion in Enlightenment Now, I plot rates of accidental death from various causes: plane crashes, car crashes, occupational accidents, even death by lightning strikes. And they all plummet, because the culture of engineering is: how do you squeeze out the lethal risks? Death by fire, death by drowning, death by asphyxiation, all of them drastically declined because of advances in engineering that, I've got to say, I did not appreciate until I saw those graphs. And it is exactly because of people like you, who stay up at night thinking, oh my God, is what I'm inventing likely to hurt people, and who deploy ingenuity to prevent that from happening. Now, I'm not an engineer, although I spent 22 years at MIT, so I know something about the culture of engineering. My understanding is that this is the way you think if you're an engineer. And it's essential that that culture not be suddenly switched off when it comes to artificial intelligence. So, I mean, that could be a problem, but is there any reason to think it would be switched off? I don't think so. And for one, there are not enough engineers speaking up for this, for the excitement, for the positive view of human nature, for what we're trying to create, which is positivity. Like everything we try to invent is trying to do good for the world. But let me ask you about the psychology of negativity. It seems, just objectively, not considering the topic, that being negative about the future makes you sound smarter than being positive about the future, regardless of topic. Am I correct in this observation? And if so, why do you think that is?
Yeah, I think there is that phenomenon that, as Tom Lehrer, the satirist, said, always predict the worst and you'll be hailed as a prophet. It may be part of our overall negativity bias. We are, as a species, more attuned to the negative than the positive. We dread losses more than we enjoy gains. And that might open up a space for prophets to remind us of harms and risks and losses that we may have overlooked. So I think there is that asymmetry. So you've written some of my favorite books, all over the place. Starting from Enlightenment Now to The Better Angels of Our Nature, The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates, a big fan too, said of your most recent book that it's "my new favorite book of all time." So for you as an author, what was a book early on in your life that had a profound impact on the way you saw the world? Certainly this book, Enlightenment Now, was influenced by David Deutsch's The Beginning of Infinity, a rather deep reflection on knowledge and the power of knowledge to improve the human condition, with bits of wisdom such as that problems are inevitable, but problems are solvable given the right knowledge, and that solutions create new problems that have to be solved in their turn. That's, I think, a kind of wisdom about the human condition that influenced the writing of this book. There are some books that are excellent but obscure, some of which I have on a page on my website. I read a book called A History of Force, self-published by a political scientist named James Payne, on the historical decline of violence, and that was one of the inspirations for The Better Angels of Our Nature. What about early on? If you look back, when you were maybe a teenager? I loved a book called One, Two, Three... Infinity.
When I was a young adult I read that book by George Gamow, the physicist, which had very accessible and humorous explanations of relativity, of number theory, of dimensionality and higher-dimensional spaces, in a way that I think is still delightful 70 years after it was published. I liked the Time Life Science series. These were books that would arrive every month that my mother subscribed to, each one on a different topic. One would be on electricity, one would be on forests, one would be on evolution, and then one was on the mind. I was just intrigued that there could be a science of mind, and that book I would cite as an influence as well. Then later on... That's when you fell in love with the idea of studying the mind? Was that the thing that grabbed you? It was one of the things, I would say. I read as a college student the book Reflections on Language by Noam Chomsky. He spent most of his career here at MIT. Richard Dawkins's two books, The Blind Watchmaker and The Selfish Gene, were enormously influential, mainly for the content but also for the writing style, the ability to explain abstract concepts in lively prose. Stephen Jay Gould's first collection, Ever Since Darwin, is also an excellent example of lively writing. George Miller, a psychologist that most psychologists are familiar with, came up with the idea that human memory has a capacity of seven plus or minus two chunks. That's probably his biggest claim to fame. But he wrote a couple of books on language and communication that I read as an undergraduate. Again, beautifully written and intellectually deep. Wonderful. Stephen, thank you so much for taking the time today. My pleasure. Thanks a lot, Lex. | Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3 |
" What difference between biological neural networks and artificial neural networks is most mysterio(...TRUNCATED) | Yoshua Bengio: Deep Learning | Lex Fridman Podcast #4 |
" The following is a conversation with Vladimir Vapnik. He's the co inventor of support vector machi(...TRUNCATED) | Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5 |
" The following is a conversation with Guido van Rossum, creator of Python, one of the most popular (...TRUNCATED) | Guido van Rossum: Python | Lex Fridman Podcast #6 |
" The following is a conversation with Jeff Atwood. He is the cofounder of Stack Overflow and Stack (...TRUNCATED) | Jeff Atwood: Stack Overflow and Coding Horror | Lex Fridman Podcast #7 |
" The following is a conversation with Eric Schmidt. He was the CEO of Google for 10 years and a cha(...TRUNCATED) | Eric Schmidt: Google | Lex Fridman Podcast #8 |
" The following is a conversation with Stuart Russell. He's a professor of computer science at UC Be(...TRUNCATED) | Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 |
" The following is a conversation with Peter Abbeel. He's a professor at UC Berkeley and the directo(...TRUNCATED) | Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10 |
Dataset Card for Lex Fridman Podcasts Dataset
This dataset is sourced from Andrej Karpathy's Lexicap website, which contains English transcripts of Lex Fridman's wonderful podcast episodes. The transcripts were generated using OpenAI's large-sized Whisper model.