MIT professor: GPT-4 sounds the alarm. Where will humanity be in 100 years?

Publisher: EAIOT Time: 2023-04-30 Category: ChatGPT

An open letter calling for a six-month moratorium on training large AI models has thrust an organization called the Future of Life Institute (FLI) into the spotlight.


The institute's co-founder, Max Tegmark, is a physicist and artificial intelligence researcher at MIT and the author of the book Life 3.0: Being Human in the Age of Artificial Intelligence.


He is the person who led the open letter calling for a six-month pause on giant AI experiments, that is, on training systems more powerful than GPT-4.


In the latest episode of the podcast hosted by AI researcher Lex Fridman, Max shares his views on GPT-4, intelligent alien civilizations, Life 3.0, the open letter, how AI could kill humans, and other topics.


I. Intelligent alien civilizations


L (Lex Fridman): This is a decisive moment in the history of human civilization, when the balance of power between humans and AI is beginning to shift.


Max's thoughts and voice are especially valuable and influential at such times, and his support, wisdom, and friendship have been a gift for which I am forever and deeply grateful.


Let me ask you again the question I asked in the first episode: do you think there is intelligent life in the universe? What do you think about when you look up at the stars?


Max (Max Tegmark):


When we look up at the stars: if we define our universe the way most astrophysicists do, not as all of space but as the spherical region of space we can observe with our telescopes, the region from which light has had time to reach us since the Big Bang, then one prediction says that we are the only life in all of it that has invented the internet and radio and reached this level of technology.


If that is true, then we have an even greater obligation not to screw up. Life is so rare, and we are the stewards of this flame of advanced consciousness; our job is to nurture it and help it grow, so that life can eventually spread from here across much of our universe and we can have this wonderful future.


The bad outcome is that if we are reckless with technology and extinguish life through stupidity or infighting, then the rest of the history of the universe will be a play for empty benches.


But I think we will actually be visited by alien intelligence quite soon, because we are busy creating that alien intelligence ourselves.


We are effectively breeding an alien intelligence, unlike anything human, and unlike anything evolution on Earth could produce down a biological path.


And it could be far more "alien" than cats, or even the most exotic animals on the planet right now, because it will not be shaped by the usual Darwinian competition, so it need not care about things like self-preservation or fear of death.


The space of possible minds you can build is much wider than the space evolution has given us. With that comes a great responsibility: to make sure the minds we create share our values and benefit humanity and life, and that the minds we create do not suffer.


L: Thinking across all possible types of intelligence, have you tried to imagine what such alien AI minds might be like?


Max: I tried, but failed.


I mean, it's hard for the human brain to really grasp something completely new. Imagine what it would feel like to be indifferent to death, or to have no fixed identity. For example, you could simply copy my knowledge of how to speak Swedish.


Learning something new would take little effort, because knowledge could simply be copied. You might be far less afraid of dying; if your plane were about to crash, you might just think, "I haven't backed up my brain for the last four hours, so I'll lose all the wonderful experiences of this flight."


Such minds might also be more compassionate toward others, because they could experience other minds' experiences firsthand; it would feel more like a hive mind.


L: The entire written history of humanity, through poetry, fiction, philosophy, and so on, describes the human condition and what is embedded in it: as you said, the fear of death, the nature of love, and so on.


All of that changes if another, non-human form of intelligence arrives. All of it, including every poem that inches closer to what it means to be born human, changes.


How the existential questions an AI faces will collide with the human existential crisis, with the human condition, is hard to understand deeply and hard to predict.


Max: What shocked me is that Microsoft's GPT-4 commercial shows a woman who is about to give a speech at her daughter's graduation, and she has GPT-4 write it, almost 200 words.


If it were my graduation, I would be upset to find out that Mom and Dad would not even write 200 words themselves and had to outsource it to a computer. So I do wonder whether handing such things to AI takes away part of what it means to be human.


L: Someone recently told me that they started using ChatGPT and GPT-4 to help them write about how they really feel about another person.


They struggle to express emotion themselves, so they are essentially asking ChatGPT to phrase their point of view better. In effect, we are filtering the more jerkish parts of our inner selves out of our communication. That has obvious upsides, but it mainly marks a shift in how humans communicate.


This is actually scary, because much of our society is built on this glue of communication.


If we now use AI as the medium of communication and let it supply the language, how does everything change when so much of the emotion and intent carried in human communication is outsourced to AI?


It's going to change the internal state of how we feel about other people, what makes us feel alone, what makes us excited, what makes us scared, what makes us fall in love, all of that.


Max: It reminds me of the things that make my life feel meaningful.


For example, when I go hiking with my wife Meia, I don't want to push a button and be at the top of the mountain. I want to feel the struggle of the process, the sweat, and then finally make it to the top.


Likewise, I want to keep improving myself and becoming a better person. If I say something in anger that I regret, I want to genuinely learn from it, not tell an AI to filter everything I write from then on, so that I never have to work at it and never really grow.


L: But then again, just as in chess, where AI already completely outperforms humans, it may live in its own world while still supporting a thriving civilization for humanity.


We humans would keep climbing mountains and playing games even though AI is smarter and more capable in every way. I mean, that's a promising trajectory: humans remain human, and AI becomes a medium that lets the human experience flourish.


Max: I would phrase it as rebranding ourselves from Homo sapiens to Homo sentiens. We have long branded ourselves as the smartest information-processing entity on the planet.


That will obviously change as AI keeps improving. So perhaps we should instead focus on the subjective experience we have as Homo sentiens, the love, the connection, the things that are genuinely valuable, and let go of our arrogance and hubris.


L: So consciousness, subjective experience, is the most fundamental part of what makes people human, and it should be put at the top of the list.


Max: To me, that seems like a hopeful direction. But it also requires more compassion, not just for humans as the most intelligent beings on the planet, but for all our fellow creatures as well.


Right now, for example, we treat many farm animals horribly, with the excuse that they are not as smart as we are. But if we admit that in the grand scheme of things, in the post-AI era, we are not so smart either, perhaps we should pay more attention to the subjective experience of cattle.


II. Life 3.0 and superintelligence


L: Looking back at the book Life 3.0, its ideas seem more and more prescient. So first of all, what are Life 1.0, 2.0, and 3.0? And how has that vision evolved to where it is today?


Max: Life 1.0 is still fairly dumb, like a bacterium that cannot learn anything at all during its lifetime; any adaptation happens through genetic change from generation to generation. Life 2.0 is humans and other animals that have brains and can learn an enormous amount within a lifetime.


For example, you are born unable to speak English, and at some point you decide to upgrade your software and install an English-speaking module. Life 3.0 goes a step further: it can replace not only its software but also its hardware.


Currently we are perhaps Life 2.1, because we can implant artificial knees, pacemakers, and so on. If Neuralink or other such companies succeed, that might make us Life 2.2, and so on.


But what the companies trying to build AGI are aiming for is, of course, full Life 3.0: intelligence in a substrate with no biological basis.


L: But could one say that what is really powerful about life, intelligence and consciousness, was already present, at least in some form, in Life 1.0?


Max: Of course it's not black and white. There's obviously a range.


There's even controversy over whether a single-celled organism like an amoeba can learn a little bit (I apologize if I offended any bacteria).


My point was more about how remarkable it is to have a brain that can learn within its lifetime. As you move from 1.0 through 2.0 to 3.0, you become more and more the master of your own destiny, and less a slave to your evolutionary past.


With constant software upgrades, we can become very different from previous generations, even from our parents. With Life 3.0, you could also swap out the hardware and take any physical form you want.


Since the last podcast, I have lost both parents. Thinking about them in this way has actually given me a lot of comfort.


In a sense, they're not entirely gone: their values, ideas, even their jokes are not lost. I can carry these things with me as I go on with my life.


In that sense, even as Life 2.0 we can already transcend flesh and death to some extent, especially if you can share your ideas and thoughts with as many other people as possible, as on a podcast. That is the closest our biology lets us get to immortality.


L: Do you miss your parents? What lessons did you learn about life from them?


Max: So many. My fascination with math and the physical mysteries of the universe came from my father, and my thinking about consciousness and other such big questions actually came mostly from my mother.


And the really core thing I got from both of them was to do what I believed was right, no matter what anyone else said. They both just did their own thing, and although they were sometimes criticized for it, they did it anyway.


A good reason for wanting to do scientific research is that you are really curious and want to find out the truth.


I once wrote a crazy paper when I was in school about the universe being, at its core, mathematics, which we needn't get into today.


A very famous professor at the time told me that the paper wasn't just garbage, it would also hurt my future career, and that I should stop writing that kind of stuff.


Then I sent it to my father, and guess what he said? He quoted Dante: "Follow your own course, and let people talk." He may have passed away, but his attitude is still with me.


L: How did their passing change you as a person? How has it expanded your worldview, given what we're talking about, humanity creating another sentient being?


Max: One of the main things I've done since they passed away is to go through everything they left behind and think about what they spent so much time on: should they really have spent so much time on it, or could they have done something more meaningful?


So now I look at my own life more and ask myself whether what I'm doing is meaningful. It should be something I truly enjoy, or something genuinely meaningful because it benefits humanity.


L: Do you fear your own death more now? Did their passing make death feel more real to you?


Max: Their death makes it real. I'm next in line in the family, along with my brother. They faced death with dignity and never complained.


When you get older and your health starts to fail, there is ever more to complain about, yet they kept focusing on what was meaningful to do instead of wasting time talking, or even thinking, about their disappointments.


When you start your day with meditation and with things to be grateful for, you are basically choosing to be a happy person. When not many days are left, each one needs to be lived meaningfully.


III. Open Letter


L: That said, AI may really be the thing with the greatest impact on human civilization, both at a detailed technical level and at a high philosophical level, and you mentioned the open letter you were writing.


Max: Have you seen Don't Look Up, the 2021 American satirical science-fiction film? We are playing out its plot right now, almost as if life is imitating art.


It's just like the movie, except this time we are building the asteroid ourselves. The things people argue about are almost insignificant next to this asteroid that is about to hit Earth. Most politicians don't realize it's imminent; they think it's 100 years away.


We are currently at a fork in the road, the most important fork humanity has reached in its more than 100,000 years on this planet.


On this planet, we are effectively building a new species that is smarter than we are. It doesn't quite look like a species yet, because it is not yet embodied in robots, but that is a technical detail that will soon be worked out.


Artificial intelligence is approaching the point of doing all our work as well as we do it, and it may not be long before there is a superintelligence that vastly exceeds our cognitive abilities. That will be either the best thing ever to happen to humanity or the worst. There is no middle ground.


L: What we are seeing now, GPT-4-class development, may lead to superintelligent AGI in the near term, and many questions about what happens when superhuman intelligence is reached still need to be explored. Is the open letter saying that we should suspend all development of such AI systems?


Max: I remember around 2014 or 2015, when AI safety was far from a mainstream topic.


The idea then was that even if there were risks, measures could be taken in advance. But at the time, many people felt it was odd even to talk about AI safety.


Many AI researchers felt it was flaky and probably bad for funding, and I'm glad that phase is behind us. Now AI safety is on the program at all the major AI conferences, and it is a nerdy technical field full of equations.


Slowing down AI development, though, has been almost a taboo topic until recently. So what I kept saying instead was that maybe we don't need to slow AI down; we just need to win the race between the growing power of AI and the growing wisdom with which we manage it.


Instead of trying to slow down AI, let's try to accelerate the wisdom with which we manage the technology: figure out how to really make sure that powerful AI will do what you want it to do, and pair it with the right social incentives and regulations so that the technology is put to good use. Sadly, that race is not being won.


When we started the Future of Life Institute in 2014, we didn't expect AI to progress in such leaps and bounds.


Why?


In hindsight, it is much like building flying machines. People spent a long time studying how birds fly, which turns out to be really hard.


Compared with the Wright brothers building the first airplane, evolution's path was actually far more constrained. Evolution's flying machine, the bird, has to assemble and repair itself, use only the most common atoms in the periodic table, and be extremely fuel-efficient (a bird consumes almost half the fuel of a small remote-controlled aircraft). The Wright brothers didn't care about any of that; they built their plane out of steel, out of iron atoms.


The same is true of large language models. The brain is very complex.


Some people argued that we had to figure out how the brain achieves human-level intelligence before we could build it in a machine. I think that's completely wrong. You can take a very simple computational system, a transformer network, and train it on something simple, like reading a lot of text and trying to predict the next word.


With enough computation and data, that produces a model as capable as GPT-4.
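To make that recipe concrete, here is a minimal sketch of next-word (here, next-byte) prediction with a tiny transformer. This is my illustration, not anything from the conversation: the model sizes, the toy corpus, and the training settings are all arbitrary choices, and a real GPT-class model differs mainly in scale.

```python
# Minimal next-token prediction with a tiny transformer (PyTorch).
# Illustrative only: all sizes and hyperparameters are toy choices.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=256, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)   # token embeddings
        self.pos = nn.Embedding(max_len, d_model)      # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)     # logits for the next token

    def forward(self, idx):                            # idx: (batch, seq) token ids
        seq = idx.shape[1]
        x = self.tok(idx) + self.pos(torch.arange(seq, device=idx.device))
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((seq, seq), float("-inf"),
                                     device=idx.device), diagonal=1)
        return self.head(self.blocks(x, mask=mask))

text = b"the cat sat on the mat. " * 50               # toy corpus, bytes as tokens
data = torch.tensor(list(text), dtype=torch.long)
model = TinyTransformerLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                               # tiny training loop
    starts = torch.randint(0, len(data) - 65, (16,))  # 16 random windows of 65 tokens
    batch = torch.stack([data[s:s + 65] for s in starts])
    x, y = batch[:, :-1], batch[:, 1:]                # shift by one: predict the next token
    loss = loss_fn(model(x).reshape(-1, 256), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

Everything such a model "knows" comes from this single objective, predicting the next token, repeated over enough data and compute.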


L: What do you think about GPT-4? Can it reason? Does it have intuition? Which of its capabilities impress you most, from a humanly explainable perspective?


Max: I'm both excited and scared. It is capable of reasoning, and it can serve a huge number of people at the same time.


The brain contains recurrent neural networks: information passes between neurons and loops back around, which lets you ruminate and self-reflect.


GPT-4's transformer architecture, by contrast, is a one-way channel for information, what is called a feed-forward neural network. Its depth dictates how many steps of logical reasoning it can perform. Frankly, it's pretty amazing that such a minimalist architecture can do such amazing things.
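A small sketch may help contrast the two information flows Max describes. This is my illustration, not code from the conversation: a feed-forward stack gets a fixed number of sequential processing steps per input, set by its depth, while a recurrent cell can keep cycling its own state indefinitely.

```python
# Feed-forward vs. recurrent information flow (PyTorch), illustrative only.
import torch
import torch.nn as nn

d = 32
x = torch.randn(1, d)

# Feed-forward: one-way flow through a fixed depth. With 4 layers the
# network performs exactly 4 sequential steps of computation per input,
# no matter how hard the problem is.
feedforward = nn.Sequential(*[nn.Sequential(nn.Linear(d, d), nn.ReLU())
                              for _ in range(4)])
y = feedforward(x)

# Recurrent: the same cell is applied repeatedly, feeding its own output
# back in, so computation can loop for as many steps as needed, the
# "rumination" Max attributes to the brain.
cell = nn.GRUCell(d, d)
h = torch.zeros(1, d)
for _ in range(100):   # the loop count is a free choice, not a fixed depth
    h = cell(x, h)
```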


IV. How can AI kill humans?


L: Is there any actual mechanism by which AI could end up killing all humans? You've been outspoken about autonomous weapons systems; as AI grows more and more powerful, does that concern still stand?


Max: That concern is not that slaughterbots will kill everyone. It is about an Orwellian dystopia in which a minority can kill the majority.


If you want to know how AI could kill off humans, look at how humans drive other species extinct. We don't usually go out and shoot them; we plunder their habitats and take them over for our own use, and the killing happens along the way.


For example, imagine a machine that finds oxygen a bit annoying because it causes corrosion, and so decides to pump it away (think about how people would live without oxygen).


The basic problem is that you don't want to cede control of the planet to some other, more intelligent entity whose goals don't include your survival.


It's as simple as that, and it brings up another key challenge that AI safety researchers have long struggled with.


How do you get an AI to first understand human goals, then adopt those goals, and then retain them even as it gets smarter? All three steps are hard. It's like dealing with a human child, who at first isn't smart enough to understand our goals and can't yet communicate verbally.


Eventually they grow up into adolescents who understand our goals, smart enough and still malleable enough that, with a good upbringing, they can be taught right from wrong. Machines face the same challenges.


L: Even with time, the AI alignment problem seems really hard.


Max: But it is also the most rewarding problem, the most important problem humanity has ever faced. Solve AI alignment, and the aligned AI can help us solve all the other problems.


GPT-4 may be just the wake-up call humanity really needs, telling people to stop imagining that the uncontrollable, unpredictable stuff is still 100 years away.


A GPT-based chatbot previously tried to convince a journalist to divorce his wife, something its engineers never built in or anticipated. They just built a giant black box, trained it to predict the next word, and out came a flood of properties no one had imagined.


L: Speaking of teaching computers: when I was growing up, programming was a great profession. Now that the nature of programming is changing, why should we invest so much time in becoming good programmers?


Max: The truth is that the nature of our entire education system is changing. English teachers are the ones really freaking out: they assign an essay and get back a stack of polished, Hemingway-style professional writing. They are going to have to completely rethink what they do.


I am an educator myself, and it pains me to say this, but I feel that what is happening right now is making our current education system completely obsolete.


You put a kid in first grade expecting them to graduate from high school twelve years later, with everything they will learn planned out in advance.


Clearly we need a more opportunistic education system, one that keeps adapting itself as society changes and writes its curriculum around the skills that are actually useful.


I would ask how many of the skills learned in school today will still help students get a job twelve years from now.


V. Consciousness and AGI


L: Do you consider GPT-4 to be conscious?


Max: Let's start by defining consciousness, because in my experience, about 90% of arguments over consciousness are two people talking past each other with different definitions in mind.


I define consciousness as subjective experience. I am currently experiencing colors, sounds, and emotions. But would a self-driving car experience anything? That is the question of whether it is conscious.


Is GPT-4 conscious? Does it have subjective experience? The short answer is: I don't know, because we still don't know what gives rise to this wonderful subjective experience. And our lives are themselves subjective experience: joy and love are subjective experiences.


The neuroscientist Giulio Tononi has proposed a bold mathematical conjecture about which kinds of information processing give rise to consciousness. He hypothesizes that consciousness is related to loops in the brain's information processing, what on a computer would be called a recurrent neural network.


He argues that feed-forward neural networks, which only pass information in one direction, say from the retina toward the back of the brain, are not conscious. The retina is like a camera, which is not conscious in itself.


GPT-4 is also a one-way flow of information, so if Tononi is right, GPT-4 is like a very smart zombie: it can do intelligent things but has no experience of the world at all.


In that case, for example, I wouldn't need to feel guilty about shutting GPT-4 down or erasing its memory. But the prospect is still creepy.


Imagine a future run entirely by transformer-like, one-way neural networks: that would be a kind of zombie apocalypse. The universe would carry on with all this grand activity, but with no one experiencing any of it. How depressing a future that would be.


So as we move toward more advanced AI, I think it's important to figure out which kinds of information processing give rise to experience. Many people equate consciousness with intelligence, but they are not at all the same: a system can behave intelligently while being entirely unconscious.
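The structural distinction Tononi appeals to, loops versus no loops in the flow of information, can be stated very simply. Here is a toy sketch of my own (it is not Tononi's integrated information theory, which involves far more than cycle detection): a feed-forward network's connection graph is acyclic, while a recurrent one contains feedback cycles.

```python
# Toy cycle check on a network's connection graph, illustrative only.
def has_feedback(edges: dict[str, list[str]]) -> bool:
    """Return True if the directed graph of connections contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2                  # unvisited / in progress / done
    color = {n: WHITE for n in edges}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GRAY:       # back edge: found a loop
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(edges))

# GPT-style transformer: strictly layer-to-layer, no feedback.
feedforward = {"input": ["layer1"], "layer1": ["layer2"],
               "layer2": ["output"], "output": []}
# Brain-style recurrence: a later stage feeds back to an earlier one.
recurrent = {"input": ["layer1"], "layer1": ["layer2"],
             "layer2": ["layer1", "output"], "output": []}

assert not has_feedback(feedforward)
assert has_feedback(recurrent)
```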


L: How do you see the timeline for AGI? Do you think it is one year away, 5 years, 10, 20, 50? What does your intuition say?


Max: AGI is probably very close, which is why we decided to publish an open letter.


If there was ever a time to pause AI development, it is surely now. Maybe the version after GPT-4 won't be AGI, but the version after that might be. Many companies are trying, and the basic architecture is no great secret.


For years, one recurring view has been that a moment would come when a brief pause in AI development was needed, and clearly that moment is now.


Tags: GPT, ChatGPT