Leading U.S. scientist Jaron Lanier calls for an end to the deification of AI and for data dignity

Published By: EAIOT | Apr 23, 2023 | Categories: AI

In recent years, Jaron Lanier, an American computer scientist, visual artist, computer philosophy writer and futurist, and Glen Weyl, an economist at Microsoft Research New England, have developed the concept of "data dignity," which emphasizes the right of individuals to control and manage their own data to ensure that it is secure, private, and protected from misuse or unauthorized access.




In an April 20 article in The New Yorker titled "There Is No AI," Lanier argues for an end to the deification of AI, describing it instead as an innovative form of social collaboration. He pushes back against the recent open letter calling for a pause on the training of more advanced AI, and renews the notion of "data dignity": end the AI black box, record where the bits come from, so that "people can get paid for what they create, even if it's filtered and recombined through big models," and so that "when a big model provides valuable output, the data dignity approach will track the most unique and influential contributors."


According to Lanier, each successful introduction of a new AI or robotics application could mark the start of a new kind of creative work. Big or small, these new kinds of work can help ease the transition to an economy that integrates big models.


Jaron Lanier is considered a pioneer in the field of virtual reality. Prospect magazine named him one of the world's top 50 thinkers in 2014, and in 2018 Wired named him one of the 25 most influential people of the last 25 years of technology history. The following is a translation of the New Yorker article above, slightly abridged for ease of reading and understanding.


Jaron Lanier left Atari in 1985 to found VPL Research, the first company to sell VR goggles and wired gloves. He joined Microsoft in 2006, and since 2009 he has worked at Microsoft Research as an interdisciplinary scientist.

As a computer scientist, I don't like the term "artificial intelligence." In fact, I think it's misleading - maybe even a little dangerous. Everyone is already using the term, and it may seem a little late to argue about it. But we are at the beginning of a new technological era - and misunderstandings can easily lead to misinformation.


The term "artificial intelligence" has a long history - it was coined in the early days of computers in the 1950s. More recently, computer scientists have grown up with characters like the Terminator and Matrix movies, and Commander Data in Star Trek: The Next Generation. These cultural touchstones have become an almost religious myth in technology culture. It's natural for computer scientists to aspire to create artificial intelligence and fulfill a long-held dream.


But alarmingly, many who pursue the dream of AI also fear that it could mean the end of humanity. It is widely believed, even by scientists at the center of today's work, that what AI researchers are doing could lead to the destruction of our species, or at least cause great harm to humanity, and soon. In recent polls, half of AI scientists agreed that there is at least a 10 percent chance that humanity will be destroyed by AI. Even my peer Sam Altman, who runs OpenAI, has made similar comments. Walk into any Silicon Valley coffee shop and you can hear the same argument: one person says the new code is just code and that everything is in human hands, while another argues that anyone who holds that view simply doesn't grasp the profundity of the new technology. These arguments aren't entirely rational: when I ask my most frightened scientist friends to spell out how an AI apocalypse might actually happen, they say things like, "Accelerating progress will fly past us, and we won't be able to conceive of what's happening."


I disagree with this way of talking. Many of my friends and colleagues are deeply impressed by their experience with the latest big models, like GPT-4, and are all but keeping vigil, waiting for a deeper intelligence to emerge. My position is not that they are wrong, but that we can't be sure; we retain the option of classifying the software differently.


The most pragmatic position is to see AI as a tool, not a creature. My attitude does not eliminate the possibility of danger: however we think about it, we can still design and operate new technologies badly, in ways that harm us or even lead to our extinction. Deifying the technology makes it more likely that we will fail to operate it well; that kind of thinking limits our imagination and ties it to yesterday's dreams. We can work better without the assumption that there is such a thing as artificial intelligence, and the sooner we understand this, the sooner we can begin to manage the new technology intelligently.


If new technology isn't really AI, then what is it? It seems to me that the most accurate way to understand what we are building today is to think of it as an innovative form of social collaboration.


A program like OpenAI's GPT-4, which can write sentences to order, is like a version of Wikipedia that includes much more data, all mashed together statistically. A program that creates images to order is like a version of online image search, but with a system for combining the images. In both cases, it is humans who wrote the text and supplied the images. The new programs mash up work done by human minds. What is innovative is that the mashup process has become guided and constrained, so that the results are usable and often compelling. This is a significant achievement and worth celebrating - but it can be thought of as illuminating a previously hidden coherence among human creations, rather than as the invention of a new mind.


As far as I'm concerned, this view is a compliment to technology. After all, what is civilization but social collaboration? Seeing AI as a way of collaborating, rather than as a technology for creating independent, intelligent beings, may make it less mysterious - less like HAL 9000 (the sentient computer in 2001: A Space Odyssey) or Commander Data. But that's a good thing, because mystery only makes mismanagement more likely.


It is easy to attribute intelligence to the new systems, which have a flexibility and unpredictability that we don't usually associate with computer technology. But this flexibility arises from simple mathematics. A large language model like GPT-4 contains a cumulative record of how particular words co-occur in the vast amount of text the program has processed. This gigantic tabulation causes the system to intrinsically approximate many grammatical patterns, along with aspects of what might be called authorial style. When you enter a query consisting of certain words in a certain order, your input is correlated with what is in the model; because correlating billions of entries is so complex, the results can come out a little differently each time.
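To make the idea of a "cumulative record" plus statistical correlation concrete, here is a deliberately tiny sketch in Python. It is not how GPT-4 actually works - real models learn billions of continuous parameters rather than literal co-occurrence counts - but it shows how even a toy table of which words follow which, sampled with a little randomness, yields fluent-looking and non-repeating continuations. The corpus and function names are invented purely for illustration.

```python
# Toy illustration only: a word-following table built from a tiny corpus,
# then sampled to continue a prompt. The point is that a statistical record
# of word overlaps, plus randomness at sampling time, produces varied output.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Cumulative record of which word follows which, and how often.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def continue_text(prompt_word, length=8):
    """Sample a continuation; results usually differ from run to run."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        words, counts = zip(*followers.items())
        word = random.choices(words, weights=counts, k=1)[0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))
print(continue_text("the"))  # usually a different continuation
```

Running the script twice will usually print different continuations; that built-in variability, at vastly greater scale and subtlety, is part of what the next paragraphs describe as feeling "lively."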


The non-repetitive nature of the process can make it feel very lively. And in a sense, it can make the new system more human-centric. When you synthesize a new image with an AI tool, you might get a bunch of similar options and then have to choose from them; if you're a student cheating with an LLM (Large Language Model), you might read the options generated by the model and choose one. A technique that generates non-repetitive content requires a bit of human choice.


Many of the uses of AI that I like rest on the advantages we gain when computers become less rigid. Digital things have a brittleness that forces people to conform to them rather than assess them first. The need to conform to digital designs has created an expectation of human subservience. One positive thing about artificial intelligence is that, if we use it well, it could mean the end of this ordeal. We can now imagine a website that reformulates itself for someone who is colorblind, or a website that tailors itself to a person's particular cognitive abilities and style. Humanists like me want people to have more control, rather than being overly influenced or guided by technology. Flexibility could give us back some agency.


Yet, despite these possible benefits, there is a very legitimate fear that the new technology will push us around in ways we don't like or don't understand. Recently, some friends of mine circulated a petition calling for a pause on the most ambitious AI development. The idea was that we would work on policy during the pause. The petition was signed by some in our circle but not others. I found the idea too vague - what level of progress would mean the moratorium could end? Every week I receive new, equally vague mission statements from organizations seeking to start the process of developing AI policy.


These efforts are well-intentioned, but they strike me as hopeless. Having worked on EU privacy policy for years, I've come to realize that we don't know what privacy is. It's a term we use every day, and it makes sense in context, but we can't pin it down well enough to generalize. The closest we come to a definition of privacy is probably "the right to be left alone," but in an age when we constantly depend on digital services, that seems quaint. In the context of artificial intelligence, "the right not to be manipulated by computers" certainly seems right, but it doesn't quite say everything we want it to.


The AI policy conversation is dominated by terms like "alignment" (does the AI "want" what humans want?), "safety" (can we foresee guardrails that will stop a bad AI?), and "fairness" (can we keep a program from treating some people unfairly?). The community has of course gained a lot by pursuing these ideas, but they haven't dispelled our fears.


Recently, I called my peers and asked whether there was anything they could all agree on. I found that there was a basis for agreement. We all seemed to agree that deepfakes - false but real-looking images, videos, and so on - should be labeled as such by those who create them. Communications coming from avatars, and automated interactions designed to manipulate a human being's thinking or actions, should also be labeled. People should understand what they are seeing and should have reasonable choices in return.


How can all this be done? I find near unanimity that the current black-box nature of AI tools must end. The systems must be made more transparent. We need to get better at saying what is going on inside them and why. That won't be easy. The problem is that the big-model AI systems we are talking about are not made of explicit ideas. There is no clear representation of what the system "wants," and no label for when it is doing a particular thing, such as manipulating a person. There is only a giant ocean of jelly - a vast mathematical mixing. A writers'-rights group has proposed that real human writers be paid in full when tools like GPT are used to write scripts - after all, the system is drawing on the scripts of real people. But when we use AI to produce film clips, or perhaps entire movies, there isn't necessarily a screenwriting phase. A movie might be produced that appears to have a script, a soundtrack, and so on, but it will have been calculated as a whole. Trying to open the black box by making the system spit out items it doesn't otherwise need, such as scripts, sketches, or statements of intent, would mean building another black box to explain the first - an infinite regress.


At the same time, the interior of a big model is not necessarily a trackless wilderness. At some point in the past, a real person created an illustration that was fed into the model as data, and, in combination with contributions from other people, it became a fresh image. Big-model AI is made of people, and the way to open the black box is to reveal them.


I have been involved in developing a concept commonly referred to as "data dignity." It emerged long before the rise of big-model "AI," as a response to an era in which people gave away their data for free in exchange for free services such as Internet search or social networking. That familiar arrangement turned out to have a dark side: because of "network effects," a few platforms took over, eliminating smaller players such as local newspapers. Worse, because the immediate online experience was free, the only business left was peddling influence. Users experience what looks like a collectivist paradise, but they are targeted by insidious, addictive algorithms that make people vain, irritable, and paranoid.


In a world with data dignity, a digital artifact would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating the things people want to do. Some are horrified by the idea of online capitalism, but this would be a more honest capitalism. The familiar "free" arrangement has already been a disaster.


One of the reasons the tech community fears that artificial intelligence could become an existential threat is that it could be used to toy with humans, just as the previous wave of digital technology did. Considering the power and potential impact of these new systems, the fear of possible extinction is not unreasonable. As this danger is widely recognized, the arrival of big model AI may be an opportunity to make changes for the betterment of the technology industry.


Implementing data dignity will require technical research and policy innovation. In this sense, as a scientist, this topic excites me. Opening the black box will only make the model more interesting. And it may help us learn more about language, which is a truly impressive human invention and one that we are still exploring after all these hundreds of thousands of years.


Can data dignity address the economic worries that are often expressed about AI? The main worry is that workers will be devalued or displaced. Publicly, technologists sometimes say that in the coming years people who work with AI will be more productive and will find new kinds of jobs in a more productive economy. (For example, one might become a prompt engineer for AI programs - someone who collaborates with or controls an AI.) Privately, however, the same people will often say, "No, AI will overtake this idea of collaboration," and that today's accountants, radiologists, truck drivers, writers, film directors, and musicians will never earn money in the same way again.


When a big model provides valuable output, a data-dignity approach would trace the most unique and influential contributors. For instance, if you ask a model to produce "an animated movie of my kids adventuring in an oil-painting world of talking cats," then certain key oil painters, cat portraitists, voice actors, and writers - or their estates - might be judged uniquely essential to the new creation. They would be acknowledged and motivated, and perhaps even paid.
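What would "tracing the most unique and influential contributors" even look like in practice? Nobody knows yet at the scale of a real model - attribution inside large neural networks is an open research problem - so the Python sketch below is purely hypothetical. It stands in for the idea by scoring a model output against a toy pool of contributor material using bag-of-words cosine similarity and splitting a notional royalty among the closest matches. The contributor names, texts, and royalty figure are all made up for illustration.

```python
# Hypothetical sketch of data-dignity accounting: score which contributors'
# material a given output most resembles, then apportion a royalty.
# Bag-of-words cosine similarity is a stand-in, not a proposed real method.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented contributor corpus: each entry stands in for a body of work.
contributions = {
    "oil_painter": "oil paint canvas brush texture color palette",
    "cat_portraitist": "talking cat whiskers fur portrait green eyes",
    "screenwriter": "children adventure dialogue scene story arc",
}

def attribute(output_text, pool, royalty, top_k=2):
    """Credit the top_k most similar contributors and split the royalty among them."""
    out_vec = Counter(output_text.lower().split())
    scores = {name: cosine(out_vec, Counter(text.split())) for name, text in pool.items()}
    top = sorted(scores, key=scores.get, reverse=True)[:top_k]
    total = sum(scores[n] for n in top) or 1.0
    return {n: round(royalty * scores[n] / total, 2) for n in top}

print(attribute("my children adventure with a talking cat in oil paint",
                contributions, royalty=10.0))
```

A real data-dignity system would need attribution signals drawn from the model itself, plus the intermediary organizations described below, but the accounting skeleton - score, rank, apportion - would look broadly similar.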


At first, data dignity might account for only a small number of special contributors who emerge in particular situations. Over time, though, more people could be included, as intermediary rights organizations - unions, guilds, professional groups, and so on - start to play a role. People in the data-dignity world sometimes call these groups mediators of individual data (MIDs) or data trusts. People need collective-bargaining power to have value in an online world - especially when they might otherwise get lost in a giant AI model. And when people share responsibility in a group, they self-police, reducing the need, or the temptation, for governments and companies to censor or control. Acknowledging the human essence of big models might lead to a blossoming of positive new social institutions.


Data dignity is not only for white-collar roles. Consider what might happen if AI-driven tree-trimming robots were introduced. Human tree trimmers might find themselves devalued or even out of work. But the robots could eventually enable a new kind of landscaping art. Some workers might invent creative methods, such as holographic patterns that look different from different angles, that get fed into the tree-trimming models. With data dignity, those models might create new sources of income, distributed through collective organizations. Over time, tree trimming would become more functional and more interesting, and there would be a community motivated to keep contributing value. Each successful introduction of a new AI or robotics application could mark the start of a new kind of creative work. Big or small, these can help ease the transition to an economy into which big models are integrated.


Many in Silicon Valley see universal basic income as a solution to the potential economic problems created by AI, but a universal basic income amounts to putting everyone on the dole in order to preserve the idea of black-box AI. I think that is a terrible idea, partly because bad actors will want to seize the centers of power in an all-welfare system. I doubt that data dignity could ever grow large enough to sustain an entire society, but then I doubt that any single social or economic principle ever will be sufficient. Whenever possible, the goal should be to create at least one new class of creators, not a new class of dependents.


A model is only as good as its inputs. Only through a system like data dignity can we expand the models into new domains. Right now, it is much easier to get a large language model to write an essay than to get a program to generate an interactive virtual world, because there are very few existing virtual worlds to learn from. Why not solve that problem by giving the people who develop more virtual worlds a chance to earn prestige and income?


Can data dignity help with any kind of human-extinction scenario? A big model could make us incompetent, or confuse us so thoroughly that society collectively goes off the rails; a powerful, malicious person could use AI to do great harm to all of us; and some people believe the model itself could "jailbreak," taking control of our machines or weapons and using them against us.


We can find precedents for some of these scenarios not only in science fiction but in more ordinary market and technology failures. One example is the Boeing 737 MAX disasters of 2018 and 2019. The aircraft had a flight-path correction feature that could fight the pilots in certain situations, and it led to two crashes with mass casualties. The problem was not the technology in isolation but the way it was integrated into the sales cycle, the training courses, the user interface, and the documentation. Pilots believed they were right to try to resist the system in certain cases, when in fact they were doing exactly the wrong thing, and they had no way of knowing it. Boeing failed to communicate clearly how the technology worked, and the resulting confusion led to disaster.


Any engineering design - cars, bridges, buildings - can cause harm to people, yet we have built a civilization on engineering. It is by raising and expanding human awareness, responsibility, and involvement that we can make automation safe; conversely, we can hardly be good engineers if we treat our inventions as mysterious objects. It is more actionable to think of AI as a form of social collaboration: it gives us access to the machine room, which is made up of people.


Let's consider an apocalyptic scenario in which AI pushes our society off the rails. One way this could happen is through deepfakes. Suppose an evil person, perhaps working for a hostile government during wartime, decides to incite mass panic by sending everyone convincing videos of their loved ones being tortured or abducted. (In many cases, the data needed to produce such videos is easy to obtain through social media or other channels.) Chaos would ensue, even if it quickly became clear that the videos were fake. How could we prevent such a scenario? The answer is obvious: make sure that digital information has context.


The network was not originally designed to record where bits come from, probably because that made it easier for the network to grow quickly. (Computers and bandwidth were limited in the early days.) Why didn't we start recording the origin (or near-origin) of bits once remembering them became more feasible? It has always seemed to me that we want the network to be more mysterious than it needs to be. Whatever the reason, the network was built to remember everything while forgetting where it all came from.
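As a thought experiment, recording "the origin of the bits" does not require anything exotic. The Python sketch below shows one minimal shape such a record could take: a creator, a timestamp, a hash binding the record to the exact bytes, and a list of later derivations. It illustrates the idea only, and is not any existing standard - real provenance efforts, such as the C2PA specification, define far richer, cryptographically signed manifests - and every name in the snippet is hypothetical.

```python
# Minimal illustrative provenance record: who made the content, when,
# a hash of the exact bytes, and an appendable history of derivations.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """Content hash that binds a provenance record to specific bytes."""
    return hashlib.sha256(data).hexdigest()

def new_record(creator: str, data: bytes) -> dict:
    return {
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "hash": fingerprint(data),
        "derivations": [],  # later edits and remixes get appended here
    }

def add_derivation(record: dict, actor: str, note: str, new_data: bytes) -> dict:
    record["derivations"].append({
        "actor": actor,
        "note": note,
        "hash": fingerprint(new_data),
    })
    return record

original = b"frame data of a home video"
record = new_record("alice@example.org", original)  # hypothetical creator
edited = original + b" plus an AI-generated overlay"
record = add_derivation(record, "video-model-v1", "synthetic overlay added", edited)
print(json.dumps(record, indent=2))
```

A fabricated video that arrived with no such record, or whose hashes failed to match the bytes, would at least announce itself as contextless.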


Today, most people take it for granted that the Web, and the Internet it is built on, is by nature anti-contextual and free of provenance. We assume that decontextualization is intrinsic to the very idea of a digital network. It is not. The original proposals for digital-network architecture, made by the legendary scientist Vannevar Bush in 1945 and by the computer scientist Ted Nelson in 1960, preserved provenance. Now artificial intelligence is revealing the true cost of ignoring that approach. Without provenance, we have no way of controlling our AIs and no way of making them economically fair. And that risks pushing our society over the edge.


If a chatbot seems manipulative, mean, bizarre, or deceptive, what kind of answer do we want when we ask why? Revealing the sources from which the bot learned its behavior would provide one kind of explanation: we might learn that it drew on a particular novel, say, or a soap opera. We could react to that output differently, and adjust the model's inputs to improve it. Why shouldn't this kind of explanation always be available? In some cases it may be inappropriate to reveal provenance, in order to give priority to privacy - but provenance will usually do more good for individuals and society than an exclusive commitment to privacy would.


The technical challenges of data dignity are real and must inspire serious scientific ambition. The policy challenges will also be substantial. But we need to change our mindset and embrace the hard work of transformation. By clinging to the ideas of the past - including a fascination with the independent possibilities of artificial intelligence - we risk using new technologies in ways that make the world worse. If social, economic, cultural, technological, or any other field of activity is to serve people, it can only be because we have decided that people enjoy a special status of being served.

