What does the leaked Google AI memo really imply?

Published By: EAIOT Time: May 17, 2023 07:57:46 Categories: AI



The Economist article argues that open-source AI is advancing rapidly, with both good and bad implications.


Technologists change the world by writing software, but they also like to make their mark with long memos, a few of the most famous of which have marked turning points in the computer age.


Consider Bill Gates' 1995 "Internet Tidal Wave" memo, which reoriented Microsoft toward the web, or Jeff Bezos' 2002 "API mandate" memo, which opened up Amazon's digital infrastructure and paved the way for modern cloud computing.


Now, the tech world is talking about another one, this time leaked from inside Google and titled "We have no moat". Its author is unknown, but it details the astonishing progress being made in open-source artificial intelligence (AI) and questions some of the accepted assumptions in this fast-growing industry.


AI burst into the public eye in late 2022 with the launch of ChatGPT, a chatbot that can converse like a human. It was built by OpenAI, a startup closely tied to Microsoft, and is powered by a "large language model" (LLM).


Once ChatGPT caught fire, Google and other tech companies rushed to release their own intelligent chatbots, each fearing it would be left behind. Such systems can generate text and carry on realistic-looking conversations, having been trained on trillions of words extracted from the internet.


It takes months and at least tens of millions of dollars to train an LLM. With money flowing like water, people are starting to worry that artificial intelligence will be monopolized by a few wealthy companies.


But Google's memo says that assumption is wrong, pointing out that researchers in the open-source community are now achieving results comparable to the largest proprietary models, using free online resources.


It turns out that LLMs can be "fine-tuned" using a technique called low-rank adaptation (LoRA), which allows an existing LLM to be optimized for a specific task far faster and more cheaply than training an LLM from scratch.


Low-rank adaptation (LoRA) is a machine-learning technique for adapting a pretrained model. Instead of updating all of a model's weights, it adds a small low-rank update to them and trains only that update. Because the low-rank structure is enough to capture the task-specific patterns in the data, the adapted model performs well while the vast majority of the original parameters stay frozen.
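To make the idea concrete, here is a minimal NumPy sketch of the LoRA parameterization (not the actual library code Google or Meta use, and with toy dimensions chosen for illustration): a frozen weight matrix `W` is augmented with the product of two small trainable factors, so the number of trainable parameters shrinks dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained weight matrix of a (tiny, hypothetical) layer: d_out x d_in.
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# LoRA: instead of updating all d_out * d_in entries of W, train two small
# low-rank factors A (d_out x r) and B (r x d_in). The adapted weight is
# W + A @ B, while W itself stays frozen.
A = np.zeros((d_out, rank))            # one factor starts at zero, so the
B = rng.standard_normal((rank, d_in))  # adapted model equals the original

def forward(x, A, B):
    # Frozen weights plus the low-rank update, applied in one pass.
    return W @ x + A @ (B @ x)

x = rng.standard_normal(d_in)
# With A = 0, the adapted model reproduces the pretrained one exactly.
assert np.allclose(forward(x, A, B), W @ x)

# Trainable parameters drop from d_out * d_in to rank * (d_out + d_in).
full, lora = d_out * d_in, rank * (d_out + d_in)
print(f"full fine-tune params: {full}, LoRA params: {lora}")
```

Even in this toy setting the trainable-parameter count falls from 4,096 to 512; at the scale of a real LLM the same ratio is what makes fine-tuning on a single machine plausible.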


Open-source AI exploded in March when LLaMA, a model created by Facebook's parent company Meta, was leaked online. While smaller than the largest LLMs (its smallest version has 7 billion parameters, compared with Google's PaLM at 540 billion), the model was quickly fine-tuned to produce results comparable to the original ChatGPT on certain tasks.


"As open source researchers collaborate with each other on LLaMA, a flood of innovation will follow," the Google memo's author writes.


"This could have a seismic impact on the future of the AI industry," the Google memo says, adding that "the threshold for training and experimentation has been reduced from the total output of a major research institution to one person, one night, and a powerful laptop. An LLM can now be fine-tuned in a few hours for $100."


With its fast-moving, collaborative and low-cost model, the memo says, "open source has some significant advantages that we cannot replicate." Hence the memo's headline: This could mean no "moat" for Google in the face of open source competitors.


If there is any consolation for Google, the memo suggests, it is that the same is true of OpenAI.


Not everyone agrees on this point. The Internet does run on open source software, but people are also using paid proprietary software, from Adobe Photoshop to Microsoft Windows.


In addition, benchmarking AI systems is notoriously difficult. But even if the memo is only partly right, the implication is striking: powerful LLMs can run on a laptop, and anyone who wants to can now fine-tune their own AI.


This has both positive and negative implications.


On the positive side, it will make monopoly control of AI by a few companies much less likely, make access to AI much cheaper, accelerate innovation across the field, make it easier for researchers to analyze the behavior of AI systems (they have limited access to proprietary models), and improve transparency and security.


However, easier access to AI also means that people with bad motives will be able to fine-tune systems for nefarious purposes, such as generating false information. This would make AI harder to regulate, as the genie is already out of the bottle.


We'll soon see whether Google, and leading companies like it, have really lost their moat in the AI space. But as with those earlier memos, this unknown author seems to have written about another turning point in the computer age.


Tags: Google AI
