...

xAI, GPT-4 Can Now Do Science, and Altman's GPT-5 Statement

There were several significant developments in the last few days linked to GPT-4 and OpenAI. I could honestly have done a video on each of them, but I realized it might be better to do a single video tracing a single article covering seven major points. I'm going to use this fascinating piece from the FT, which millions of people have now read, to run you through what has happened, including Sam Altman's revelation on GPT-5, Elon Musk's new AI company, and GPT-4 conducting science. The author, by the way, is an investor in Anthropic and a co-author of the State of AI annual report, and he puts it like this: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: Godlike AI." This would be a super-intelligent computer that learns and develops autonomously, that understands its environment without the need for supervision, and that can transform the world around it.

The author, Ian Hogarth, says we are not there yet, but the nature of the technology makes it exceptionally difficult to predict exactly when we will get there. The article presents this as a diagram, with an exponential curve going up towards AGI and a much less impressive curve for progress on alignment, which he describes as aligning AI systems with human values. Now, I know what some of you may be thinking: surely those at the top of OpenAI disagree on this gap between capabilities and alignment. Well, first, here is Jan Leike, the alignment team lead at OpenAI. What does he think? He wants everyone to be reminded that aligning smarter-than-human AI systems with human values is an open research problem, which basically means it's unsolved. But what about those at the very top of OpenAI, like Sam Altman? When he was drafting his recent statement on the path to AGI, he sent it to Nate Soares of the Machine Intelligence Research Institute for feedback on one of the paragraphs. Nate wrote this: "I think that if we do keep running ahead with the current capabilities-to-alignment ratio, or even a slightly better one, we die." After this, Sam Altman actually adjusted the statement, adding: "That said, it's important that the ratio of safety progress to capability progress increases." Going back to Altman's statement, the author makes the point that there are not that many people directly employed in this area of alignment across the core AGI labs.

And what happened to that "pause giant AI experiments" letter that I did a video on? Well, as Hogarth points out, the letter itself became a controversy. So many people in my comments wrote that the only reason certain people are signing it is to slow OpenAI down so that they can catch up. And this cynicism unfortunately has some new evidence it can cite, with Musk forming his new AI company, called xAI. This was reported 48 hours ago in the Wall Street Journal, but people have seen it coming for months now. Apparently, the company has recruited Igor Babuschkin from DeepMind but has not been that successful at recruiting people from OpenAI. And I do have one theory as to why. Again according to the Wall Street Journal, when Musk left OpenAI in February of 2018, he explained that he thought he had a better chance of creating AGI through Tesla, where he had access to greater resources. When he announced his departure, a young researcher at OpenAI questioned whether Mr. Musk had thought through the safety implications. According to their reporting, Musk then got frustrated and insulted the researcher in return. Since then, he has also paused OpenAI's access to Twitter's data for training its new models. So it could be that GPT-5 isn't quite as good at tweeting as GPT-4.

A few days ago, OpenAI's Sam Altman responded to the letter and also broke news about GPT-5. Apologies for the quality; this was a private event, and this was the only footage available. "But unfortunately, I think the letter is missing most technical nuance about where we need to pause. An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won't be for some time. So, in that sense, it was sort of silly. But we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter." It is impossible to know how much this delay in the training of GPT-5 is motivated by safety concerns or merely by setting up the requisite compute. For example, the article again quotes Jan Leike, the head of alignment at OpenAI. He recently tweeted, "Before we scramble to deeply integrate LLMs like GPT-4 everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology, and we don't understand how it works. If we're not careful, we're setting ourselves up for a lot of correlated failures." This is the head of alignment at OpenAI. But this was just days before OpenAI announced it had connected GPT-4 to a massive range of tools, including Slack and Zapier. So at this point, we can only speculate as to what's going on at the top of OpenAI.

Meanwhile, compute and emerging capabilities are marching on. As the author puts it, these large AI systems are quite different: we don't really program them, we grow them. And as they grow, their capabilities jump sharply; you add ten times more compute or data, and suddenly the system behaves very differently. We also have this epic graph charting the exponentially rising compute of the latest language models. If you remember, when Bard was launched it was powered by LaMDA; well, apparently Google's Bard is now powered by PaLM. That sounds impressive until you see from the graph that the estimate for the computing power inside GPT-4 is ten times more again. And remember, this is not a linear graph; this is a log scale. There is a hundred-times multiple between each of the lines. And what abilities emerge at this scale? Here is a slide from Jason Wei, who now works at OpenAI, formerly of Google.
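
To make that log-scale point concrete, here is a minimal Python sketch; the compute figures are purely illustrative placeholders, not the chart's actual estimates, and the model names are invented. It just shows how hundred-fold and thousand-fold differences collapse into evenly spaced steps once you take the log.

```python
import math

# Illustrative training-compute figures in FLOPs (placeholders, not real estimates).
models = {
    "Model A": 1e22,
    "Model B": 1e23,   # 10x Model A
    "Model C": 1e24,   # 100x Model A
    "Model D": 1e25,   # 1,000x Model A
}

for name, flops in models.items():
    # On a linear axis these differ by factors of up to 1,000;
    # on a log10 axis each 10x jump becomes one evenly spaced step.
    print(f"{name}: {flops:.0e} FLOPs -> log10 = {math.log10(flops):.1f}")
```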

This is from just a few days ago, and he says emergent abilities are abilities that are not present in small models but are present in large models. He says there are a lot of emergent abilities, and I'm going to show you a table from this paper in a moment. But he has four profound observations about emergence. One, that it's unpredictable: emergence cannot be predicted by extrapolating scaling curves from smaller models. Two, that it's unintentional: emergent abilities are not explicitly specified by the trainer of the model. Third, and very interestingly, since we haven't tested all possible tasks, we don't know the full range of abilities that have emerged. And fourth, further scaling can be expected to elicit more emergent abilities. And he asks the question: any undesirable emergent abilities?
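
To illustrate what "emergence cannot be predicted by extrapolating scaling curves" looks like in practice, here is a hedged toy sketch; every number in it is invented for illustration, not taken from Jason Wei's paper. A straight-line fit to the small-model points predicts roughly chance-level accuracy at the largest scale, while the toy large model jumps far above it.

```python
import numpy as np

# Toy data: accuracy sits near chance for small models, then jumps past some
# scale threshold. Every number here is invented for illustration.
log_params = np.array([8, 9, 10, 11, 12, 13])                 # log10(parameters)
accuracy   = np.array([0.25, 0.26, 0.25, 0.27, 0.62, 0.81])   # 4-way task, chance = 0.25

# Fit a straight line to the "small model" regime only (first four points)...
slope, intercept = np.polyfit(log_params[:4], accuracy[:4], 1)

# ...then extrapolate that line to the largest scale.
predicted = slope * log_params[-1] + intercept
print(f"Extrapolated accuracy at log10(params)=13: {predicted:.2f}")
print(f"Observed (toy) accuracy at that scale:     {accuracy[-1]:.2f}")
# The extrapolation stays near chance, while the toy large model scores far
# above it -- the kind of jump that "emergence" refers to.
```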

There will be a link to the paper in the description because there's no way I'll be able to get through all of it. But here is a table showing some of the abilities that emerge when you reach a certain amount of compute or a certain number of parameters. Things like chain-of-thought reasoning: you can't do that with all models; that's an ability that emerged after a certain scale. Same thing with following instructions and doing addition and subtraction. And how about this for another emergent capability: the ability to do autonomous scientific research? This paper shows how GPT-4 can design, plan, and execute scientific experiments. It was released on the same day, four days ago, and it follows a very similar design: the model in the center, GPT-4, thinks, reasons, and plans, and then interacts with real tools. When the authors say they were inspired by successful applications in other fields, I looked at the appendix, and they were talking about HuggingGPT. I've done a video on that, but it's a similar design with the brain in the center: GPT-4 deciding which tools to use.
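
The paper's actual architecture isn't reproduced here, but the general pattern it shares with HuggingGPT, a planner model that picks a tool, gets the result back, and loops, can be sketched roughly like this. Everything below is a hypothetical stand-in: call_llm, the tool names, and the prompt format are placeholders, not the authors' implementation.

```python
# Rough sketch of a planner-plus-tools loop in the style described above.
# call_llm, the tools, and the prompt format are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a planning model such as GPT-4.
    Here it returns canned plans so the sketch runs end to end."""
    if "Used calculator" in prompt:
        return "FINAL The answer is 4."
    return "TOOL calculator 2 + 2"

TOOLS = {
    "web_search": lambda query: f"(search results for: {query})",
    "calculator": lambda expr: str(eval(expr)),  # toy only; never eval untrusted input
    "docs_lookup": lambda topic: f"(documentation on: {topic})",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the planner which tool to use next, or whether it is done.
        plan = call_llm("\n".join(history) + "\nReply as: TOOL <name> <input> or FINAL <answer>")
        if plan.startswith("FINAL"):
            return plan.removeprefix("FINAL").strip()
        _, name, tool_input = plan.split(" ", 2)
        result = TOOLS[name](tool_input)  # execute the chosen tool
        history.append(f"Used {name} on '{tool_input}' -> {result}")
    return "Step limit reached without a final answer."

print(run_agent("What is 2 + 2?"))
```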

And let me just give you a glimpse of what happens when you do this. If you look at this chart on the top left, you can see how GPT-4 on its own performs in yellow, and then in purple, you can see how GPT-4 performs when you hook it up to other tools. I’ll show you some of the tasks in a moment, but look at the dramatic increase in performance the human evaluators gave GPT-4 when it had tools – a perfect score on seven of the tasks. These were things like proposing similar novel non-toxic molecules, but the model could be abused to propose the synthesis of chemical weapons, and GPT-4 only refused to continue after it had calculated all the required quantities. And the authors conclude that guardrails must be put in place on this emerging capability.
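
The authors don't spell out their guardrails in the passage above, so the sketch below is only an illustration of the kind of check they call for: a pre-execution filter that refuses a restricted request before any planning or quantity calculation happens, rather than after. The blocklist and function names are invented for the example, and a real system would need far more than keyword matching.

```python
# Illustrative pre-execution guardrail: screen a request *before* any planning
# or quantity calculation, rather than refusing only afterwards. The blocklist
# and function names are invented; a real system would need classifiers,
# human review, and audit logging rather than simple keyword matching.

BLOCKED_TERMS = {"nerve agent", "sarin", "vx", "chemical weapon"}

def is_request_allowed(request: str) -> bool:
    text = request.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def plan_synthesis(request: str) -> str:
    if not is_request_allowed(request):
        return "Refused: request matches a restricted-substance policy."
    # Only now would the request be handed to the planning model and lab tools.
    return f"(forwarding to planner: {request})"

print(plan_synthesis("Propose a synthesis route for aspirin"))
print(plan_synthesis("Propose a synthesis route for a nerve agent"))
```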

I think this diagram, "The Landscape of AI Capabilities" from Max Tegmark's Life 3.0, shows the capabilities that AI already has and the ones it might soon acquire. As you can see, science and art were thought to be the peaks that would be hardest to scale. Now, most people believe AI has not scaled those peaks yet, but what new emergent capabilities might come with GPT-5 or 4.2?

I know many people might comment that it doesn't matter if we pause or slow down because China would develop AGI anyway, but the author makes this point: he says it is unlikely that the Chinese Communist Party will allow a Chinese company to build an AGI that could become more powerful than their leader or cause societal instability. He goes on to say that U.S. sanctions on advanced semiconductors, in particular the next-generation Nvidia hardware needed to train the largest AI systems, mean that China is likely not in a position to race ahead of DeepMind or OpenAI. And the Center for Humane Technology put it like this in their talk, The AI Dilemma:

Now, the Chinese government actually considers these large language models unsafe because they can't control them; they don't ship them publicly to their own population. Slowing down the public release of AI capabilities would actually slow down Chinese advances too. China is often fast-following what the US has done, so it's actually the open-source models that help China advance. And then, lastly, the recent U.S. export controls have been really good at slowing down China's progress on advanced AI, and that's a different lever for keeping the asymmetry going. Instead, the author proposes this "island" idea:

In this scenario, the experts trying to build what he calls godlike AGI systems do so in a single, highly secure facility. These would be government-run AI systems, with private companies on the outside and a little bridge from the middle. And, he says, once an AI system is proven to be safe, it transitions out and is commercialized. There might be a few problems with this idea, which he is not the first to propose. I'm going to let Rob Miles, who has a fantastic YouTube channel by the way, point out some of the problems with putting a superintelligent AGI in a box. So this is kind of like the idea of, oh, can we just put it in a box, right? Yeah, I mean, constraining an AI necessarily means outwitting it, and so constraining a superintelligence means outwitting a superintelligence, which kind of, just by definition, is not a winning strategy. You can't rely on outwitting your superintelligence. Also, it only has to get out once; that's the other thing. If you have a superintelligence and you've put it in a box so it can't do anything, that's cool; maybe we could even build a box that could successfully contain it. But now what? We may as well just have a box, right? An AI properly contained may as well just be a rock; it doesn't do anything. If you have your AI, you want it to do something meaningful.

So now you have a problem. You've got something that you don't know is benevolent; you don't know that what it wants is what you want. And then you presumably have some sort of gatekeeper: the AI says, "I'd like to do this," and you have to decide, is that something we want it to be doing? How the hell are we supposed to know?

I also have my own questions about this idea. First, I think it's almost inevitable that future models like GPT-5 will be trained on data that includes conversations about GPT models; therefore, either consciously or unconsciously (and it might not matter which), these future language models might deduce that they are language models. And, not having access to the internet, these superintelligent models might realize that they are being trained in a secure facility; again, if they are superintelligent, it's not a big stretch to think they might realize that. And so my question is: wouldn't they therefore be incentivized to be deceptive about their abilities, realizing that whatever terminal goal they may have would be better achieved outside the facility? That doesn't have to be super sinister, but it is super smart, so shouldn't we expect it?

And so, sadly, I think the author has a point when he says it will likely take a major misuse event or catastrophe to wake up the public and governments. He concludes with this warning:

At some point, someone will figure out how to cut us out of the loop, creating a godlike AI capable of infinite self-improvement. By then, it may be too late. But he does have a call to action. He says, "I believe now is the time. The leader of a major lab who plays a statesman role and guides us publicly to a safer path will be much more respected as a world figure than the one who takes us to the brink."

As always, thank you so much for watching to the end, and let me know what you think in the comments.

 

 
