These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project

There are other clues to what Q* could be. The name may be an allusion to Q-learning, a form of reinforcement learning that involves an algorithm learning to solve a problem through positive or negative feedback, which has been used to create game-playing bots and to tune ChatGPT to be more helpful. Some have suggested that the name may also be related to the A* search algorithm, widely used to have a program find the optimal path to a goal.
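The Q-learning idea mentioned above is simple enough to sketch in a few lines: an agent builds a table of action values from nothing but positive and negative reward signals. The toy environment and all constants below are invented for illustration; this is a generic textbook example, not anything from OpenAI.

```python
import random

# Toy Q-learning: an agent on a 5-cell corridor learns, purely from
# reward feedback, to walk right toward a goal in the last cell.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01  # the feedback signal
    return nxt, reward, nxt == N_STATES - 1

rng = random.Random(0)
for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current Q-values, sometimes explore
        a = rng.choice(ACTIONS) if rng.random() < EPS \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # core Q-learning update: move Q(s, a) toward
        # reward + discounted best future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy steps right (+1) from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)
```

The same table-lookup structure underlies the game-playing bots the technique is known for; A*, by contrast, is a search algorithm that uses a heuristic to find an optimal path, which is partly why the combined name invites speculation.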

The Information throws another clue into the mix: “Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models,” its story says. “The research involved using computer-generated [data], rather than real-world data like text or images pulled from the internet, to train new models.” That appears to be a reference to the idea of training algorithms on so-called synthetic training data, which has emerged as a way to build more powerful AI models.

Subbarao Kambhampati, a professor at Arizona State University who is researching the reasoning limitations of LLMs, thinks that Q* may involve using huge amounts of synthetic data, combined with reinforcement learning, to train LLMs on specific tasks such as simple arithmetic. Kambhampati notes that there is no guarantee the approach will generalize into something that can figure out how to solve any possible math problem.
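The synthetic-data idea Kambhampati describes can be made concrete with a toy generator: a program, rather than the web, supplies unlimited correct question-and-answer pairs to fine-tune on. The field names and format below are invented for illustration, not a confirmed OpenAI pipeline.

```python
import operator
import random

# Hypothetical synthetic training data for arithmetic: every answer is
# computed by code, so the ground truth is guaranteed correct.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_arithmetic_examples(n, seed=0):
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        a, b = rng.randint(0, 999), rng.randint(0, 999)
        sym = rng.choice(sorted(OPS))
        examples.append({
            "prompt": f"What is {a} {sym} {b}?",
            "completion": str(OPS[sym](a, b)),  # computed, not scraped
        })
    return examples

for ex in make_arithmetic_examples(3):
    print(ex["prompt"], "->", ex["completion"])
```

Because the generator is seeded, the dataset is reproducible, and it can be scaled to any size without touching internet data.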

For more speculation on what Q* might be, read this post by a machine-learning scientist who pulls together the context and clues in impressive and logical detail. The TLDR version is that Q* could be an effort to use reinforcement learning and a few other techniques to improve a large language model’s ability to solve tasks by reasoning through steps along the way. Although that might make ChatGPT better at math conundrums, it’s unclear whether it would automatically suggest AI systems could evade human control.
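The speculated recipe of rewarding sound intermediate reasoning, rather than just a final answer, can be illustrated with a toy best-of-n search over step-by-step arithmetic. All function names and numbers below are hypothetical stand-ins, not anything attributed to OpenAI.

```python
import random

def propose_chain(a, b, c, rng):
    """Stand-in 'model' that solves a + b * c in two steps, sometimes slipping."""
    step1 = b * c + (0 if rng.random() < 0.7 else rng.randint(1, 5))  # occasional error
    step2 = a + step1
    return [step1, step2]

def steps_verify(a, b, c, chain):
    """Step-level checker: each intermediate result must be right,
    not just the final answer."""
    return chain[0] == b * c and chain[1] == a + chain[0]

def best_of_n(a, b, c, n=16, seed=0):
    """Sample candidate solutions; keep the first whose steps all verify."""
    rng = random.Random(seed)
    for _ in range(n):
        chain = propose_chain(a, b, c, rng)
        if steps_verify(a, b, c, chain):
            return chain[-1]
    return None

print(best_of_n(2, 3, 4))
```

The step-level check is the interesting part: a chain with a lucky final answer but a wrong intermediate step earns nothing, which is roughly the intuition behind rewarding reasoning along the way.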

That OpenAI would try to use reinforcement learning to improve LLMs seems plausible because many of the company’s early projects, like video-game-playing bots, were centered on the technique. Reinforcement learning was also central to the creation of ChatGPT, because it can be used to make LLMs produce more coherent answers by asking humans to provide feedback as they converse with a chatbot. When WIRED spoke with Demis Hassabis, the CEO of Google DeepMind, earlier this year, he hinted that the company was trying to combine ideas from reinforcement learning with advances seen in large language models.
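The human-feedback loop described above can be caricatured in a few lines: raters compare pairs of answers, the preferences become a crude scoring function, and the better-scored answer is served. Real RLHF fits a neural reward model and fine-tunes the LLM with reinforcement learning; everything here is an invented toy.

```python
from collections import defaultdict

# Hypothetical rater preferences: (preferred answer, rejected answer) pairs.
preferences = [
    ("concise, correct answer", "rambling answer"),
    ("concise, correct answer", "rude answer"),
    ("polite refusal", "rude answer"),
]

# Crude stand-in for a reward model: simple win/loss tallies.
score = defaultdict(float)
for winner, loser in preferences:
    score[winner] += 1.0
    score[loser] -= 1.0

def pick(candidates):
    """At inference time, prefer the candidate the feedback scored highest."""
    return max(candidates, key=lambda c: score[c])

print(pick(["rambling answer", "concise, correct answer", "rude answer"]))
```

Even this caricature shows the mechanism: human comparisons, not hand-written rules, determine which outputs the system favors.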

Rounding up the available clues about Q*, it hardly sounds like a reason to panic. But then, it all depends on your personal P(doom) value—the probability you ascribe to the possibility that AI destroys humankind. Long before ChatGPT, OpenAI’s scientists and leaders were initially so freaked out by the development of GPT-2, a 2019 text generator that now seems laughably puny, that they said it could not be released publicly. Now the company offers free access to much more powerful systems.

OpenAI refused to comment on Q*. Perhaps we will get more details when the company decides it’s time to share more results from its efforts to make ChatGPT not just good at talking but good at reasoning too.