What is OpenAI Q*? The mysterious breakthrough that could ‘threaten humanity’

Amid the whirlwind of speculation surrounding the sudden dismissal and reinstatement of OpenAI CEO Sam Altman, one question has sat at the heart of the controversy: why was Altman fired by the board in the first place?

We may finally have part of the answer, and it has to do with the handling of a mysterious OpenAI project with the internal codename “Q*” – or Q Star. Information is limited, but here’s everything we know so far about the potentially game-changing development.

What is Project Q*?

Before we proceed, it should be noted that all details about Project Q* – including its existence – come from a single report. Reuters journalists said on November 22 that the information came from “two people familiar with the matter”. According to the article, Project Q* was a new model that excelled at math, something current LLMs (large language models) like ChatGPT struggle with. Reportedly it was still only at the level of solving grade-school math problems, but as a starting point, it looked promising.


Seems harmless enough, right? Well, not so fast. Q* was reportedly alarming enough that several staff researchers wrote a letter to the board warning about the project, claiming it could be a “threat to humanity”.

Beyond that, there is no further information about how large the Q* project is, what its objectives are, or how long it has been in development.

Is that really why Sam Altman was fired?

Sam Altman at the OpenAI developer conference.

From the beginning of the speculation about Sam Altman’s dismissal, one of the main suspects was his attitude toward commercialization. Altman was the one who pushed OpenAI away from its roots as a non-profit and toward commercial products. It started with the public launch of ChatGPT and the eventual rollout of ChatGPT Plus, both of which ushered in the current era of generative AI and led even companies like Google to go public with their technology.

There have always been ethical and security concerns about making this technology publicly available, despite how it has already changed the world. Major concerns about how quickly the technology is evolving have also been well documented, especially with the jump from GPT-3.5 to GPT-4. Some think the technology is advancing too quickly without adequate regulation or oversight, and according to the Reuters report, “commercializing advances before understanding the consequences” was cited as one of the reasons for Altman’s initial firing.

Although we do not know whether Altman was specifically mentioned in the letter regarding Q*, it is also being cited as one of the reasons for the board’s decision to fire him – a decision that has since been reversed.

It’s notable that just days before being fired, Altman mentioned at an AI summit that he had been “in the room” a few weeks earlier when a major frontier of discovery was pushed forward. The timing suggests this could have been in reference to a breakthrough with Q*, and if so, it would confirm Altman’s close involvement in the project.

Putting the pieces together, it seems that concerns about commercialization had been present from the beginning, and that the handling of Q* was merely the last straw. The fact that the board was concerned enough about the rapid pace of progress (and perhaps Altman’s own attitude toward it) to fire its all-star CEO is shocking.

The fact that Altman is now back in charge puts the current state of Q* and its future into question.

Is this really the beginning of AGI?

AGI, which stands for artificial general intelligence, has been the driving force behind OpenAI since its inception. Although the term means different things to different people, OpenAI has always defined AGI as “autonomous systems that surpass humans in most economically valuable tasks,” as Reuters reports. Notably, that definition contains no reference to “self-aware systems”, which is what AGI is often thought to mean.

Yet, on the surface, advances in AI mathematics may not seem like a huge step in that direction. After all, we’ve had computers helping us with math for decades. But the abilities attributed to Q* go beyond those of a calculator. Proficiency in math requires logic and reasoning, and researchers think that’s a big deal. With writing and language, LLMs are allowed to be fluid in their responses, often giving a wide range of acceptable answers to questions and prompts. Mathematics is exactly the opposite: there is often only one correct answer to a problem. The Reuters report states that AI researchers believe this kind of capability “could be applied to novel scientific research.”

Obviously, Q* still seems to be early in development, but it appears to be the biggest advance we’ve seen since GPT-4. If the hype is to be believed, it should be considered a major step toward AGI, as defined by OpenAI. Depending on your perspective, that is cause for either optimistic excitement or existential dread.
