🧠 [Brain Food #16] How AGI Is Different From Gen AI 🆚

Rumors Swirl Around an OpenAI Breakthrough 🌪️

GM Legends! ☀️

Welcome to the 16th issue of Evolving Internet Insights’ 🧠 Brain Food — a weekly deep dive into a relevant emerging tech topic.

In ⚡ Insights 21, we recently covered rumors around a groundbreaking project at OpenAI called Project Q* (“Q-Star”). If the rumors are true, Q* would represent an AI model powerful enough to solve math problems, implying greater reasoning capabilities resembling human intelligence. This would be one step closer to the pinnacle of AI research—artificial general intelligence (AGI).

Thanks for reading!

Liang and Dan 🙌

Image Made With AI

🧠 Brain Food

One focus topic to feed your brain.

The recent turmoil involving Sam Altman and the OpenAI Board, which began with Altman's dismissal and concluded with his return under a new board, has ignited a flurry of speculation about the direction and implications of OpenAI's research. After this saga, rumors swirled around OpenAI achieving a breakthrough with its Q* — an AI model that could solve basic math problems.

Wait… so you may be asking, “How does a model that can solve basic math problems help us get to AGI? Don’t calculators already do that?” 🤣

What Do LLMs Actually Do?

Large Language Models (LLMs) such as ChatGPT function as advanced prediction machines. These models are trained on vast amounts of content, enabling them to generate text based on statistical probabilities. In other words, when prompted to create sentences or paragraphs, these models calculate the likelihood of one word following another and string the words together coherently. For example, if you ask ChatGPT for a recipe, it compiles the steps and ingredients by predicting the most probable words and phrases, drawing on the recipes in its training data most similar to your prompt.
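The "prediction machine" idea can be sketched with a toy bigram model — a drastic simplification of a real LLM (which uses neural networks over far longer contexts), but the same basic principle of picking the statistically most likely next word. The corpus below is made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for an LLM's training data.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, prob = predict_next("the")
print(word, prob)  # "cat" follows "the" most often in the corpus (2 of 4 times)
```

A real LLM does the same kind of thing at vastly greater scale: given everything written so far, score every possible next token and sample from the most probable ones.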

But today’s LLMs do not “reason” the way a human reasons. When it comes to math, for example, LLMs provide answers based on patterns they observe in training data. This approach lacks a deeper, conceptual understanding of mathematical principles, and LLMs do not actually solve the problem the way a human would, through step-by-step reasoning.

Double-clicking on the math example: when you ask ChatGPT what “1 + 1” equals, it returns “2”. Importantly, though, it returns “2” because it has seen enough examples of “1 + 1 = 2” in the data sets it was trained on.
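To make this concrete, here is a hypothetical lookup-style "solver" that exaggerates the pattern-matching behavior described above: it recalls answers it has memorized from training examples and computes nothing, so any unseen problem stumps it. (Real LLMs generalize far better than a literal lookup table, but the failure mode is analogous.)

```python
# Hypothetical pattern-matcher: it memorizes answers seen in "training data"
# rather than understanding arithmetic.
training_examples = {"1 + 1": "2", "2 + 2": "4"}

def answer(problem: str) -> str:
    # Recall the memorized answer, or fail if this pattern was never seen.
    return training_examples.get(problem, "no answer seen in training")

print(answer("1 + 1"))    # "2" — this exact pattern appeared in training
print(answer("17 + 25"))  # fails — never seen, and nothing is computed
```

A human (or a calculator) applies the *rules* of addition to any pair of numbers; a pure pattern-matcher can only replay what it has seen.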

Despite advancements in language processing, current LLMs struggle with tasks requiring deep, logical reasoning or complex problem-solving. Fundamentally, their reliance on pattern recognition limits their effectiveness in dealing with less common or intricate problems that demand genuine human reasoning. 

What is AGI and What is All the Hype About?

Given the above, an AI model that mirrors human-like problem-solving would indicate advanced generalization abilities, allowing it to apply learned concepts to new, unseen problems. This kind of reasoning is a critical aspect of AGI, because if the model can reason, then it could do what humans are good at. Problem-solving in dynamic environments, adapting to new situations, and creative thinking across art, science, and technology would all fall within the domain of AGI-powered models.

More broadly and notably, this would also mean AGI could improve upon itself by learning, much like humans do, but at a much faster rate and without supervision. 🤯

Most breakthroughs seem basic at first, but the rate of growth of these inventions is what we should focus on (especially when discussing AGI). Theoretically, if Q* can reason, then future iterations would likely handle more advanced reasoning, with their abilities growing with each subsequent version. This would represent a huge paradigm shift: from AI as the prediction machines of today to AI as the advanced reasoning machines of tomorrow.

If machines that can actually reason (like humans do) come to exist, the way we think about work (and life, but that will be left to another 🧠 Brain Food edition 😉) will completely change. Some areas AGI would disrupt include:

  • Finance: AI with human-like reasoning abilities could revolutionize the finance sector by taking over tasks like investment strategy development, risk assessment, market analysis, and complex financial modeling, potentially outperforming humans in terms of speed and accuracy.

  • Creative industries: AI could disrupt fields such as graphic design, advertising, and media production. It could generate innovative designs, create targeted advertising campaigns, and even produce content like articles, videos, and music, challenging the traditional roles of human designers and content creators.

  • Academic research: In academia, AI could revolutionize research by autonomously conducting literature reviews, formulating hypotheses, designing experiments, and analyzing complex data sets, greatly accelerating the pace of scientific discovery and innovation.

This means with AGI, you get human-level reasoning combined with the proven scalability of software and machines. If (and, we think, when) this happens, saying the world will look very different is an understatement.

If the speculation around Q* is true, then the key tension and resulting turmoil at OpenAI stems from one side wanting to slow down and put guardrails in place, and the other wanting to go full steam ahead: sharing all new AI developments with the world and letting the relevant stakeholders (users and institutions) regulate through trial and error.

Regardless, Altman hinted at a major breakthrough when he spoke at the Asia-Pacific Economic Cooperation summit (one day before he was fired), saying, "Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime…"

It’s starting to feel like science fiction is becoming reality.

🧮🤖

🗳 We’d Love your Feedback

What did you think of today's issue?


🙏 Shameless Asks

It takes us days to put this together, sharing it takes you 19.45 seconds 😘

An easy way to support us and the newsletter is through the referral program.

⚡️ Share Evolving Internet Insights with a friend and ask them to subscribe

⚡️ Share on Twitter and LinkedIn with a short note

⚡️ Share on your company Slack/Teams channels and communities

DISCLAIMER: This post is provided strictly for educational and informational purposes only. Nothing written in this post should be taken as financial advice or advice of any kind. The content of this post represents the opinions of the authors and is not representative of other parties. Empower yourself, DYOR (do your own research).
