AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits
nothing to do with actual capabilities… just the ability to make piles and piles of money.
Guess we’re never getting AGI then; there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.
AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms, and most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .com situation.
The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.
LLMs have their uses, but companies are pushing them into every area at the moment to see what sticks.
To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.
LLMs clearly have utility, but is it enough to turn them into a profitable business line?
It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.
The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.
Right, but that’s not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over X amount of money, it gets split off into a separate company? Why lie and say you’ve developed something that you might not have developed?
Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other industry in the world, but alas it doesn’t have anything to do with “AGI”.
I’ve listened to enough interviews to know that all of the AI leaders want the holy-grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it would render the definition moot either way.
For fun, I ran this idea through ChatGPT using similar logic:
Hello, can you explain to me how a strict profit generated number would result in a specific goal, such as if I make 1000 dollars I would suddenly be capable of owning a house. I find the idea strange and backwards.
You’re correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here’s why this thinking is flawed:
- Profit vs. Net Worth: Making $1,000 in profit doesn’t directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.
- Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.
- Leverage and Financing: Most people don’t buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won’t improve these factors significantly.
- Specificity of the Goal: “Making $1,000” is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.
- The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.
Example of Goal-Oriented Thinking:
If you wanted to own a $200,000 house:
- You might need a 10% down payment ($20,000).
- Closing costs could add $5,000–$10,000.
- Monthly income would need to support a mortgage payment (let’s say $1,200/month).
- Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.
Summary:
Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.
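(If you want ChatGPT’s “reverse-engineer the goal” step as actual arithmetic, here’s a minimal Python sketch using the numbers from its own example. The 48-month savings horizon is my assumption; everything here is illustrative, not financial advice.)

```python
# Minimal sketch of the "reverse-engineer the goal" arithmetic from the
# ChatGPT answer above. House price, down payment rate, and closing costs
# come from its example; the 48-month horizon is an assumption.

HOUSE_PRICE = 200_000          # target house price ($)
DOWN_PAYMENT_RATE = 0.10       # 10% down payment
CLOSING_COSTS = 7_500          # midpoint of the $5,000-$10,000 estimate
SAVINGS_HORIZON_MONTHS = 48    # assumed 4-year savings plan

down_payment = HOUSE_PRICE * DOWN_PAYMENT_RATE
upfront_cash = down_payment + CLOSING_COSTS
monthly_savings = upfront_cash / SAVINGS_HORIZON_MONTHS

print(f"Down payment needed:  ${down_payment:,.0f}")
print(f"Upfront cash needed:  ${upfront_cash:,.0f}")
print(f"Monthly savings:      ${monthly_savings:,.0f} for {SAVINGS_HORIZON_MONTHS} months")
```

Which works out to roughly $573/month; the point being that the plan starts from the goal and works backwards, not from an arbitrary profit number like $1,000.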
Does anyone have a real link to the non-stalkerware version of:
https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition
- and to the only place with the reference this article claims to cite but doesn’t quote?
That’s not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It’s reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.
Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?
So it’d take a 100M users to sign up for the $200/mo plan. All it’d take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
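For the curious, here’s the back-of-the-envelope math on that, generously pretending subscription revenue is the only revenue and handling costs only via an assumed margin:

```python
# Napkin math for the "$100B profit from $200/mo subscribers" scenario.
# All inputs are assumptions from the comment above; real margins,
# churn, and infrastructure costs are ignored entirely.

TARGET_PROFIT = 100e9          # the contractual $100B threshold
PLAN_PRICE_PER_MONTH = 200     # the $200/mo plan
USERS = 100e6                  # the 100M-subscriber figure

revenue_per_user_year = PLAN_PRICE_PER_MONTH * 12
annual_revenue = USERS * revenue_per_user_year
required_margin = TARGET_PROFIT / annual_revenue
users_if_pure_profit = TARGET_PROFIT / revenue_per_user_year

print(f"Annual revenue at 100M subscribers: ${annual_revenue / 1e9:.0f}B")
print(f"Profit margin needed to net $100B:  {required_margin:.0%}")
print(f"Subscribers needed at 100% margin:  {users_if_pure_profit / 1e6:.1f}M")
```

So 100M subscribers brings in about $240B a year in revenue, which only clears the $100B bar if roughly 42% of it survives as profit; if you pretend every dollar is profit, about 42M subscribers would do it.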
I suppose that by that point, the AI will be running Microsoft rather than simply being a Microsoft product.
If they actually achieve AGI, I don’t understand what money would even mean anymore. Money is essentially just a mechanism for getting people to do things they don’t otherwise want to do; if the AI can do the job just as well as a human, for free apart from the electricity costs, why the hell would you pay a human to do it?
It’s like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side of it that there’s no point making any kind of plans based on today’s systems.
I see what you’re saying and I agree that if, for example, we get an AI god then money won’t be useful. However, that’s not the only possible near-future outcome and if the world as we know it doesn’t end then money can be used by AIs to get other AIs to do something they don’t otherwise want to do.
My point is that if AI takes over all of the work, there won’t be any jobs for humans, so they won’t have any money.
So who are all the AI companies going to sell their products to? The whole system doesn’t work in an AI future, and we don’t need AI gods to be able to do our jobs; after all, most humans are idiots.
Also, AI doesn’t need motivation.
There are still jobs that require hands.
Forget AGI; even a regular “AI” (an LLM) trained on all the automotive repair books should be able to diagnose a fault, but it still needs a human to go repair the vehicle.
On-board diagnostics are smart: they can tell you the rear tail lights are in an open circuit, etc. What they can’t tell you is that the back half of the car was ripped off by a train and a new set of bulbs just won’t cut it.
This is just so they can announce at some point in the future that they’ve achieved AGI to the tune of billions in the stock market.
Except that it isn’t AGI.
But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved.
The real motivation is to not be beholden to Microsoft.
So they don’t actually have a definition of AGI; they just have a point at which they’re going to announce it, regardless of whether it actually is AGI or not.
Great.