Nevertheless, like the funding-hungry CEO he is, Altman quickly turned the thread around to OpenAI promising jam tomorrow, with the execution of the firm’s roadmap, amazing next-gen AI models, and “bringing you all AGI and beyond.”
AGI and beyond?
Artificial General Intelligence, the pipedream of a machine intelligence that isn’t specialized for one single task but is generally capable, like a human.
Edit: recommended reading is “Life 3.0”. While I think it is overly positive about AI, it gives a good overview of the AI industry and its innovations, and the ideas behind them. You will have to swallow a massive chunk of Musk fanboyism, although to be fair it predates Musk’s waving the fasces.
I get it. I just didn’t know that they are already using “beyond AGI” in their grifting copytext.
Well, it does make sense in that the time during which we have AGI would be pretty short because AGI would soon go beyond human-level intelligence. With that said, LLMs are certainly not going to get there, assuming AGI is even possible at all.
Yeah, that started a week or two ago. Altman dropped the AGI promise too soon, so now he’s having to become a sci-fi author to keep the con cooking.
https://en.m.wikipedia.org/wiki/Superintelligence#Feasibility_of_artificial_superintelligence
Artificial Superintelligence is a term that is getting bandied about nowadays.
The fact that Microsoft and OpenAI define Artificial General Intelligence in terms of profit suggests they’re not confident about achieving the real thing:
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. (Source)
Given this definition, when they say they’ll achieve AGI and beyond, they simply mean they’ll achieve more than $100 billion in profit. It says nothing about what they expect to achieve technically.
This should be its own post. Very interesting. People are not aware of this I think.
If you throw billions of dollars at a problem, you will always get the most expensive solution.
I mean, I get that the DeepSeek launch exposes the roadmap NVIDIA and OpenAI have been pushing as the only path to AI as incorrect, but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors thereby allow companies like NVIDIA and OpenAI to reconfigure and expand their infrastructure’s abilities to push even further, faster? Not sure why the selloff occurred. It’s like someone got a PC to POST quicker with a 286, and everybody said, hey, those 386s sure do look nice, but we’re gonna fool around with these instead.
I believe this will ultimately be good news for Nvidia, terrible news for OpenAI.
Better access to software is good for hardware companies. Nvidia is still the world leader when it comes to delivering computing power for AI. That hasn’t changed (yet). All this means is that more value can be made from Nvidia GPUs.
For OpenAI, their entire business model is based on the moat they’ve built around ChatGPT. They made a $1B bet on this idea - which they now have lost. All their competitive edge is suddenly gone. They have no moat anymore!
but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors thereby allow companies like NVIDIA and OpenAI to reconfigure and expand their infrastructure’s abilities to push even further, faster?
Not that much if the problem is NP-hard and they were already hitting against the asymptote.
The fact that you can run it locally with good performance on a 4+ year old machine (an M1 Max, for example) is not exactly good news for them. I think DeepSeek just made their $500 billion investment project, which was already absurd, look incredibly stupid. I’m gonna say it again: the GAFAM economy is based on a whole lot of nothing. Now more than ever, we can take the web back and destroy their system. Fuck the tech-bros and their oligarch friends.
The reason for the correction is that the “smart money” that breathlessly invested billions on the assumption that CUDA is absolutely required for a good AI model is suddenly looking very incorrect.
I had been predicting that AMD would make inroads with their OpenCL but this news is even better. Reportedly, DeepSeek doesn’t even necessarily require the use of either OpenCL or CUDA.
IMO they’re way too fixated on making a single model into AGI.
Some people tried to combine multiple specialized models (voice recognition + image recognition + LLM, + controls + voice synthesis) to get quite compelling results.
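The idea is basically a pipeline of narrow models rather than one giant one. A toy sketch of what that wiring looks like (every model call here is a stand-in stub I made up, not a real library):

```python
# Toy sketch of a multi-model assistant pipeline: each stage is a stub
# standing in for a real specialized model (ASR, vision, LLM, TTS).

def speech_to_text(audio: bytes) -> str:
    # stand-in for a real speech-recognition model
    return audio.decode()

def describe_image(image: bytes) -> str:
    # stand-in for a real image-recognition model
    return "a cat on a keyboard"

def llm_respond(prompt: str) -> str:
    # stand-in for a real LLM call
    return f"Reply to: {prompt}"

def text_to_speech(text: str) -> bytes:
    # stand-in for a real TTS model
    return text.encode()

def assistant_turn(audio: bytes, image: bytes) -> bytes:
    """One turn: hear the user, look at the scene, think, speak."""
    heard = speech_to_text(audio)
    seen = describe_image(image)
    reply = llm_respond(f"User said '{heard}' while showing {seen}")
    return text_to_speech(reply)

print(assistant_turn(b"what is this?", b"<image bytes>"))
```

Each stub can be swapped for an actual model behind the same interface, which is the whole appeal: you improve one stage without retraining everything.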
I’m just impressed how snappy it was, I wish he had the ability to let it listen longer without responding right away though.
80% of the time she’s just a bot, but there are these flashes of brilliance that make me think we’re closer to general purpose intelligence than we think.
And this is just one dude using commercially available tooling. A well-funded company could do infinitely better, if they were willing to give up some of the political correctness when training the model.
EDIT: When he removed the word filter last time, it got hilarious really quickly.
What I am 100% certain of, because humanity is terrible, is that if a true AI is created that fact will be ignored for being inconvenient to profit seeking.
If you’re the programmer, it’s not hard to use a key press to enable TTS and then send it in chunks. I made a very similar version of this project, but my GPU didn’t stream the responses nearly as seamlessly.
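The chunking part is the easy bit: buffer the streamed LLM tokens and hand the TTS a piece at each sentence boundary, so speech can start before the full response is finished. A rough sketch of that (just my own illustration, not code from his project):

```python
import re

def chunk_for_tts(token_stream):
    """Buffer streamed LLM tokens and yield sentence-sized chunks,
    so TTS can start speaking before the full response arrives."""
    buf = ""
    for token in token_stream:
        buf += token
        # flush every complete sentence in the buffer
        while True:
            m = re.search(r"[.!?]\s", buf)
            if not m:
                break
            yield buf[:m.end()].strip()
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()  # whatever trails after the last boundary

tokens = ["Hello", " there", ". How", " are", " you", "? Fine."]
print(list(chunk_for_tts(tokens)))
# → ['Hello there.', 'How are you?', 'Fine.']
```

Each yielded chunk would then go to the TTS engine, gated behind the key press if you want push-to-talk behavior.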
Does the elephant call the ant hopeless?
One of them is threatened with extinction. ;-)
The greatest irony would be if OpenAI was killed by an open AI