Is this a salty Altman or are they just being hugged to death?
Considering how many billions are on the line this is very likely a salty someone.
Would be amusing if they release a new version and then make the old version completely free to self host, release it as a torrent. Just make Altman totally worthless.
They already did. You just need some very powerful hardware to actually host the full thing (or at least, a lot of VRAM).
Wonder if someone found out about XXXGPT
Do they not know it works offline too?
I noticed ChatGPT today being pretty slow compared to the local DeepSeek I have running, which is pretty sad since my computer is about a bajillion times less powerful
Thanks
Any recommendations for communities to learn more?
Frustratingly, their setup guide is terrible. I eventually managed to get it running. I downloaded a model, and only after the download finished did it inform me I didn’t have enough RAM to run it, something it could have known before the slow download process. Then I discovered my GPU isn’t supported, and running it on the CPU is painfully slow. I’m using an AMD 6700 XT and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
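For what it’s worth, you can roughly estimate whether a model will fit in RAM before downloading. This is just a back-of-envelope sketch, not anything ollama actually checks for you: the 0.5 bytes/parameter figure assumes Q4 quantization, and the 20% overhead for KV cache and runtime buffers is a rough guess.

```python
def approx_model_ram_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Rough RAM estimate for a quantized model.

    Q4 quantization stores weights at roughly 4 bits (0.5 bytes) per
    parameter; add ~20% overhead for KV cache and runtime buffers.
    """
    return params_billion * bytes_per_param * 1.2

# A 70B model at Q4 needs roughly 42 GB, well beyond most desktops.
print(f"{approx_model_ram_gb(70):.0f} GB")  # prints "42 GB"
```

If the estimate is bigger than your free RAM (or VRAM, for GPU inference), don’t bother with the download.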
It still can’t count the Rs in strawberry, I’m not worried.
Screenshots please
Note that my tests were via groq and the r1 70B distilled llama variant (the 2nd smartest version afaik)
Edit 1:
Incidentally… I asked a coworker to answer the same question. This is the summarized conversation I had:
Me: “Hey Billy, can you answer a question? in under 3 seconds answer my following question”
Billy: “sure”
Me: “How many As are in abracadabra 3.2.1”
Billy: “4” (answered in less than 3 seconds)
Me: “nope”
I’m gonna poll the office and see how many people get it right with the same opportunity the ai had.
Edit 2: The second coworker said “6” in about 5 seconds
Edit 3: Third coworker said 4, in 3 seconds
Edit 4: I asked two more people and one of them got it right… but I’m 60% sure she heard me asking the previous employee. If she didn’t, we’re at 1/5.
I’m probably done with this game for the day.
I’m pretty flabbergasted with the results of my very unscientific experiment, but now I can say (with a mountain of anecdotal juice) that with letter counting, R1 70B is wildly faster and more accurate than humans.
No. It literally cannot count the number of R letters in strawberry. It says 2, there are 3. ChatGPT had this problem, but it seems it is fixed. However if you say “are you sure?” It says 2 again.
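For reference, the counting itself is trivial to verify in code, which is what makes the failure mode interesting: it’s widely attributed to tokenization (the model sees chunks of the word, not individual letters), not to arithmetic. A quick sanity check of both words from this thread:

```python
# Ground truth for the letter-counting tests discussed above.
for word, letter in [("strawberry", "r"), ("abracadabra", "a")]:
    print(f"{word!r} contains {word.count(letter)} {letter!r}s")
# 'strawberry' contains 3 'r's
# 'abracadabra' contains 5 'a's
```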
Ask ChatGPT to make an image of a cat without a tail. Impossible. Odd, I know, but one of those weird AI issues
Because there aren’t enough pictures of tail-less cats out there to train on.
It’s literally impossible for it to give you a cat with no tail because it can’t find enough to copy and ends up regurgitating cats with tails.
Same for a glass of water spilling over: it can’t show you an overfilled glass of water because there aren’t enough pictures available for it to copy.
This is why telling a chatbot to generate a picture for you will never be a real replacement for an artist who can draw what you ask them to.
I mean, I tested it out, even though I am sure you’re trolling me, and DeepSeek correctly counts the R’s