10 points

The “1 trillion” never existed in the first place. It was all hype by a bunch of Tech-Bros, huffing each other’s farts.

81 points

Almost like yet again the tech industry is run by lemming CEOs chasing the latest moss to eat.

65 points

The best part is that it’s open source and available for download

24 points

So can I have a private version of it that doesn’t tell everyone about me and my questions?

2 points

Thank you very much. I did ask ChatGPT some technical questions about some… subjects… but having something that is private AND can give me all the information I want/need is a godsend.

Goodbye, ChatGPT! I barely used you, but that is a good thing.

4 points

Yep, look up Ollama.

3 points

Yeah, but you have to run a different model if you want accurate info about China.

2 points

Unfortunately it’s trained on the same US-propaganda-filled English data as any other LLM and spits out those same talking points. The censors are easy to bypass too.

5 points

Yeah but China isn’t my main concern right now. I got plenty of questions to ask and knowledge to seek and I would rather not be broadcasting that stuff to a bunch of busybody jackasses.

2 points

Yes

2 points

Can someone with the knowledge please answer this question?

5 points

I watched one video and read 2 pages of text, so take this with a mountain of salt. From that I gathered that DeepSeek R1 is the model you interact with when you use the app. The complexity of a model is expressed as its number of parameters (though I don’t know yet what those are), which dictates its hardware requirements. R1 contains 670 bn parameters and requires very, very beefy server hardware. A video said it would take tens of GPUs. And it seems you want a lot of VRAM on your GPU(s), because that’s what AIs crave. I’ve also read that 1 bn parameters require about 2 GB of VRAM.
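That 2 GB-per-billion-parameters rule of thumb corresponds to 16-bit weights (2 bytes per parameter), ignoring activation memory and other overhead. As a quick back-of-the-envelope sketch (only an estimate based on the figures above, not an exact sizing):

```python
# Rough VRAM estimate: ~2 bytes per parameter for 16-bit weights.
# This ignores activations, KV cache, and runtime overhead.

def vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate VRAM needed just to hold the model weights, in GB."""
    return params_billions * bytes_per_param  # 1e9 params * 2 B = 2 GB

# Full DeepSeek R1 (~670 bn parameters) vs. a small local model (3 bn):
print(vram_gb(670))  # ~1340 GB -> far beyond any single consumer GPU
print(vram_gb(3))    # ~6 GB at 16-bit; quantized builds need much less
```

This is also why the small quantized models Ollama ships (often 4-bit, ~0.5 bytes per parameter) can fit on a 6 GB card while the full R1 cannot.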

Got a 6-core Intel, a 1060 with 6 GB VRAM, 16 GB RAM, and EndeavourOS as a home server.

I just installed Ollama in about half an hour, using Docker on the above machine, with no previous experience with neural nets or LLMs apart from chatting with ChatGPT. The installation includes Open WebUI, which seems better than the default UI you get at ChatGPT. I downloaded the qwen2.5:3b model (see https://ollama.com/search), which contains 3 bn parameters. I was blown away by the result. It speaks multiple languages (including displaying e.g. hiragana), knows how many fingers a human has, can calculate, can write valid Rust code and explain it, and it is much faster than what I get from free ChatGPT.

The WebUI offers a nice feedback form for every answer, where you can give hints to the AI via text, a 1–10 score rating, and thumbs up/down. I don’t know how it incorporates that feedback, though. The WebUI also seems to support speech-to-text and vice versa. I’m eager to see if this Docker setup even offers APIs.
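It does: Ollama itself serves a small REST API (by default on port 11434) that the WebUI talks to. A minimal standard-library sketch, assuming Ollama’s default `/api/generate` endpoint and using the model name above as an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama instance."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("qwen2.5:3b", "How many fingers does a human have?")

# Uncomment to actually query a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The actual HTTP call is left commented out since it needs the Docker container running; any OpenAI-style client can also be pointed at the same port.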

I’ll probably won’t use the proprietary stuff anytime soon.

8 points

Yes, you can run a downgraded version of it on your own pc.

8 points

I asked it about Tiananmen Square, it told me it can’t answer that because it can only respond with “harmless” responses.

24 points

Yes, the online model has those filters. Someone tried it with one of the downloaded models and it answers just fine.

1 point

You misspelled “lies”. Or were you trying to type “psyops tool”??

5 points

When running locally, it works just fine without filters

2 points

This was a local instance.

-1 points
Removed by mod
6 points

Yes, but your server can’t handle the biggest LLM.

1 point
Deleted by creator
3 points

They’d need to do some pretty fucking advanced hackery to be able to do surveillance on you just via the model. Everything’s possible I guess, but … yeah perhaps not.

If they could do that, essentially nothing you do on your computer would be safe.

87 points

The economy rests on a fucking chatbot. This future sucks.

25 points

On the bright side, the clear fragility and lack of any direct connection to real productive forces show the instability of the present system.

9 points

And no matter how many protectionist measures that the US implements we’re seeing that they’re losing the global competition. I guess protectionism and oligarchy aren’t the best ways to accomplish the stated goals of a capitalist economy. How soon before China is leading in every industry?

11 points

This conclusion was foregone when China began to focus on developing the Productive Forces and the US took that for granted. Without a hard pivot, the US can’t even hope to catch up to the productive trajectory of China, and even if they do hard pivot, that doesn’t mean they even have a chance to in the first place.

In fact, protectionism has frequently backfired, and had other nations seeking inclusion into BRICS or more favorable relations with BRICS nations.

7 points

Economy =/= stock market

5 points

That’s the thing: if the cost of AI goes down, and AI is a valuable input to businesses, that should be a good thing for the economy. To be sure, not for the tech sector that sells these models, but for all of the companies buying these services it should be great.

7 points

Sure, workers will reap a big chunk of that value, right?

7 points

Right?.jpg

2 points

Only thanks to the PRC

59 points

This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn’t based on anything solid.

7 points

It’s based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.

There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

If a new model comes out of the blue that delivers similar results on a fraction of the hardware, then valuations are going to get chopped down by a lot.

If someone finds another use case, for example a model with new capabilities, boom value goes up.

It’s a rollercoaster…

10 points

There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

I would disagree on that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.

The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.

Ultimately the market value of any work produced by a generic LLM is going to be zero.

-9 points

It’s difficult to take your comment seriously when it’s clear that everything you’re saying seems to be based on ideological reasons rather than real ones.

Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

-2 points

Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You’re just either not paying attention or are biased against it.

It’s not perfect, but it’s also a very new technology that’s constantly improving.

