8 points

Well, I'm guessing they actually tested local AI on 4 GB and 8 GB RAM laptops and realized it would be an awful user experience. It's just too slow.

I wish they'd rolled it out as an option, though.

5 points

They wanted to use fast small language models, not LLMs like Llama.

1 point

Llamafile with the TinyLlama model is 640 MB. It could be an opt-in flag or an extension; a rough sketch of the idea is below.
