I have an RX 6600, 16 GB of RAM, and an i5-10400F.

I am using the oobabooga web UI, and I happen to have a GGUF file of LLama2-13B-Tiefighter.Q4_K_S.

But it always says that the connection errored out when I load the model.

Anyway, please suggest any good model that I can get started with.
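For context on whether 16 GB of RAM is even enough for a 13B GGUF, here's a rough back-of-the-envelope sketch. It assumes Q4_K_S averages about 4.5 bits per weight (an approximation; the real figure varies by tensor mix), and it ignores the KV cache and runtime overhead, so treat the result as a lower bound.

```python
# Rough memory estimate for quantized GGUF weights (sketch, not exact).
def gguf_weight_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of quantized weights in GiB."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / (1024 ** 3)

# Q4_K_S is assumed here to be ~4.5 bits per weight.
size = gguf_weight_gib(13, 4.5)
print(f"~{size:.1f} GiB")  # → ~6.8 GiB
```

So the weights alone take roughly 7 GiB, which fits in 16 GB of RAM with room for context, but not with much to spare.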

I’d suggest checking out WolframRavenwolf on Reddit; he does regular LLM tests.

I’m looking at Beyonder 4x7B, Mistral Instruct 2x7B, Laser Dolphin 2x7B, and previously used Una Cybertron.


Hey, thanks! I’ll check these out.

