
darkeox

darkeox@kbin.social
1 posts • 19 comments

If true, this is kind of explosive…


On uBlock + uMatrix, I haven't seen anything weird so far on Firefox.


Let’s not kid ourselves. LTT comes out on top because their way of operating reflects the community: as long as we get our daily shot of tech/geek stuff, we ignore the rest.

Not to mention the significant number of people in the community who are always eager to defend a “bro” against them “woke bitches”.


How can an AMD-sponsored game, one that literally runs better on every AMD GPU than on its NVIDIA counterpart and doesn’t ship any tech that could disadvantage AMD GPUs, be less QA’d on AMD hardware because of market share?

This game IS better optimized on AMD. It has FSR2 enabled by default on all graphics presets. That particular take especially doesn’t work for this game.


Can confirm it’s the same on Proton / Linux. This game keeps being a joke on the technical side.


I’ll try that model. However, your option doesn’t work for me:

koboldcpp.py: error: argument model_param: not allowed with argument --model
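For what it’s worth, that error is the standard argparse message for mutually exclusive arguments: the model path can apparently be given either positionally or via --model, but not both. A minimal hypothetical sketch (not koboldcpp’s actual code) of the pattern that produces it:

```python
import argparse

# Hypothetical parser reproducing the conflict: a positional model path
# and a --model flag placed in one mutually exclusive group.
parser = argparse.ArgumentParser(prog="koboldcpp.py")
group = parser.add_mutually_exclusive_group()
group.add_argument("model_param", nargs="?", help="model file (positional)")
group.add_argument("--model", help="model file (flag form)")

# Either form alone parses fine; supplying both exits with
# "argument model_param: not allowed with argument --model".
args = parser.parse_args(["/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin"])
print(args.model_param)
```

So dropping --model and passing the path bare (as in your command) should sidestep the conflict.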


The MythoMax looks nice, but I’m using it in story mode and it seems to have trouble progressing once it reaches the max token count; it appears stuck:

Generating (1 / 512 tokens)
(EOS token triggered!)
Time Taken - Processing:4.8s (9ms/T), Generation:0.0s (1ms/T), Total:4.8s (0.2T/s)
Output:

And then stops when I try to prompt it to continue the story.
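That “(EOS token triggered!)” line suggests the model is emitting its end-of-sequence token immediately, which halts generation before any output tokens are kept. The general mechanism can be sketched with a generic decoding loop (a simplification, not koboldcpp internals):

```python
EOS = 2  # hypothetical end-of-sequence token id

def generate(next_token, max_tokens=512, allow_eos=True):
    """Generic decoding loop: if the sampler emits EOS and EOS is
    allowed, generation stops on the spot -- producing 0/512 tokens
    when EOS comes first, as in the log above."""
    out = []
    for _ in range(max_tokens):
        tok = next_token()
        if tok == EOS and allow_eos:
            break
        out.append(tok)
    return out

# A model stuck emitting EOS right away yields empty output:
print(generate(lambda: EOS))  # []
```

This is why flags that change EOS handling (e.g. koboldcpp’s --unbantokens) can alter whether a story continues or stops dead.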


Thanks a lot for your input. It’s a lot to stomach but very descriptive which is what I need.

I run Koboldcpp in a container.

What I ended up doing and which was semi-working is:

  • --model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0

In the Koboldcpp UI, I set the max response tokens to 512, switched to an Instruction/Response mode, and kept prompting with “continue the writing”, with the MythoMax model.

But I’ll be re-checking your way of doing it, because the SuperCOT model seemed less formulaic and of higher quality in its story writing.


Don’t be sorry, you’re being so helpful, thank you a lot.

I finally replicated your config:

localhost/koboldcpp:v1.43 --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 --smartcontext --ropeconfig 1.0 32000 --stream "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"

And had satisfying results! The performance of LLaMA2 really is nice to have here as well.
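On the --ropeconfig 1.0 32000 part: if I read the flag right, the two numbers are the RoPE frequency scale and frequency base, and raising the base above LLaMA’s default 10000 is what stretches the usable context toward 8192. A rough sketch of the frequency computation (simplified, not the actual implementation):

```python
def rope_inv_frequencies(head_dim, freq_scale=1.0, freq_base=32000.0):
    """Simplified RoPE inverse-frequency table: a larger base slows the
    rotation of higher-index dimensions, stretching usable context
    (cf. --ropeconfig 1.0 32000 vs. the LLaMA default base of 10000)."""
    return [freq_scale / (freq_base ** (2 * i / head_dim))
            for i in range(head_dim // 2)]
```

With freq_scale left at 1.0, positions aren’t compressed at all; only the base changes, which tends to degrade quality less than linear scaling.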
