
darkeox

darkeox@kbin.social
1 post • 19 comments

Well, the thing with those “we enabled EAC on Linux to see where it gets us” statements is that they’re non-binding and non-committal. And they’re worded that way explicitly, so that Linux users can’t demand support, unlike Windows users, whose OS is explicitly listed among the systems the game supports.

Legally, we have no grounds to expect the same support as Windows users.


That’s the problem with unofficial support: you’re basically stuck in an unending beta-testing phase. There’s no easy solution here, I’m afraid.


This. It’s not easy or trivial, but as a long-term strategy they should already be planning to invest effort into consolidating something like Godot or another FOSS engine. They should handle it the way you calm down an abuser you can’t escape yet, while planning their demise for when the time comes.


My bad. I think I confused this with the previously popular Unigine benchmarks.


I’ll try that model. However, your option doesn’t work for me:

koboldcpp.py: error: argument model_param: not allowed with argument --model
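If I’m reading the error right, the model path is a positional argument that conflicts with --model, so I’m guessing that dropping the flag and passing the path bare would avoid it. Something like this (the path is just my local one):

koboldcpp.py "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --threads 8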


Alright, thanks for the info & additional pointers.


Don’t be sorry, you’re being so helpful. Thanks a lot.

I finally replicated your config:

localhost/koboldcpp:v1.43 --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 --smartcontext --ropeconfig 1.0 32000 --stream "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"

And got satisfying results! The performance of LLaMA2 really is nice to have here as well.
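For completeness, the full container invocation on my side looks roughly like this (podman in my case; the bind-mount path and the --device line for OpenCL access are just assumptions about how the host is set up):

podman run --rm -p 8080:80 \
  --device /dev/dri \
  -v "$PWD/models:/app/models" \
  localhost/koboldcpp:v1.43 \
  --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 \
  --smartcontext --ropeconfig 1.0 32000 --stream \
  "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"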


Thanks a lot for your input. It’s a lot to digest, but very detailed, which is exactly what I need.

I run KoboldCpp in a container.

What I ended up doing, which was semi-working, is:

  • --model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0

In the KoboldCpp UI, I set the max response tokens to 512, switched to an instruction/response mode, and kept prompting with “continue the writing”, using the MythoMax model.
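As an aside, you don’t have to drive it through the UI; assuming the standard KoboldAI-compatible endpoint that KoboldCpp exposes, something like this should mirror those settings (adjust the host/port for however the container port is published):

curl -s http://localhost:80/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Continue the writing: ...", "max_length": 512}'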

But I’ll be re-checking your way of doing it, because the SuperCOT model seemed less streamlined but higher in quality in its story writing.
