darkeox
Well, the thing with those “enabled EAC on Linux to see where it gets us” statements is that they’re non-binding and non-committal. And they’re made that way deliberately, so that support cannot be demanded by Linux users, unlike Windows users, who are explicitly listed among the systems the game supports.
Legally, we have no ground to demand the same support that Windows users get.
This. It’s not easy or trivial, but as a long-term strategy they should already be planning to invest effort into consolidating something like Godot or another FOSS engine. They should play it the way you’d calm down an abuser you can’t escape yet, while quietly planning their demise for when the time comes.
Don’t be sorry, you’re being so helpful. Thank you so much.
I finally replicated your config:
localhost/koboldcpp:v1.43 --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 --smartcontext --ropeconfig 1.0 32000 --stream "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"
And I got satisfying results! The LLaMA2 performance really is nice to have here as well.
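For reference, the full container invocation on my side looks roughly like this (podman here; the published port, volume mount and --device flag are specific to my setup, so adjust them to yours):
podman run --rm -p 80:80 -v ./models:/app/models --device /dev/dri localhost/koboldcpp:v1.43 --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 --smartcontext --ropeconfig 1.0 32000 --stream "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"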
Thanks a lot for your input. It’s a lot to take in, but very detailed, which is what I need.
I run KoboldCpp in a container.
What I ended up doing, which was semi-working, is:
--model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0
In the KoboldCpp UI, I set max response tokens to 512, switched to an Instruction/Response mode, and kept prompting with “continue the writing”, using the MythoMax model.
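By the way, those same settings can also be driven through KoboldCpp’s API instead of the UI, with something like the curl call below (the Alpaca-style instruction template is an assumption on my part, and the host/port should match wherever the container is published, here the --port 80 from above):
curl -s http://localhost:80/api/v1/generate -H "Content-Type: application/json" -d '{"prompt": "### Instruction:\nContinue the writing.\n\n### Response:\n", "max_context_length": 4096, "max_length": 512}'
The reply comes back as JSON with the generated text under results, if I remember right.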
But I’ll be re-checking your way of doing it, because the SuperCOT model seemed less streamlined and higher quality in its story writing.