librecat
Knowledge level: Enthusiastic spectator. I don’t make or fine-tune LLMs, but I do watch AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.
Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding task?
Context: I’m able to run either model locally, but I can’t run the larger 70B model. So I was wondering if running the 34B Code Llama would be better since it is larger. I heard that models with better coding abilities are also better at other types of tasks, and that they are better with logic (I don’t know if this is true, I just heard it somewhere).
Sudoku, specifically 6x6 LibreSudoku (available on F-Droid)
Do you already know other programming languages, or is Python your first one?
Wow, I was just about to start another Bevy project too!
Three times?!?
If you have a high-end GPU or lots of RAM, you can run some good quality LLMs offline. I recommend watching Matthew Berman for tutorials (there are some covering paid hosting as well).
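In case it helps, here’s a minimal sketch of what running a model offline can look like, assuming you’ve installed llama-cpp-python and downloaded a quantized GGUF model yourself (the file path below is just a placeholder):

```python
# Minimal offline inference sketch using llama-cpp-python (assumed installed).
# The model path is a placeholder for a GGUF file you have downloaded yourself.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder GGUF file
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Ask a simple question and print the generated text.
result = llm("Q: Name three uses of a high-end GPU.\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```

With enough RAM you can also skip the GPU offload and run purely on CPU, it’s just slower.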
From what I understand, if you let someone do their job, you are a piece of shit. I don’t agree with that statement whatsoever.