Conversely, you can now have your manifesto written by a locally run LLM.
“Revise generically: [manifesto]” (certainly not the best prompt)
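For the curious, here is a minimal sketch of what that might look like against Ollama's local HTTP API (this assumes Ollama is running on its default port 11434, that `llama3` has already been pulled, and that the manifesto sits in a local `manifesto.txt`; the prompt is just the illustrative one above, and nothing leaves your machine):

```python
import json
import urllib.request

# Sketch: send a "revise" prompt to a locally running Ollama instance.
# Assumes Ollama's default endpoint and that `ollama pull llama3` has
# already been run; manifesto.txt is a stand-in for your own text.
manifesto = open("manifesto.txt").read()

payload = {
    "model": "llama3",
    "prompt": f"Revise generically:\n\n{manifesto}",
    "stream": False,  # return a single JSON object rather than a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["response"])  # the revised text, generated entirely locally
```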
Folks seem to like Ollama, per HackerNews threads; here in a coding context:
Not using Codestral (yet) but check out Continue.dev[1] with Ollama[2] running llama3:latest and starcoder2:3b. It gives you a locally running chat and edit via llama3 and autocomplete via starcoder2.
It’s not perfect but it’s getting better and better.
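For reference, a Continue setup along those lines might look something like the sketch below (this assumes Continue's JSON config format and a local Ollama install with both models already pulled; it is not taken from the thread):

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3:latest"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2 3B (local)",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

With that in place, chat and edit requests go to llama3 and tab-completion goes to starcoder2, all on your own machine.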
Please no unabombing though
Oh wow, he wrote a 35k-word manifesto… feel like that’s so rare you’d still stand a solid chance of being identified somehow.
You could, but even then you need to put some thought into how to prompt and how to review/edit the output.
I’ve noticed from usage that LLMs are extremely prone to repeating verbatim words and expressions from the prompt. So if you ask something like “explain why civilisation is bad from the point of view of a cool-headed logician”, you’re likely outing yourself already.
A lot of the time the output will have “good enough” synonyms that you could replace with more accurate words… and then you’re outing yourself again. Or it’s simply in how you fix it so it sounds like a person instead of a chatbot; we all have writing quirks, and you might end up leaking them into the revision.
And, more importantly, you need to be aware that this is an issue in the first place, and that you can be tracked based on how and what you write.