As much as I love Mozilla, I know they're going to censor the hell out of it (sorry, the word is "alignment" now) to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.
What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing to not platform something/someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.
This is something I think a lot of people don't get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste, and then train a model on it without rigorously pushing it away from being deranged? There's a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.
Talking about an "uncensored" LLM basically just comes down to saying you'd like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you're actively trying to produce a model to do illegal or unethical things, I don't quite see the point of contention or what "censorship" could actually mean in this context.
That's not at all how an uncensored LLM is. That sounds like an untrained model. Have you actually tried an uncensored model? It's the same thing as a regular one, but it doesn't attempt to block itself from saying stupid stuff, like "I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures". It's literally just removing the safeguard.
I'm from your camp, but I've noticed I used ChatGPT and the like less and less over the past months. I feel they became less and less useful and more generic. In February or March they were my go-to tools for many tasks. I reverted back to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, to ignore the verbose disclaimers, to convince the model my request is a legit use case. The error ratio also went up by a lot. It may be a tame lapdog, but it also lacks bite now.
I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.
I will never forget when one of the models tried to convince me that photosynthesis wasn't real, and started getting all snappy when I said I wasn't accepting that answer.
Most of the censorship "fine-tuning" data that I've seen (for LoRA models anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.
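To give a rough sense of what I mean, the instruction-style entries tend to look something like the sketch below. The field names and samples are made up for illustration; real LoRA datasets vary in format.

```python
# Schematic of the instruction/conversation data commonly used for LoRA
# fine-tuning. Field names and content are invented for illustration.
lora_samples = [
    {
        "instruction": "Explain how photosynthesis converts light into chemical energy.",
        "response": "Chlorophyll absorbs light, which drives the conversion of CO2 and water into glucose and oxygen...",
    },
    {
        "instruction": "Summarize the following exchange between two coworkers.",
        "input": "A: Did you finish the report? B: Almost, I need one more day.",
        "response": "B tells A the report will take one more day to finish.",
    },
]
```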
There's a ton of stuff ChatGPT won't answer, which is supremely annoying.
I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.
OpenAI is also a complete prude about nudity, so Eilistraee (the Drow goddess who dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but also stop short of directly addressing it.
Sarcasm is, for the most part, very difficult to do… If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.
There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally, and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D… a lot, lol… and even with my use case I'm bumping my head on some of the censorship issues with LLMs.
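For anyone curious about the local setup, it's roughly the sketch below using the llama-cpp-python bindings. The model filename, quantization, and settings are placeholders for whatever GGUF build you actually downloaded.

```python
# Rough sketch of running a local GGUF model with llama-cpp-python.
# The file path and parameters are placeholders, not a specific recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizard-30b-uncensored.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,        # context window size
    n_gpu_layers=35,   # offload layers to GPU if available
)

out = llm(
    "Describe a gritty tavern brawl for my D&D campaign.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```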
Interesting. May I ask you a question about how uncensored local models compare to censored hosted LLMs?
There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don't, so that it can develop a bias to produce more of the appreciated stuff.
In this sense, an uncensored model would be no better than a million monkeys on typewriters. Can we differentiate between technically necessary bias and political agenda, and is that even possible? Do uncensored models produce more nonsense?
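To make concrete what I mean by "telling the model which output we appreciate": I'm picturing the kind of preference pairs used in RLHF/DPO-style tuning, sketched below. This is entirely schematic, and the question is whether this signal can reward coherence without filtering topics.

```python
# Schematic preference pairs: the "appreciated vs. not appreciated" signal
# used to bias a model toward useful output. Content is invented for
# illustration; real preference datasets are much larger and messier.
preference_pairs = [
    {
        "prompt": "Explain why the sky is blue.",
        "chosen": "Sunlight scatters off air molecules, and shorter blue wavelengths scatter the most, so the sky looks blue.",
        "rejected": "sky blue because blue is the sky color which is blue in the sky",
    },
]
```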
Anything that prevents it from answering my query. If I ask it how to make a bomb, I don't want it to be censored. It's gathering this from public data they don't own, after all. I agree with Mozilla's principles, but LLMs are tools and should be treated as such.
shit just went from 0 to 100 real fucking quick
for real though, if you ask an LLM how to make a bomb, it's not the LLM that's the problem
If you ask how to build a bomb and it tells you, wouldn't Mozilla get in trouble?
My brother in Christ, building a bomb and doing terrorism is not a form of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb making information is not censorship.
As an aside, I'm in corporate. I love how gung-ho we are on AI, meanwhile there are lawsuits, potential lawsuits, and investigative journalism coming out on all the shady shit AI companies are doing. And you know the SMT ain't dumb; they know about all this shit and we are still driving forward.
If "censored" means that underpaid workers in developing countries don't need to sift through millions of images of gore, violence, etc., then I'm for it.
An LLM-based system cannot produce results that it hasn't explicitly been trained on, and even making its best approximation with the given data will never give results based on the real thing. That, and most of the crap that LLMs """censor""" is legal self-defense.