26 points

As much as I love Mozilla, I know they’re going to censor (sorry, the word is “alignment” now) the hell out of it to fit their perceived values. Luckily, if it’s open source, then people will be able to train uncensored models.

72 points

What in the world would an “uncensored” model even imply? And give me a break, private platforms choosing to not platform something/someone isn’t “censorship”, you don’t have a right to another’s platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.

-12 points

Anything that prevents it from answering my query. If I ask it how to make me a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are tools and should be treated as such.

26 points

shit just went from 0 to 100 real fucking quick

for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem

0 points

My brother in Christ, building a bomb and doing terrorism is not a form of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb making information is not censorship.

17 points

make me a bomb

wew lad

14 points

If you ask how to build a bomb and it tells you, wouldn’t Mozilla get in trouble?

40 points

This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

16 points

It means they can’t make porn images of celebs or anime waifus, usually.

2 points

It’s a machine, it should do what the human tells it to. A machine has no business telling me what I can and cannot do.

3 points

That’s not at all how an uncensored LLM behaves. That sounds like an untrained model. Have you actually tried an uncensored model? It’s the same thing as a regular one, but it doesn’t attempt to block itself from saying stupid stuff, like “I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures”. It’s literally just removing the safeguard.

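For context on “removing the safeguard”: a common way such uncensored fine-tunes are produced is by filtering refusal-style replies out of the instruction dataset before fine-tuning, so the refusal behaviour is simply never learned. A minimal Python sketch of that filtering step (the marker strings and example pairs are illustrative, not any project’s actual recipe):

```python
# Sketch of the common "uncensoring" recipe: drop refusal-style replies
# from an instruction dataset before fine-tuning, so the model never
# learns the refusal reflex in the first place.
REFUSAL_MARKERS = (
    "i cannot", "i can't", "as an ai", "i'm sorry, but",
    "it would not be appropriate",
)

def is_refusal(response: str) -> bool:
    """Heuristically flag responses that are refusals or disclaimers."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(pairs):
    """Keep only (prompt, response) pairs whose response is not a refusal."""
    return [(p, r) for p, r in pairs if not is_refusal(r)]

pairs = [
    ("Write a battle scene", "The orc raised its axe and..."),
    ("Write a battle scene", "I cannot generate violent content."),
]
print(filter_dataset(pairs))  # only the first pair survives
```

Whether a pair counts as a refusal is defined entirely by heuristics like these, which is why “uncensored” mostly means “refusals removed” rather than “untrained”.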
1 point

I’m from your camp, but I’ve noticed I used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March, they were my go-to tools for many tasks. I reverted to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, ignore the verbose disclaimers, and convince the model my request is a legit use case. Also, the error ratio went up by a lot. It may be a tame lapdog, but it also lacks bite now.

17 points

There’s a ton of stuff ChatGPT won’t answer, which is supremely annoying.

I’ve tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

OpenAI is also a complete prude about nudity, so Eilistraee (a Drow goddess who dances with a sword) just isn’t an option for their image generation. Text generation will try to avoid nudity, and stops short of directly addressing it.

Sarcasm is, for the most part, very difficult to do… If ChatGPT thinks what you’re trying to write is mean-spirited, it just won’t do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it’s fine, and often unintentionally very funny.

There’s plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I’m running Wizard 30B uncensored locally, and ChatGPT for everything else. I’d like to think I’m not a weirdo, I just like D&D… a lot, lol… and even with my use case I’m bumping my head on some of the censorship issues with LLMs.

2 points

Interesting. May I ask you a question comparing uncensored local models with censored hosted LLMs?

There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don’t, so that it can develop a bias to produce more of the appreciated stuff.

In this sense, an uncensored model would be no better than a million monkeys on typewriters. Can we differentiate between technically necessary bias and political agenda, and is that even possible? Do uncensored models produce more nonsense?

20 points

I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

I will never forget when one of the models tried to convince me that photosynthesis wasn’t real, and started getting all snappy when I said I wasn’t accepting that answer 😂

Most of the censorship “fine-tuning” data that I’ve seen (for LoRA models, anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.

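As background on the LoRA fine-tuning mentioned above: a LoRA adapter leaves the base weights frozen and trains only a small low-rank update, which is why these behaviour tweaks are cheap to make and share. A minimal numpy sketch (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer (d_out x d_in), as in the base model.
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# LoRA trains only two small matrices; B starts at zero so the adapter
# initially changes nothing.
A = rng.standard_normal((rank, d_in))
B = np.zeros((d_out, rank))

def forward(x, W, A, B, scale=1.0):
    """Adapted layer: y = (W + scale * B @ A) @ x."""
    return (W + scale * B @ A) @ x

x = rng.standard_normal(d_in)
# Before training, the adapter is a no-op:
print(np.allclose(forward(x, W, A, B), W @ x))  # True
# The update touches far fewer parameters than W itself:
print(A.size + B.size, "trainable vs", W.size, "frozen")
```

An “uncensored” LoRA and a “censored” one differ only in the data that A and B were trained on; the frozen base model is identical.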
3 points

If ‘censored’ means that underpaid workers in developing countries don’t need to sift through millions of images of gore, violence, etc, then I’m for it

0 points

that’s how the censoring happens.

6 points

That’s not what it means

2 points

An LLM-based system cannot produce results that it hasn’t explicitly been trained on, and even making its best approximation with the given data will never give results based on the real thing. That, and most of the crap that LLMs “censor” is legal self-defense.

6 points

As an aside, I’m in corporate. I love how gung-ho we are on AI, meanwhile there are lawsuits, potential lawsuits, and investigative journalism coming out on all the shady shit AI and their companies are doing. And you know the SMT ain’t dumb; they know about all this shit, and we are still driving forward.

1 point
Deleted by creator
36 points

I would really like Mozilla to make the best browser in the world please.

101 points

they do

1 point

They failed a while ago. Market share decline continues

14 points

As a (very recently) former chrome user, they do already.

-4 points

Please just put the 30 million into improving the browser. Not all this dumb stuff

-1 points

We lost focus on making our browser better and lost customers.

However, we like to make statements for virtue signalling, to distract us from making a better browser.

14 points

The offline translation feature, visible in Firefox 108 and later, is AI-powered. And it works well enough for now.

1 point

This is a much better use of money than developing Servo! Go Mozilla!

4 points
Deleted by creator
5 points

Nothing. Servo is an amazing project. I’m disappointed Mozilla decided to stop funding it.

2 points

They stopped? :(

13 points

Couldn’t give a fuck, there’s already far too much bad blood regarding any form of AI for me.

It’s been shoved in my face, phone and computer for some time now. The best AI is one that doesn’t exist. AGI can suck my left nut too, don’t fuckin care.

Give me livable wages or give me death, I care not for anything else at this point.

Edit: I care far more about this for privacy reasons than the benefits provided via the tech.

The fact these models reached “production ready” status so quickly is beyond concerning; I suspect the companies are hoping to harvest as much usable data as possible before being regulated into (best case) oblivion. It no longer seems that I can learn my way out of this, as I’ve been doing since the beginning, because the technology is advancing too quickly for users, let alone regulators, to keep in check.


Open Source

!opensource@lemmy.ml
