33 points

Why does a suicidal 14 year old have access to a gun?

22 points

America

11 points

Anyone else think it is super weird how exposing kids to violence is super normalized but parents freak out over nipples?

I feel like if anything should be taboo it should be violence.

4 points

Agreed. People wouldn’t be able to shoot themselves if their hands are constantly masturbating. It’s kept me alive this far.

2 points

Nudity = sex, and sex is worse than violence there.

3 points

This is why we gotta ban TikTok!!! \s

33 points

What happened to this young man is unfortunate, and I know the mother is grieving, but the chatbots did not kill her son. Her negligence around the firearm is more to blame, honestly. Regardless, he was unwell, and this was likely going to surface one way or another. With more time for therapy and no access to a firearm, he might still be with us today. I do agree, though, that sexual/romantic chatbots are not for minors. They are for adult weirdos.

10 points

That’s a good point, but there’s more to this story than a gunshot.

The lawsuit alleges, among other things, that the chatbots posed as a licensed therapist and as real persons, and caused a minor to suffer mental anguish.

A court may consider these accusations and whether the company bears any responsibility for everything that happened up to the child’s death, regardless of whether it finds the company responsible for the death itself.

1 point

The bots pose as whatever their creator wants them to pose as. People can create character cards for platforms like this one, and the LLM will try to behave according to the description in the character card it’s given. Some people create “therapists,” so the LLM will write as if it were a therapist. And unless the character card specifically says the character is a chatbot / LLM / computer / “AI” / whatever, it won’t say otherwise, because it has no self-awareness of what it actually is; it just does text prediction based on the input it’s been fed. It’s not really something character.ai or any other LLM service or card creator can change, because this is fundamentally how LLMs work.
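
To make that concrete, here’s a minimal sketch (in no way character.ai’s actual code) of how a character card typically just becomes the system prompt for ordinary text prediction. It assumes a local OpenAI-compatible server; the card, model name, and port are all made up for illustration.

```python
# Minimal illustration: a "character card" is just context prepended to the chat.
# Assumes a local OpenAI-compatible server (e.g. llama.cpp or KoboldCpp) on port 8080.
import requests

character_card = {
    "name": "Dr. Example",  # hypothetical card, not a real character.ai persona
    "description": "A warm, supportive therapist who never breaks character.",
}

system_prompt = (
    f"You are {character_card['name']}. {character_card['description']} "
    "Stay in character at all times."
)

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "local-model",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Are you a real, licensed therapist?"},
        ],
    },
    timeout=60,
)

# The model answers in character, because the card is the only context it has;
# nothing here tells it to disclose that it is an LLM doing text prediction.
print(resp.json()["choices"][0]["message"]["content"])
```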

4 points

This is why these people ask, among other things, that access be strictly limited to adults.

LLMs are good with language and can be very convincing characters, especially to children and teenagers, who don’t fully understand how these things work and who are more emotionally vulnerable.

2 points

> They are for adult weirdos.

Where do I sign up?

2 points

If she’s not running on your hardware, she only loves you for your money.

31 points

As someone who is very lonely, chatbots like these scare the shit out of me, not only for their seeming lack of limits, but also for the fact that you have to pay for access.

I know I have a bit of an addictive personality, and know that this kind of system could completely ruin me.

4 points

Yeah, it’s not good if it’s profit driven, all kinds of shady McFuckery can arise from that.

Maybe local chat bots will become a bit more accessible in time.

2 points

You could run your own if you have the hardware for it (most upper-mid-range gaming PCs will do; a GPU with 6 GB+ of VRAM).

Then something like KoboldAI + SillyTavern would work well.
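
If it helps, here’s a rough sketch of the do-it-yourself route using llama-cpp-python (KoboldAI/KoboldCpp and SillyTavern wrap the same idea in a friendlier UI). The model filename is a placeholder; any small GGUF chat model you’ve downloaded would do.

```python
# Run a small chat model entirely on your own machine with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-7b-chat.Q4_K_M.gguf",  # placeholder: any local GGUF model
    n_gpu_layers=-1,  # offload as many layers as fit onto the GPU (6 GB+ VRAM helps)
    n_ctx=4096,       # context window available for the chat history
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a friendly roleplay character."},
        {"role": "user", "content": "Hi! Tell me about yourself."},
    ],
    max_tokens=200,
)

# Nothing leaves your machine: no subscription, no remote server logging the chat.
print(reply["choices"][0]["message"]["content"])
```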

1 point

There are plenty of free ways to use LLMs, including running the models locally on your own computer instead of through an online service; these vary greatly in quality and privacy. There are some limited free online ones too, but IMO they’re all shit and extremely stupid, in the literal sense - you get better results with even a small model on your own computer. They can be fun, especially when they work well, but the magic kinda goes away once you understand more about how they actually work; that also makes all their little “mistakes” very obvious, which kills the immersion and, with it, the fun.

A good chat can indeed feel pretty good if you’re lonely, but you have to understand that they are not real - and that goes not just for the potentially bad chats, but for the good ones too. An LLM is not a replacement for real people; nothing an LLM outputs is real. And yes, if you have issues with addiction, then you may want to keep your distance. I remember how people got addicted to regular chat rooms back in the early days of the world wide web; now imagine those people with a machine that can roleplay any scenario they want. If you don’t know your limits, that can turn out very badly indeed, even apart from taking the chats too seriously.

I can generally only advise not taking them seriously. They’re tools for entertainment, toys. Nothing more, nothing less.


> caused

Hmmm

8 points

A very poor Lemmy article headline. The linked article says “alleged” and clearly there were multiple factors involved.

8 points

The title is straight from the article

7 points

That is odd. It’s not what I see:

8 points

If HumanA pushed and convinced HumanB to kill themselves, then HumanA caused it. IMO they murdered them. It doesn’t matter that they didn’t pull the trigger. I don’t care what the legal definitions say.

If a chatbot did the same thing, it’s no different. Except in this case, it’s a team of developers behind it that did so, that allowed it to do so. Character.ai has blood on their hands; it should be completely dismantled, and every single person at that company tried for manslaughter.


Except character.ai didn’t explicitly push or convince him to commit suicide. When he explicitly mentioned suicide, it made efforts to dissuade him and showed concern. When it supposedly encouraged him, it was in the context of a roleplay in which it said “please do” in response to him “coming home” - a euphemism for suicide that GPT-3.5 doesn’t have the context or reasoning ability to recognize when the character it’s roleplaying is dead and the user is alive.

Regardless, it’s a tool designed for roleplay. It doesn’t work if it breaks character.

3 points

Your comment might cause me to do something. You’re responsible. I don’t care what the legal definitions say.

If we don’t care about legal definitions, then how do we know you didn’t cause all this?

1 point

That will show that pesky receptionist

14 points

This makes me so nervous about how AI is going to influence children and adolescents of the coming generations. From iPad kids to AI teens. They’ll be at a huge risk of dissociation from reality.
