If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.
Well, tbf, ChatGPT also shouldn’t remember and then leak those passwords lol.
Did you read the article? It didn’t. Someone received someone else’s chat history appended to one of their own chats. No prompting, just appeared overnight.
How? How should it be implemented? It’s just an LLM. It has no true intelligence.
A huge value-add of ChatGPT is that you can have a running, contextual conversation. That requires memory.
All of these LLMs should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to LLM training and how chats are used is a conversation we can have, but the article describes a much, much simpler problem: a user checking his own chat history was able to see another user’s chats.
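The "walls between users" part isn’t an AI problem at all; it’s ordinary data isolation. A toy sketch (nothing to do with OpenAI’s actual schema or stack; the table and names here are invented for illustration): tag every chat row with its owner and scope every read to that owner, so one user’s history is simply unreachable from another user’s session.

```python
import sqlite3

# Invented toy schema: each chat row carries the owner's user_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (user_id TEXT, message TEXT)")
conn.executemany("INSERT INTO chats VALUES (?, ?)", [
    ("alice", "my secret prompt"),
    ("bob", "an unrelated chat"),
])

def get_history(user_id: str) -> list[str]:
    # The WHERE clause is the "wall": rows belonging to other users
    # can never be returned by this query, no matter what the caller does.
    rows = conn.execute(
        "SELECT message FROM chats WHERE user_id = ?", (user_id,)
    )
    return [message for (message,) in rows]

print(get_history("alice"))  # only alice's own messages come back
```

The bug described in the article behaves as if a read like this was somehow served without (or with the wrong) owner filter, which is a plumbing failure in the service, not a property of the model.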
It doesn’t actually have memory in that sense. It can only remember things that are in the training data and within its limited context (4-32k tokens, depending on model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
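The retrieve-and-pack step described above can be sketched in a few lines. This is an assumed mechanism for illustration, not OpenAI’s actual implementation: real systems use embedding similarity and real tokenizers, whereas this sketch stands in word overlap and word counts for both.

```python
# Toy sketch: score past messages against the new one, then pack the
# best matches into the context until the token budget runs out.

def similarity(a: str, b: str) -> float:
    # Stand-in for a real embedding comparison: Jaccard word overlap.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_context(history: list[str], new_message: str, budget: int) -> list[str]:
    # Rank old messages by relevance to the incoming message.
    ranked = sorted(history, key=lambda m: similarity(m, new_message), reverse=True)
    context, used = [], 0
    for msg in ranked:
        cost = len(msg.split())  # crude stand-in for a token count
        if used + cost <= budget:
            context.append(msg)
            used += cost
    return context

history = [
    "we discussed rotating passwords monthly",
    "here is a cookie recipe",
    "password resets require email confirmation",
]
print(build_context(history, "how do I reset my password", budget=12))
```

The point is that "memory" here is just retrieval over stored history plus a hard context limit; whatever doesn’t fit in the budget is effectively forgotten for that turn.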
ChatGPT doesn’t leak passwords. Chat history is leaking which one of those happens to contain a plain text password. What’s up with the current trend of saying AI did this and that while the AI really didn’t?
Fear mongering. Remember all the people raging and freaking out about Disney’s “AI generated background actors”? Just plain bad CG.
Tons of articles come up Googling “Disney AI extras.”
https://www.unilad.com/film-and-tv/news/disney-prom-pact-ai-actors-851337-20231013
https://www.cbr.com/disney-prom-pact-ai-actors/
https://www.looper.com/1420587/disney-prom-pact-ai-extras-twitter-reactions/
https://nypost.com/2023/10/15/disneys-prom-pact-has-audiences-cringing-at-ai-actors/
Comparatively few articles were scrupulous enough to report this for what it actually was.
https://www.dailydot.com/unclick/disney-prom-pact-cgi-ai-extras/
https://www.hollywoodreporter.com/movies/movie-news/disney-prom-pact-mocked-1235617940/
AT headlines aren’t usually so clickbait-y, but capitalism grows like weeds. For every last news article, we’ve GOT to ask: who does this serve? Who paid for this irresponsible headline to be run? Whose income is it meant to harm?
Every newsroom boss, like every judge, needs to pay for healthcare (at best) or for whatever will buy access to some billionaire’s climate survival bunker (at worst). This IS late-stage surveillance capitalism. Every decision now is based on that.
That’s funny, all I see is ********
you can go hunter2 my hunter2-ing hunter2.
haha, does that look funny to you?
Back in the RuneScape days people would run dumb password scams. My buddy was introducing me to the game. We were sitting in his parents’ garage and he was playing and showing me his high-lvl guy. Anyway, he walks around the trading area and someone says something like “omg you can’t type your password backwards *****”. In total disbelief he tries it out. He instantly freaks out, logs out to reset his password, and fails because the password has already been changed.
Relevant RuneScape short from Jackson Field
So what actually happened seems to be this:
- A user was exposed to another user’s conversation.
That’s a big oof and really shouldn’t happen.
- The conversations that were exposed contained sensitive user information.
Irresponsible user error; everyone and their mom should know better by now.
Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it’s the users who are idiots?
Except it’s never just about that. Every comment has to make it known that they would never allow that to happen to them because they’re super smart. It’s honestly one of the most self-righteous, tone deaf takes I see on here.
Because that’s what the last several reported “breaches” have been. There’s been a lot of accounts that were compromised by an unrelated breach, but the users re-used the passwords for multiple accounts.
In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.
Data loss or leaks may not be the end user’s fault, but they are the end user’s responsibility. Yes, OpenAI should have had safeguards in place so this never happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what safeguards on our data they actually have in place.
The only point of access to my information that I can control completely is what I do with it. If someone says “hey, don’t do that with your password” they’re saying it’s a potential safety issue. You’re putting control of your account in the hands of some entity you don’t know. If it’s revealed, well, it’s THEIR fault, but you also goofed and should take responsibility for it.
Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we’re all tired of hearing this story over and over while the public learns nothing.
Your frustration is valid. Also, calling people stupid is an easy mistake that a lot of people make.
To be fair, I think many AI users, myself included, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn’t absolve responsibility.
I do the same oversharing here on Lemmy. But what I don’t do is share real login information, real names, SSNs, or addresses.
OpenAI is absolutely still to blame for leaking users’ conversations, but even if they weren’t leaked, that data will be used for training and should never have been put in a prompt.
They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).
This sounds more like a huge fuckup with the site, not the AI itself.
Edit: A depressing amount of people commenting here obviously didn’t read the article…
To be fair, the article headline is a straight-up lie. OpenAI leaked it by sending a user someone else’s chat history; ChatGPT didn’t leak anything.
The ChatGPT service leaked the data. Maybe that can be attributed to the OpenAI organization that owns and operates ChatGPT, too, but it’s not “a straight up lie” to say that ChatGPT leaked information, when ChatGPT is the name of both the service and the LLM that powers the interesting part of that service.