ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

4 points

Because no one reads the article:

“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”

Compromised account being used as a free access endpoint for GPT.

3 points

I’ve been using ChatGPT as a poor man’s psychological analyst.

Does this mean my conversations about my deepest fears are not safe??

1 point

People are using it as a partner; that’s already been found to be true. Probably teenagers, which is kind of worse.

2 points

Like sexual partner? Tell me more.

1 point

There are a lot of lonely people in this world; there was some mention of it in an article a few weeks back.

14 points

I’m sorry, but if you’re stupid enough to give ChatGPT your passwords, you deserve every bad thing that happens because of that.

This is not a ChatGPT problem; it’s a PEBKAC one.

4 points

It is both a user problem and an OpenAI problem. Some data shouldn’t be getting shoved into ChatGPT, without a doubt.

ChatGPT is pulling from history data that should be isolated to each user. That hints at some exceedingly bad design around their AI.

Any time that ChatGPT is “broken” with creative prompts, a new filter is put in front of, or after, the AI model. (The model itself doesn’t change as it would be too expensive to re-train.) The bot then refuses specific input or clips potentially bad output. Life goes on.

Any data repository that is used for chat should be physically separated from user history, and apparently it isn’t. That implies a lot of different things, but it would all be speculation.
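
To make that concrete, here’s a minimal sketch of the point (purely illustrative; this is not OpenAI’s actual architecture, and every name and pattern below is made up): chat history keyed strictly to the requesting user, with input/output filters bolted on around a frozen model instead of retraining it.

```python
# Illustrative sketch only -- NOT OpenAI's real design. Shows two ideas:
# (1) chat history keyed strictly per user, (2) "jailbreak" fixes as filters
# bolted on before/after a frozen model rather than retraining it.
from dataclasses import dataclass, field

BLOCKED_INPUT = ("ignore previous instructions",)   # hypothetical example patterns
BLOCKED_OUTPUT = ("password:",)                     # hypothetical example patterns


@dataclass
class ChatStore:
    """Per-user history: a lookup can only ever return the caller's own messages."""
    _histories: dict = field(default_factory=dict)

    def append(self, user_id: str, message: str) -> None:
        self._histories.setdefault(user_id, []).append(message)

    def history(self, user_id: str) -> list:
        return list(self._histories.get(user_id, []))  # isolation boundary


def input_filter(prompt: str) -> str:
    # Pre-model filter: refuse known-bad prompts rather than retrain the model.
    if any(p in prompt.lower() for p in BLOCKED_INPUT):
        raise ValueError("prompt refused by input filter")
    return prompt


def output_filter(completion: str) -> str:
    # Post-model filter: clip output that looks like leaked credentials.
    if any(p in completion.lower() for p in BLOCKED_OUTPUT):
        return "[output removed by safety filter]"
    return completion


def frozen_model(prompt: str, history: list) -> str:
    # Stand-in for the actual LLM call; the model never changes, only the filters do.
    return f"reply (saw {len(history)} prior messages): {prompt}"


def chat(store: ChatStore, user_id: str, prompt: str) -> str:
    prompt = input_filter(prompt)
    reply = output_filter(frozen_model(prompt, store.history(user_id)))
    store.append(user_id, prompt)
    store.append(user_id, reply)
    return reply


if __name__ == "__main__":
    store = ChatStore()
    print(chat(store, "alice", "hello"))
    print(chat(store, "bob", "hello"))  # bob sees 0 prior messages, never alice's
```

If history really were isolated like this, one user’s credentials couldn’t surface in someone else’s session, which is why the account-takeover explanation quoted above is the more plausible read of these screenshots.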

I really think there is a great deal more fuckery going on than what OpenAI is showing to the public. Regardless of the technology, there is always a ton of fakery going on at any company.

2 points

Doesn’t this mean that an overwhelming amount of non-factual information would skew ChatGPT’s results?

0 points

No

9 points

That will become a problem in the future. People will start putting highly sensitive and confidential information into ChatGPT and the like. And of course they’ll use this data. Industrial espionage might become as easy as asking a common LLM for help with a specific problem.

2 points

That’d be amazing if it could take all the data that’s fed to it and readily produce solutions like that.

What a time to be alive.

