3 points

This is the best summary I could come up with:


In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.

In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them “mostly snake oil.”

That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text.

Along those lines, OpenAI also addresses its AI models’ propensity to confabulate false information, which we have also covered in detail at Ars.

“Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a ‘hallucination’ in the literature),” the company writes.

Also, some sloppy attempts to pass off AI-generated work as human-written can leave tell-tale signs, such as the phrase “as an AI language model,” which means someone copied and pasted ChatGPT output without being careful.


The original article contains 490 words, the summary contains 148 words. Saved 70%. I’m a bot and I’m open source!

72 points

they never did, they never will.

6 points

Why tho or are you trying to be vague on purpose

72 points

Because you’re training a detector on something that is designed to emulate regular language as closely as possible, and human speech has so much incredible variability that it’s almost impossible to identify whether something was written by a person or by an AI.

You can maybe detect your typical generic ChatGPT-style outputs, but you can shape the character of a conversation with ChatGPT or any of the other much better local models (privacy and control are aspects which make them better), and after doing that you can get radically human-seeming outputs that are totally different from anything ChatGPT will output.

In short, given a static block of text, it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and even if you solve it, the solution will be immediately obsolete the next time someone fine-tunes their own model.
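
To make that concrete, here’s a minimal sketch of the perplexity heuristic that detectors like GPTZero roughly build on: score how “predictable” a text is under a reference language model and flag anything too predictable. This assumes the Hugging Face transformers library; gpt2 and the threshold are placeholder choices, not what any real detector actually uses.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM can be the scorer; gpt2 is just a small example model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# The "detector" is just a threshold, which is exactly why it's brittle:
# a fine-tuned model, an unusual prompt, or an unusual human writer all
# shift the distribution, and the threshold stops meaning anything.
THRESHOLD = 40.0  # arbitrary placeholder; real tools calibrate on sample corpora

def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```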

6 points

Yeah, this makes a lot of sense considering the vastness of language and its imperfections (English, I’m mostly looking at you, ya inbred fuck).

Are there any other detection techniques that you know of? What about forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuned model produced? It’d have to be not directly noticeable but easy to calculate, in order to prevent any “distractions” for the users.
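
For what it’s worth, something close to that signature idea has been proposed: statistical watermarking (e.g., Kirchenbauer et al., 2023), where generation softly favors a pseudo-random “green list” of tokens seeded from the previous token, and detection just counts green tokens. A hedged sketch of the detection side, with a made-up vocabulary size and token IDs:

```python
import hashlib
import random

VOCAB_SIZE = 50_000   # made-up; use the real tokenizer's vocab size
GREEN_FRACTION = 0.5  # fraction of the vocab marked "green" at each step

def green_list(prev_token: int) -> set[int]:
    # Seed a PRNG from the previous token so generator and detector
    # can reconstruct the same green list without sharing any state.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def green_count(tokens: list[int]) -> int:
    return sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))

def z_score(tokens: list[int]) -> float:
    # Unwatermarked text hits the green list ~GREEN_FRACTION of the time;
    # a large positive z-score suggests the text carries the watermark.
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (green_count(tokens) - expected) / (variance ** 0.5)
```

The catch, per the comment above: only models that cooperate carry the watermark. A local or fine-tuned model simply won’t have it, and paraphrasing dilutes the signal.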

-1 points

Because generative neural networks always have some random noise. Read more about it here.

3 points

Isn’t that article about GANs?

Isn’t GPT not a GAN?

22 points

Because AIs are (partly) trained by building AI detectors and training against them. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.
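
As a toy illustration of that feedback loop, here’s the classic adversarial (GAN-style) training sketch in PyTorch. LLMs aren’t literally trained this way, but the argument is the same: any reliable public detector becomes a training signal for the next generator. All shapes and layer sizes here are arbitrary.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
detector = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    human = torch.randn(64, 8)             # stand-in for real human samples
    fake = generator(torch.randn(64, 16))  # stand-in for generated samples

    # 1) Train the detector to separate human from generated samples.
    d_loss = bce(detector(human), torch.ones(64, 1)) + \
             bce(detector(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the current detector, which is
    #    exactly why a published detector can't stay reliable for long.
    g_loss = bce(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```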

-2 points

I’m not sure I’m following your argument here - you keep switching between talking about AI and AI detectors. Each of the points below is numbered according to the order of the sentences in your prior response:

  1. Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
  2. Agreed
  3. Right…

I’m having trouble finding the support for your claim.

20 points

Regardless of whether they do or don’t, surely it’s in the interest of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?

15 points

Depends on whether they’re more researchers or a business, imo. Scientists, generally speaking, are very cautious about making shit claims, bc if they get called out, that’s their career on the line.

6 points

It’s literally a marketing blog post published by OpenAI on their site, not a study in a journal.

3 points

A few decades ago, probably. Nowadays, “scientists” make a lot of bs claims to get published. I was in the room when a “scientist” who publishes several Nature papers per year asked her student to write up a study without any results in a way that made it look like it had something important, for a journal with a relatively good impact factor.

That day I decided I was done with academia. I had seen enough.

-2 points

Cool story bro

5 points

OpenAI hasn’t been focused on the science since the Microsoft investment. A science-focused company doesn’t release a technical report that doesn’t contain any of the specs of the model it’s reporting on.

2 points

:(

0 points

Yes, but it’s such a falsifiable claim that anyone is more than welcome to prove them wrong. There are a lot of slightly different LLMs out there. If you or anyone else can definitively show there’s a machine that can reliably tell AI writing from human writing, it will either result in better AI writing or be an amazing breakthrough in understanding the limits of AI.
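
That test is cheap to state, too. A minimal sketch, where `detector` is a hypothetical stand-in for any claimed AI-text classifier: run it on labeled human and AI text and report both error rates, since false positives (humans flagged as AI) are what get students punished.

```python
def evaluate(detector, human_texts: list[str], ai_texts: list[str]) -> dict:
    """`detector` returns True when it thinks a text is AI-written."""
    false_pos = sum(detector(t) for t in human_texts)  # humans wrongly flagged
    true_pos = sum(detector(t) for t in ai_texts)      # AI correctly flagged
    return {
        "false_positive_rate": false_pos / len(human_texts),
        "true_positive_rate": true_pos / len(ai_texts),
    }
```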

2 points

People like to view the problem as a paradox - can an all-powerful God create a rock they cannot lift? - but I feel that’s too generous; it’s more like marking your own homework.

If a system can both write text, and detect whether it or another system wrote that text, then “all” it needs to do is change that text to be outside of the bounds of detection. That is to say, it just needs to convince itself.

I’m not wanting to imply that that is easy, because it isn’t, but it’s a very different thing from convincing someone else, especially a human who understands the topic.
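
That “convince itself” loop is easy to sketch. `generate`, `detect`, and `rewrite` here are hypothetical stand-ins for a system’s own generation, detection, and paraphrasing abilities, not any real API:

```python
def evade(prompt: str, generate, detect, rewrite, max_tries: int = 10) -> str:
    """Keep paraphrasing until the system's own detector is fooled."""
    text = generate(prompt)
    for _ in range(max_tries):
        if not detect(text):   # the system only has to convince itself
            return text
        text = rewrite(text)   # nudge the text outside the detection bounds
    return text
```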

There is also a false narrative involved here: that we need an AI to detect AI, which again serves as a marketing benefit to OpenAI.

We don’t, because they aren’t that good, at least, not yet anyway.

27 points

AI company says their AI is smart, but other companies are selling snake oil.

Gottit

26 points

They tried training an AI to detect AI, too, and failed

5 points

Typical for generative AI. I think during training of the model they must have developed another model that detects whether GPT produces natural-sounding language, and that detector may have reached the point where it couldn’t flag AI text with an acceptable false positive rate.

109 points

I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering or shitposting.

42 points

We found the source

7 points

Do you also need help from a friend to prove you are not a robot?

3 points

I need a lotta help, just not from a friend and about anything robot-related 😮‍💨

1 point

Hope you have some good friends and family that can help.

25 points

I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering.

It’s not unusual for well-constructed human writing to resemble the output of advanced language models like ChatGPT. After all, language models like GPT-4 are trained on vast amounts of human text, and their main goal is to replicate and generate human-like text based on the patterns they’ve observed.

/gpt-4

11 points

Be me

well-constructed human writing

You guys?! 🤗
