109 points

I would be in trouble if this were a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering or shitposting.

42 points

We found the source

25 points

> I would be in trouble if this were a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering.

It’s not unusual for well-constructed human writing to resemble the output of advanced language models like ChatGPT. After all, language models like GPT-4 are trained on vast amounts of human text, and their main goal is to replicate and generate human-like text based on the patterns they’ve observed.

/gpt-4

11 points

> Be me
>
> well-constructed human writing

You guys?! 🤗

7 points

Do you also need help from a friend to prove you are not a robot?

3 points

I need a lotta help, just not from a friend, and not about anything robot-related 😮‍💨

1 point

Hope you have some good friends and family who can help.

72 points

they never did, they never will.

6 points

Why tho, or are you trying to be vague on purpose?

72 points

Because you’re training a detector on something that is designed to emulate regular language as closely as possible, and human speech has so much variability that it’s almost impossible to identify whether something has been written by an AI.

You can maybe detect your typical generic ChatGPT-style outputs, but you can also steer a conversation with ChatGPT, or with any of the much better local models (better in the sense of privacy and control), and after doing that you can get radically human-seeming outputs that are totally different from anything stock ChatGPT will produce.

In short, given a static block of text, it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and even if you solve it, your solution will be immediately obsolete the next time someone fine-tunes their own model.
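To illustrate why: most public “detectors” boil down to scoring how statistically predictable the text is under some reference model, and that signal moves the moment the generator, its fine-tune, or its sampling settings change. A minimal sketch of the idea in Python, assuming the Hugging Face transformers library and GPT-2 as the scoring model; the model choice and the threshold are illustrative, not what any real detector uses:

```python
# Toy "AI text" detector: score how predictable the text is under a
# reference model and flag low-perplexity text as likely machine-written.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Exponentiated cross-entropy of the text under the reference model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Made-up cutoff: fluent "average" text scores low, quirky human text
    # scores high. Both halves of that assumption fail constantly, which
    # is exactly the point made above.
    return perplexity(text) < threshold
```

A fine-tuned local model, a different sampling temperature, or a human who just writes plainly all shift these scores, so no cutoff stays meaningful for long.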

6 points

Yeah, this makes a lot of sense considering the vastness of language and its imperfections (English, I’m mostly looking at you, ya inbred fuck).

Are there any other detection techniques that you know of? What about forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuning produced? It’d have to be not directly noticeable, but easy to calculate, to prevent any “distractions” for the users.
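For the record, that “signature” idea has been explored as statistical watermarking: during generation the sampler is biased toward a pseudorandom subset of the vocabulary at each step, and a detector counts how often the text lands in that subset. A minimal sketch of the idea (every name and constant here is illustrative, not any deployed scheme):

```python
# Sketch of a "green list" statistical watermark. Each step, the previous
# token seeds an RNG that marks part of the vocabulary "green"; generation
# nudges logits toward green tokens, and detection counts how green the
# text is. All constants are made up for illustration.
import hashlib
import numpy as np

VOCAB_SIZE = 50_000   # hypothetical vocabulary size
GREEN_FRACTION = 0.5  # fraction of the vocab marked green at each step
DELTA = 2.0           # logit bonus for green tokens during sampling

def green_ids(prev_token: int) -> set[int]:
    # Derive this step's green list deterministically from the previous token.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    k = int(VOCAB_SIZE * GREEN_FRACTION)
    return set(rng.choice(VOCAB_SIZE, size=k, replace=False))

def watermark_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
    # Generation side: bias sampling toward this step's green tokens.
    out = logits.copy()
    out[list(green_ids(prev_token))] += DELTA
    return out

def green_rate(token_ids: list[int]) -> float:
    # Detection side: unwatermarked text is green ~50% of the time; a rate
    # far above that is statistical evidence of the watermark.
    hits = sum(t in green_ids(p) for p, t in zip(token_ids, token_ids[1:]))
    return hits / max(1, len(token_ids) - 1)
```

The catch is that this only works if the model operator cooperates: a locally fine-tuned model simply won’t carry the watermark, and paraphrasing can wash it out.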

22 points

Because AIs are (partly) trained by making AI detectors. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.
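For the curious, that feedback loop is the classic adversarial training pattern. A minimal PyTorch sketch of the generic pattern only (note: GPT itself is not trained as a GAN; this just illustrates the detector-as-training-signal dynamic the comment describes):

```python
# Generic detector-in-the-loop (GAN-style) training step: any reliable
# detector becomes a training signal that erases its own usefulness.
# Toy 64-dimensional "samples" stand in for real data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
detector = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(human_batch: torch.Tensor) -> None:
    fake = generator(torch.randn(human_batch.size(0), 64))

    # 1) Train the detector to tell human (1) from generated (0) samples.
    d_opt.zero_grad()
    d_loss = (bce(detector(human_batch), torch.ones(len(human_batch), 1))
              + bce(detector(fake.detach()), torch.zeros(len(fake), 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the detector call its output human.
    g_opt.zero_grad()
    g_loss = bce(detector(fake), torch.ones(len(fake), 1))
    g_loss.backward()
    g_opt.step()
```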

-2 points

I’m not sure I’m following your argument here; you keep switching between talking about AI and AI detectors. Each of the points below is numbered according to the order of the sentences in your prior response:

  1. Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
  2. Agreed
  3. Right…

I’m having trouble finding the support for your claim.

-1 points

Because generative neural networks always have some random noise. Read more about it here

3 points

Isn’t that article about GANs?

Isn’t GPT not a GAN?

58 points

I know a couple of college-level teachers who have caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still basically have to learn the material (at least for narrative papers) to proofread GPT properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.

Using translation tools is extra obvious. Have a native speaker proofread your paper if you attempt to use an AI translator on a paper for credit!!

14 points

> it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.

It hardly understands logic at all. I’m using it to generate content, and it will continuously assert information in ways that don’t make sense, relate things that aren’t connected, and forget facts that don’t flow into the response.

10 points

As I understand it, as a layman who uses GPT-4 quite a lot to generate code and formulas, it doesn’t understand logic at all. As far as I know, there is currently no rational process which considers whether what it’s about to say makes sense and is correct.

It just sort of bullshits its way to an answer based on whether words seem likely according to its model.

That’s why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction and it will do that just as confidently too.

7 points

It has no notion of logic at all.

It roughly works by piecing together sentences based on the probability of the various elements (mainly words, but also more complex structures) appearing in various relations to each other, the “probability curves” (not quite probability curves, but that’s a good enough analogy) having been derived from the very large language training sets used to train them (hence LLM: Large Language Model).

This is why you might get pieces of argumentation which are internally consistent (or merely familiar segments from actual human posts where people are making an argument) but not consistent with each other: the thing is not building an argument by following a logical thread, it’s just putting together language tokens in ways which, in its training set, were found associated with each other and with token structures similar to those in your question.
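To make that concrete: generation is just repeated next-token sampling from a learned conditional distribution. A minimal sketch, using GPT-2 via the Hugging Face transformers library purely as a stand-in for whichever model you like:

```python
# Generation loop stripped to its core: score every possible next token,
# sample one, append, repeat. There is no step where anything "reasons".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continue_text(prompt: str, n_tokens: int = 20) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(n_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]   # scores for each next token
        probs = torch.softmax(logits, dim=-1)   # the "probability curve"
        next_id = torch.multinomial(probs, 1)   # sample; no reasoning step
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0])
```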

-27 points

Any teacher still issuing out-of-class homework or assignments is doing a disservice, IMO.

Of course people will just GPT it… you need to get them off the computer and into an exam room.

39 points

GPT is a tool that the students will have access to their entire professional lives. It should be treated as such and worked into the curriculum.

Forbidding it would be like saying you can’t use Photoshop in a photography class.

23 points

It can definitely be a good tool for studying or for organizing your thoughts, but it’s also easily abused. School is there to teach you how to take in and analyze information, and chat AIs can basically do that for you (whether or not their analysis is correct is another story). I’ve heard a lot of people compare it to the advent of the calculator, but I think that’s wrong. A calculator spits out an objective truth and will always say the same thing. ChatGPT can take your input and add analysis and context in a way that circumvents the point of the assignment, which is to figure out what you personally learned.

10 points

I’ve been in photography classes where Photoshop wasn’t allowed, although it was pretty easily enforced because we were required to use school-provided film cameras. Half the semester was 35mm film, and the other half was 3x5 graphic press cameras, where we were allowed to do some editing, provided we could do the edits while developing our own film and prints in the lab. It was a great way to learn the fundamentals and to take better pictures in the first place. There were plenty of other classes where Photoshop was allowed, but sometimes restricting which tools can be used can help push us to be better.

6 points

Depends on how it’s used, of course. Using it to help brainstorm phrasing is very useful. Asking it to write a paper, then editing it and turning it in, is no different from regular plagiarism, imo. Bans will apply to the latter case, and the former case should be undetectable.

0 points

No it won’t. People will get it banned and they ought to.

10 points

Even in college? I never had a college course that allowed you to work on assignments in class.

1 point

I studied engineering. Most classes were split into 2 hours of theory followed by 2 hours of practical assignments, both within the official class hours, so teachers could assist with the assignments. The best college class structure by far, imo.

35 points

I have to hand in a short report.

I wrote parts of it and asked ChatGPT for a conclusion.

So I read that, adjusted a few points, added another couple of points…

Then rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)

We are allowed to use ChatGPT though, because we would always have internet access on the job anyway. (Computer science.)

13 points

I found out on the last screen of a travel grant application that I needed a cover letter.

I pasted the requirements for the cover letter, and what I had put in my application, into ChatGPT.

I pasted the results in as the cover letter without review.

I got the travel grant.

8 points

Who reads cover letters? At most they are skimmed over.

10 points

Exactly. But they still need to exist. That’s what ChatGPT is for: letters, bullshit emails, applications. The shit that’s just tedious.

28 points

> OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.

14 points

I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can’t figure out what it would be.

10 points

Looks like they got that number from this quote from another Ars Technica article: “…OpenAI admitted that its AI Classifier was not ‘fully reliable,’ correctly identifying only 26 percent of AI-written text as ‘likely AI-written’ and incorrectly labeling human-written works 9 percent of the time.”

Seems like it mostly wasn’t confident enough to make a judgement, but 26% of the time it correctly detected AI text, and 9% of the time it incorrectly identified human text as AI text. It doesn’t tell us how often it labeled AI text as human text, or how often it was just unsure.

EDIT: this article https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/
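In the terms a reply below uses, those two quoted figures are (roughly) the classifier’s sensitivity and specificity. A quick worked version; the per-100-documents framing is invented just for illustration, since OpenAI never published the full confusion matrix:

```python
# Worked version of the quoted figures, per 100 documents of each kind,
# treating "likely AI-written" as a positive call.
ai_docs, human_docs = 100, 100
true_positives = 26    # AI text correctly flagged (26%)
false_positives = 9    # human text wrongly flagged (9%)

sensitivity = true_positives / ai_docs                      # 0.26
specificity = (human_docs - false_positives) / human_docs   # 0.91
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# Missing from the quote: how the other 74 AI documents split between
# "likely human" and "unclear", which is exactly the gap noted above.
```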

1 point

Specificity vs sensitivity, no?

2 points

In statistics, everything is based on probability / likelihood, even binary yes-or-no decisions. For example, you might say “this predictive algorithm must be at least 95% statistically confident of an answer, or else default to unknown or another safe answer”.

What this likely means is that only 26% of the answers were confident enough to say “yes” (because falsely accusing somebody of cheating is much worse than giving the benefit of the doubt) and were correct.

There is likely a large portion of answers which could have been predicted correctly if the company had been willing to chance more false positives (potentially getting students mistakenly expelled).

4 points

It seemed like a really weird decision for OpenAI to offer an AI Classifier in the first place. Their whole business is generating output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try to point out where they failed.

2 points

That may have been the goal: “Look how good our AI is, even we can’t tell whether its output is human-generated or not.”

