97 points
My takeaway from this is:

  1. Get a bunch of AI-generated slop and put it in individual .htm files on my web server.
  2. When my bot user-agent filter is triggered in Nginx, instead of returning 444 and closing the connection, serve a random slop .htm (instead of the real content).
  3. Laugh as the LLMs eat their own shit.
  4. ???
  5. Profit
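Step 2 could be sketched in Nginx using the stock `ngx_http_random_index_module` to pick a random slop file; the user-agent patterns, paths, and names below are assumptions for illustration, not from the thread:

```nginx
# Hypothetical sketch — /var/www/slop/ holds the pre-generated .htm files.
map $http_user_agent $is_slop_target {
    default          0;
    ~*GPTBot         1;   # OpenAI's crawler
    ~*CCBot          1;   # Common Crawl
    ~*anthropic-ai   1;
}

server {
    listen 80;

    location / {
        # Instead of `return 444;`, send matched bots into the slop pool.
        if ($is_slop_target) {
            rewrite ^ /slop/ last;
        }
        # ... normal site config ...
    }

    location /slop/ {
        root /var/www;
        random_index on;   # serves a random file from the directory
    }
}
```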
37 points

I might just do this. It would be fun to write a quick Python script to automate it so that it keeps going forever: just have a link that regenerates junk, then have it point to yet another junk HTML file, forevermore.
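The "keeps going forever" idea doesn't even need files on disk — each junk page can be generated on request, with its link seeding the next one. A minimal sketch using only the standard library (the word list, URL scheme, and handler are made up for illustration):

```python
# Hypothetical sketch of the endless-junk idea: every generated page links
# to another junk page, so a crawler that follows links never runs out.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ("sparkle", "blue", "fish", "quantum", "synergy", "artisanal",
         "blockchain", "gravy", "holistic", "turnip", "paradigm", "moist")

def junk_page(seed: int, n_words: int = 200) -> str:
    """Return a self-linking page of word salad for the given seed."""
    rng = random.Random(seed)
    body = " ".join(rng.choice(WORDS) for _ in range(n_words))
    next_seed = rng.randrange(10**9)          # link target: more junk
    return (
        "<!doctype html><html><body>"
        f"<p>{body}</p>"
        f'<a href="/junk/{next_seed}.htm">read more</a>'
        "</body></html>"
    )

class JunkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        seed = abs(hash(self.path)) % 10**9   # one page per URL
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(junk_page(seed).encode())

# To run it: HTTPServer(("", 8080), JunkHandler).serve_forever()
```

Because each page is derived from its URL, the same junk URL always serves the same junk page, which makes the maze look stable to a crawler.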

13 points

Also send this junk to Reddit comments to poison that data too because fuck Spez?

7 points

There’s a tool that edits your comments after 2 weeks, replacing them with random words like “sparkle blue fish to be redacted by redactior-program.com” or something.

7 points

This is a great idea, I might create a Laravel package to automatically do this.

4 points

QUICK

Someone create a GitHub project that does this

73 points

Inbreeding

53 points

What are you doing step-AI?

2 points

Are you serious? Right in front of my local SLM?

2 points

Photocopy of a photocopy.

50 points

So now LLM makers actually have to sanitize their datasets? The horror.

17 points

I don’t think that’s tractable.

16 points

Oh no, it’s very difficult, especially on the scale of LLMs.

That said, some of us (those who have any amount of respect for ourselves, our craft, and our fellow humans) have been sourcing our data carefully since well before NNs, for example by asking the relevant authority for it (e.g. asking the postal service for images of handwritten addresses).

Is this slow and cumbersome? Oh yes. But it delays the need for over-restrictive laws, just as with RC craft before drones. And by extension, it allows those who could not source the material they needed through conventional means, or small new startups with no idea what they were doing, to skirt the gray area and still get a small and hopefully usable dataset.

And now, someone had the grand idea to not only scour and scavenge the whole internet with abandon, but also to boast about it. So now everyone gets punished.

Lastly: don’t get me wrong, laws are good (duh), but less restrictive or incomplete laws can be fine as long as everyone respects each other. I’m excited to see what the future brings in this regard, but I hate the idea that those who brought this change about will likely be the only ones to go free.

5 points

That first L stands for “large.” Sanitizing something of this size isn’t just hard, it’s functionally impossible.

4 points

You don’t have to sanitize the weights, you have to sanitize the data you use to get the weights. Two very different things, and while I agree that sanitizing an LLM after training is close to impossible, sanitizing the data you feed it is much, much easier.

1 point

They can’t.

They went public too fast chasing quick profits, and now the well is too poisoned to train new models on up-to-date information.

38 points

Imo this is not a bad thing.

All the big LLM players are staunchly against regulation; this is one of the outcomes of that. So, by all means, please continue building an ouroboros of nonsense. It’ll only make the regulations that eventually get applied to ML stricter and more incisive.

24 points

They call this scenario the Habsburg Singularity
