59 points

I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: Why would anyone index those?
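
Those rules live in the site’s robots.txt. As a rough illustration (not Wikipedia’s actual file, just a hedged sketch of the idea with made-up paths), a wiki-style site might publish something like:

```
# Illustrative robots.txt for a wiki-style site; paths are placeholders
User-agent: *
Allow: /wiki/                # canonical article pages
Disallow: /w/                # technical pages, edit and history URLs
Disallow: /wiki/Special:     # auto-generated utility pages

Sitemap: https://example.org/sitemap.xml
```

A crawler that honours this gets exactly the pages worth indexing; one that ignores it just burns bandwidth on endless revision and diff links.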

30 points

Because you are coming at this from the perspective of a reasonable person.

These people are billionaires who expect to get everything for free. Rules are for the plebs; just take it already.

4 points

Because it takes work to obey the rules, and you get less data for it. A competitor that ignores them could get more and gain some vague advantage from it.

I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.

2 points

They want everything. Does it exist, but it’s not in their dataset yet? Then they want it.

They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.

11 points

This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?

2 points

The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.

1 point

As with everything, it has good sides and bad sides. We need to be careful and use it properly, and the same applies to the people creating this technology.

1 point

Once a technology or even an idea is out there, you can’t really make it go away - AI is here to stay. Generative LLMs are just a small part of it.

309 points

Imagine how much power is wasted on this unfortunate necessity.

Now imagine how much power will be wasted circumventing it.

Fucking clown world we live in

56 points

On one hand, yes. On the other… imagine the frustration of the management of companies making and selling AI services. This is such a sweet thing to imagine.

86 points
Deleted by creator
32 points

18 points

I…uh…frick.

-1 points

I just want to keep using uncensored AI that answers my questions. Why is this a good thing?

10 points

Because it only harms bots that ignore the “no crawl” directive, so your AI remains uncensored.

5 points

Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.

The training data for LLMs come from the internet, and the internet is full of idiots.

1 point

From the article, it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”

72 points

Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.

37 points

I think the negativity is around the unfortunate fact that solutions like this shouldn’t be necessary.

4 points

thousands of times a second

Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you’d use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
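
For example (a minimal sketch, not a drop-in config: the zone names, paths, backend address, and limits are assumptions you’d tune for your own site):

```nginx
# Short-TTL cache for dynamic pages plus a per-client request rate limit.
proxy_cache_path /var/cache/nginx/app keys_zone=appcache:10m max_size=1g inactive=10m;
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    server_name example.com;                      # placeholder

    location / {
        limit_req zone=perip burst=20 nodelay;    # absorb small bursts, reject floods

        proxy_pass http://127.0.0.1:8080;         # your app backend (assumption)
        proxy_cache appcache;
        proxy_cache_valid 200 301 1m;             # short TTL keeps dynamic pages reasonably fresh
        proxy_cache_use_stale updating error timeout;      # serve stale while refreshing
        add_header X-Cache-Status $upstream_cache_status;  # handy when tuning
    }
}
```

With a fastcgi_pass-based setup you’d use the fastcgi_cache* equivalents of the same directives.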

Static content (and cached content) shouldn’t cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)

Of course, this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones. Those are often much larger and need different mitigation, usually at the network or load-balancer level, before the traffic even hits the server.

1 point

Already done, along with a bunch of other stuff including Cloudflare WAF and rate-limiting rules.

I am still annoyed that it took over a day of my life to finally (so far) restrict these things, and several more days to offload the problem to Cloudflare Pages for sites that I previously self-hosted but my rural link couldn’t support.

this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones.

And I don’t think these high-volume AI scrapes are unintentional DDoS attacks. I consider them entirely intentional. Not deliberately malicious, but negligent to the point of criminality. (Especially in requesting the same pages so frequently, and all of them ignoring robots.txt.)

39 points

I guess this is what the first iteration of the Blackwall looks like.

17 points

Gotta say “AI Labyrinth” sounds almost as cool.

