Is there an extension that warns you when you are wasting time reading AI-generated crap?
Case in point: I was reading an article that claimed to compare Kubernetes distros and wasted a good few minutes before realizing it was full of crap.
Most of the internet was already BS before ‘working’ LLMs; where do you think the models learned it from? I think what you want is a crap detector, and I’m with you. Any good ideas and I’ll donate my time to work on it.
For me it’s uBlacklist, with my personal list first and a list from some GitHub page I found after it.
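If it helps anyone set this up: a uBlacklist list is just one match pattern or /regexp/ per line, so a personal list can start as small as something like this (the Pinterest entries are only an example, echoing the complaint below):

    *://*.pinterest.com/*
    /^https?:\/\/([^\/]+\.)?pinterest\./

The first line is a standard match pattern; the second is a regexp to also catch the country-specific Pinterest domains.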
FYI, Kagi has an integrated Blocker/Upranker/Downranker similar to this. On their stats page you can see which domains have been blocked/raised/lowered the most.
The most hated one by far: Pinterest and all locale-specific sub-domains.
I think at some point we will have to introduce human confirmation from the creator’s side.
I don’t mind someone using ChatGPT as a tool to write better articles, but most of the internet is senseless BS.
Unfortunately, even OpenAI themselves took down their AI detection tool because it was too inaccurate. It’s really, REALLY hard to detect AI writing with current technology, so any such addon would probably need to use a master list of articles that are manually flagged by humans.
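To make the “master list” idea concrete, here’s a minimal WebExtension-style sketch. FLAG_LIST_URL and its JSON payload are made up for illustration; nothing here is a real service or an existing addon.

    // Background script: warn when the current page is on a human-curated flag list.
    // Assumes a hypothetical endpoint returning a JSON array of flagged URLs,
    // plus the "webNavigation" and "notifications" permissions in the manifest.
    const FLAG_LIST_URL = "https://example.org/flagged-articles.json";

    let flagged = new Set<string>();

    // Fetch the manually maintained list and normalize the URLs.
    async function loadFlagList(): Promise<void> {
      const response = await fetch(FLAG_LIST_URL);
      const urls: string[] = await response.json();
      flagged = new Set(urls.map((u) => new URL(u).href));
    }

    // When a page finishes loading, check it against the list and notify the user.
    browser.webNavigation.onCompleted.addListener(async (details) => {
      if (flagged.has(new URL(details.url).href)) {
        await browser.notifications.create({
          type: "basic",
          title: "Flagged article",
          message: "A human flagged this page as likely AI-generated filler.",
        });
      }
    });

    loadFlagList();

The hard part, of course, is maintaining the list itself, which is exactly the manual flagging problem described above.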
If you could detect AI-authored stuff, couldn’t you use that to train your LLM?
I suspect it would operate more on the basis of a person confirming that the article is of reasonable quality & accuracy.
So not unlike editors selecting what to publish, what to reject & what to send back for improvements.
If good articles by AI get accepted & poor articles by people get rejected, there may still be knock-on effects, but at face value it might be sufficient for those of us just looking for something worth reading.
https://addons.mozilla.org/addon/ai-noise-cancelling-headphones/
Funnily enough, I literally just saw this pop up in my Mastodon feed a few minutes ago. I haven’t used it so I can’t vouch for it, but I imagine it’s something akin to what you’re looking for.