12 points

“just as trustworthy as human authors” - OK, so you have no idea how these chatbots work, do you?

1 point

You have a lot of faith in human authors.

10 points

Oh, I do not, but the choice is: a human who might understand what is happening vs. a probabilistic model that is unable to understand ANYTHING.

-10 points

probabilistic model that is unable to understand ANYTHING

You’re the one who doesn’t understand how these things work.

6 points

An LLM bases its responses on aggregated texts written by … human authors, just without any sense of context, logic, or understanding of the actual words being put together.

0 points

I understand they are just fancy text prediction algorithms, which is probably just as much as you understand (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
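For what "fancy text prediction" means, here is a minimal toy sketch: a bigram model that picks the next word from counts of what followed it in some training text. Real LLMs use neural networks over tokens rather than word counts, but the core loop (predict the next token from context, append it, repeat) is the same idea. All names and the training sentence below are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus".
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The model has no idea what a cat or a mat is; it only reproduces statistical patterns from its training data, which is the point both sides of this thread are circling.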

7 points

I’m not an ML expert, but we’ve been using them for a while in neuroscience (I’m a software dev in bioinformatics). They are impressive, but they have no semantics, no logic. It’s just a fancy mirror. That’s why, for example, World of Warcraft players were able to trick those bots into writing an article about a feature that doesn’t exist.

Do you really want to waste your time reading a blob of data with no coherence?

4 points

Do you really want to waste your time reading a blob of data with no coherence?

We are both on the internet, lol. And I mean it: LLM output is only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you’ve figured out how and where to search for direct, correct answers, it’s just the same or maybe worse. I find that skill a bit fascinating, actually: we learn to read patterns and red flags without even opening a page. I doubt it’s possible to build a reliable model with that kind of bullshit detector.
