10 points

Indeed. LLMs read with the same sort of comprehension that humans have, so if a supermarket makes its website compatible with humans, then it's also compatible with LLMs. We have the same "API", as it were.

2 points

Can LLMs interpret structured input like HTML?

5 points

Yup. And those that can't can have a parser pull just the human-readable text out, like a blind person's screen reader would do.
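It takes very little code, too. Something like this, roughly (a sketch in Python with BeautifulSoup; the URL is just a placeholder):

```python
# Rough sketch: pull just the human-readable text out of a page
# before handing it to a model. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://supermarket.example/products").text
soup = BeautifulSoup(html, "html.parser")

# Drop scripts, styles, and the like, roughly what a
# screen reader skips over too.
for tag in soup(["script", "style", "noscript"]):
    tag.decompose()

page_text = soup.get_text(separator="\n", strip=True)
print(page_text)
```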

1 point

LLMs can read the website, but I'd argue their comprehension works VERY differently from human comprehension. If I ask you what's the price of a Banapple, you'll know that doesn't exist. The LLM might catch that the thing doesn't exist, or it might average the prices from all the apple-associated data and all the banana-associated data it has, regardless of unit, and give you that averaged price, or otherwise make up some logic to deliver you a price. It doesn't know shit about fruit in the way you intuitively understand fruit.

2 points

That sounds like an issue with your system prompt. If you're using an LLM to pull price information from web pages, then you'd want to include instructions about what to do when the information simply isn't in the page to begin with. If you don't tell the AI what to do under those circumstances, you can't expect any specific behaviour, because it wouldn't know what it's supposed to do.
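Something along these lines, say (just a sketch; the sentinel value and exact wording are arbitrary choices):

```python
# Sketch of a system prompt that covers the missing-information
# case explicitly. The NOT_FOUND sentinel is an arbitrary choice.
SYSTEM_PROMPT = """You extract product prices from supermarket web pages.
Use ONLY the page text supplied by the user.
If the requested product does not appear in the page text,
reply with exactly: NOT_FOUND
Never estimate, average, or invent a price."""

def build_messages(page_text: str, product: str) -> list[dict]:
    """Assemble a chat request in the usual role/content format."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Page text:\n{page_text}\n\nWhat is the price of: {product}?"},
    ]
```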

I suspect from this comment that you haven't actually worked with LLMs much, and are just going off the general "lol they hallucinate" perception they have right now? I've worked with LLMs a fair bit, and they very rarely have trouble interpreting what's in their provided context (as would be the case here with web page content). Hallucinations come from relying on their own "trained" information, which they recall imperfectly and which often gets a bit jumbled. To continue using a human analogy, it's like asking someone to rely on their own memory rather than reading information from a piece of paper.

1 point

Or you could just prompt it not to guess prices for items that don't exist. Those models are pretty good at following instructions.
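You can even sanity-check that behaviour with a made-up product like the Banapple above (ask_model here is just a stand-in for whatever chat API you're calling):

```python
# Quick sanity check: ask about a product that isn't on the page
# and make sure the sentinel comes back instead of a guessed price.
# ask_model() is a stand-in for whatever chat API you use.
PAGE = "Apples $1.20 each\nBananas $0.60 each"
PROMPT = ("Use ONLY this page text. If the product isn't in it, "
          "reply with exactly NOT_FOUND.\n\n"
          f"Page text:\n{PAGE}\n\nWhat is the price of: Banapple?")

def check_no_guessing(ask_model) -> None:
    reply = ask_model(PROMPT)
    assert reply.strip() == "NOT_FOUND", f"model guessed: {reply!r}"
```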

1 point

My point is that it processes information very differently from how humans do. It doesn't know anything about what these things actually are, for instance, which leads it to give impossible suggestions. I've asked it for Omaha hi/lo strategy to see what it would say, and its advice included playing hands for the low that didn't qualify for low. It knew useful stuff, but it also confidently stated mistakes that even amateurs who barely knew the rules wouldn't make.

My point being, it works very differently from humans.
