
Full disclosure - my background is in operations (think IT), not AI research. So some of this might be wrong.

What’s marketed as AI is something called a large language model. The distinction matters because “AI” implies intelligence - whereas an LLM is something else. At a high level, an LLM uses “tokens” to break natural language into pieces a machine can work with, then recombines those tokens to “create” something new. When an LLM is generating output it does not know what it is saying - it only knows which token statistically comes after the tokens it has already produced.
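To make that concrete, here’s a minimal sketch of “predict the next token from statistics” as a bigram model. This is a toy I made up for illustration (the corpus is fake, and real LLMs use subword tokenizers and neural networks over far more context), but the core idea is the same:

```python
from collections import Counter, defaultdict

# Toy "training" corpus -- real models train on trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which (a bigram model -- a vastly
# simplified stand-in for a transformer's next-token prediction).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(token):
    """Return the token that most often followed `token` in training."""
    return follows[token].most_common(1)[0][0]

print(most_likely_next("sat"))  # -> "on", purely from counts
```

Nowhere in there is any notion of what “sat” or “on” mean - just frequencies.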

So, to answer your question: an AI can hallucinate because it does not know the answer - it’s using advanced math to know that the period goes at the end of the sentence, not in the middle.
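The hallucination failure mode shows up even in the toy model above: chaining “statistically likely next token” picks produces output that reads fluently - including sentences that were never in the training data - and no step anywhere checks whether the result is true. Continuing the sketch:

```python
import random

# Continuing the toy bigram model above: generate text by repeatedly
# sampling a token that plausibly follows the current one.
token = "the"
sentence = [token]
for _ in range(6):
    candidates = follows[token]
    if not candidates:
        break  # no known continuation; a real model always has one
    token = random.choice(list(candidates))
    sentence.append(token)

# Might print "the dog sat on the mat ." -- a fluent sentence that
# never appeared in the corpus. Statistically plausible != true.
print(" ".join(sentence))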
