Sloppy LLM programming? Never!
In completely unrelated news, I’ve been staring at this spinner icon for the past five minutes after asking an LLM to output nothing at all:
What are the chances that the front end was not programmed to handle the LLM returning an empty string?
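For illustration, here's a minimal sketch of the kind of defensive check that was presumably missing: treat an empty or whitespace-only completion as a finished response and render a placeholder, rather than leaving the spinner running forever. The function and placeholder text are my own invention, not anything from the actual front end:

```javascript
// Hypothetical guard for rendering an LLM completion in a chat UI.
// An empty (or whitespace-only, or missing) completion should still
// resolve to a terminal state instead of spinning indefinitely.
function renderResponse(completion) {
  if (typeof completion !== "string" || completion.trim() === "") {
    // Stop the spinner and show a placeholder instead of hanging.
    return { done: true, text: "(empty response)" };
  }
  return { done: true, text: completion };
}
```

The key point is simply that "model returned nothing" is a valid outcome the UI has to account for, same as any other response.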
boooo, Gemini now replies “I’m just a language model, so I can’t help you with that.”
what would a reply with no text look like?
nah, it just described what an empty reply might look like in a messaging app
they seem to have done quite well at training Gemini to give mundane responses to requests like this