blakestacey

blakestacey@awful.systems
27 posts • 388 comments

[ChatGPT interrupts a Scrabble game, spills the tiles onto the table, and rearranges THEY ARE SO GREAT into TOO MANY SECRETS]

From an article about a boutique brand that sells books to rich people:

Assouline has made its name publishing tomes that sell for $1,000 or more.

Oh, so they publish textbooks.

“They represent stealth wealth, intended to tell you what your hosts are about and to provide visual evidence: that the owners are people of wealth, education and taste.”

🎶 Please allow me to introduce myself 🎶

The list of diatribes about forum drama that are interesting and edifying for the outsider is not long, and this one is not on it.

From the documentation:

While reasoning tokens are not visible via the API, they still occupy space in the model’s context window and are billed as output tokens.

Huh.
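A minimal sketch of what the quoted documentation implies for billing and context budgeting. The function names and token counts here are hypothetical illustrations, not part of any vendor API: hidden reasoning tokens are added to the output-token bill and subtracted from the available context window even though the API never returns them.

```python
# Sketch of the accounting described in the docs quoted above:
# reasoning tokens are invisible in the API response, yet they are
# billed as output tokens and consume context-window space.
# All numbers below are hypothetical.

def billed_output_tokens(visible_tokens: int, reasoning_tokens: int) -> int:
    """Output tokens you pay for: visible completion plus hidden reasoning."""
    return visible_tokens + reasoning_tokens

def context_remaining(window: int, prompt_tokens: int,
                      visible_tokens: int, reasoning_tokens: int) -> int:
    """Context left over: hidden reasoning occupies space like any other token."""
    return window - (prompt_tokens + visible_tokens + reasoning_tokens)

# A short visible answer can still carry a large hidden bill.
print(billed_output_tokens(200, 1800))
print(context_remaining(128_000, 1_000, 200, 1800))
```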

Silicon Valley is proud to announce the man who taught his asshole to talk, based on the hit William S. Burroughs story, “Don’t be the man who taught his asshole to talk.”

An interesting thing came through the arXiv-o-tube this evening: “The Illusion-Illusion: Vision Language Models See Illusions Where There are None”.

Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something “really is” and how something “appears to be”, and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.

Max Kennerly’s reply:

For a client I recently reviewed a redlined contract where the counterparty used an “AI-powered contract platform.” It had inserted into the contract a provision entirely contrary to their own interests.

So I left it in there.

Please, go ahead, use AI lawyers. It’s better for my clients.

“Your mother was volatile with poor control last night, Trebek!”

A statement by one of the authors who has resigned from the NaNoWriMo board: No More NaNoWriMo, by Cass Morris.
