ChatGPT didn’t nearly destroy her wedding; her lousy wedding planner did. Also, what’s she got against capital letters?
Yeah, yeah, guns don’t kill people, bullet impacts kill people. Dishonesty and incompetence are nothing new, but you may note that the wedding planner’s unfounded confidence in ChatGPT exacerbated the problem in a novel way. Why did the planner trust the bogus information about Vegas wedding officiants? Is someone maybe presenting these LLM bots as an appropriate tool for looking up such information?
Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.
Bullet impacts don’t kill people, tissue disorganization and fluid loss kill people!
“Comment whose upvotes all come from programming dot justworks dot dev dot infosec dot works” sure has become a genre of comment.
“I can safely bet that by ‘all upvotes come from programming dot justworks dot dev dot infosec dot works’ you actually mean ‘a vast majority of upvotes come from these tech instances’ even before reading your comment.”
“Or in other words I correctly interpreted what you meant but apparently the way you said it is a problem because I prefer to blame users rather than peddlers.”
I can make a safe assumption before reading the article that ChatGPT didn’t ruin the wedding, but rather somebody that was using ChatGPT ruined the wedding.
“blame the person, not the tools” doesn’t work when the tool’s marketing team is explicitly touting said tool as a panacea for all problems. on the micro scale, sure, the wedding planner is at fault, but if you zoom out even a tiny bit it’s pretty obvious what enabled them to fuck up for as long and as hard as they did
do you think they ever got round to reading the article, or were they spent after coming up with “hmmmm I bet chatgpt didn’t somehow prompt itself” as if that were a mystery that needed solving
“ChatGPT is good, but only if no one in a position of authority uses it”
Cool.
almost all of your posts are exactly this worthless and exhausting and that’s fucking incredible
As a fellow Interesting Wedding Haver, I have to give all the credit in the world to the author for handling this with grace instead of, say, becoming a terrorist. I would have been proud to own the “Tracy did nothing wrong” tshirt.
Credit to her for making the best of a bad situation. “We almost couldn’t get legally married, so we had to bring in Elvis to officiate the paperwork after the ceremony” is going to be a top-tier wedding story for every party going forward.
Yeah, yeah, words.
Trust but verify.
Here’s a better idea: treat anything from ChatGPT as a lie, even if it offers sources.
Scams are LLMs’ best use case.
They’re not capable of actual intelligence or providing anything that would remotely mislead a subject matter expert. You’re not going to convince a skilled software developer that your LLM slop is competent code.
But they’re damn good at looking the part to convince people who don’t know the subject that they’re real.
I think we should require professionals to disclose whether or not they use AI.
Imagine you’re an author and you pay an editor $3000, and all they do is run your manuscript through ChatGPT. One, they didn’t provide any value, because you could have done the same thing for free; and two, if they didn’t disclose the use of AI, you wouldn’t even know your novel had been fed into one and might be used by the AI for training.
I think we should require professionals not to use the thing currently termed AI.
Or, if you think it’s unreasonable to ask them not to contribute to a frivolous and destructive fad, or don’t think the environmental or social impacts are bad enough to warrant a ban like this, at least maybe we should require professionals not to use LLMs for technical information.