I had GPT-3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure there was no duplicated info and no hallucinations, and it only needed a couple of corrections.
Beats manually summarizing that info myself.
Maybe their prompt sucks?
“tools” doesn’t mean “good”
Good tools are designed well enough that it’s clear how they are used, held, or what-fucking-ever.
Fuck, these simpleton takes are a pain in the arse. They’re always pushed by idiots who have based their whole worldview on fortune-cookie aphorisms.
@RagnarokOnline @dgerard “They failed to say the magic spells correctly”
I also use it for that pretty often. I always double-check and usually it’s pretty good. Once in a great while it turns the summary into a complete shitshow, but I always catch it on a reread, ask a second time, and it fixes things up. My biggest problem is that I’m dragged into too many useless meetings every week, and this saves a ton of time over rereading entire transcripts and doing a poor job of summarizing myself because I have real work to get back to.
I also use it as a rubber duck. It works pretty well if you tell it what role it’s playing and tell it to ask questions.
Isn’t the whole point of rubber duck debugging that the method works when talking to a literal rubber duck?