Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
Why is generative AI even needed for audio transcription? We’ve had decent voice recognition tools for years, even on cheap consumer-grade stuff.
Whisper really is a lot better when it works, and it’s free. The problem is that it refuses to produce gibberish or give up when it doesn’t work; instead it invents fluent, plausible text. You’ll always need an editor.
The toaster oven I just invented works much better than a traditional one. It reheats French fries perfectly, it can dehydrate food, it makes succulent roast chicken, and about 2.5% of the time it burns down your house. You’ll always need to keep an eye on it to make sure that doesn’t happen. Remember though: much better than a traditional one.
This definition of “better” feels like claiming that a beeper wired permanently to power is the perfect alarm because it goes off every time someone tries to break in - while entirely ignoring that it is just constantly blaring.
I use it for generating subtitles. It figures out context, it ignores stuttering, it handles punctuation, etc. It really is just better. With clean audio it transcribes like a human does.
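For the curious, here’s roughly what that workflow looks like as a minimal sketch with the open-source `whisper` Python package (the model size, file paths, and the bare-bones SRT writer are all placeholders, not a recommended setup):

```python
import whisper

# Runs fully locally; model weights download on first use.
model = whisper.load_model("medium")  # "medium" is an arbitrary choice here
result = model.transcribe("movie_audio.mp3")  # placeholder path

def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Whisper returns timestamped segments, which map directly onto subtitles.
with open("movie.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(f"{seg['text'].strip()}\n\n")
```

Even then it’s worth skimming the result, since the failure mode is plausible-looking text rather than obvious garbage.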
It does better than other techniques with dirty audio, but when it fails it fails weird, which is the big issue here.
No, we really haven’t had on-device voice recognition that meets any definition of “decent”. Anything reasonable phones out to “the cloud” for decent voice recognition.
So? I’d rather have my software talk to a server than be downright wrong just so another business can climb onto the AI bandwagon.
You can’t do that with personal information like the consultations doctors need transcribed. It has to be local.
Some examples:
In this example, the speaker said, “as the um, the, her father dies not too long after he remarried….” while the program transcribes it as “It’s fine. It’s just too sensitive to tell. She does die at 65….”
In this example, the speaker said, “and after she got the telephone he began to pray” while the program transcribes it as “I feel like I’m going to fall. I feel like I’m going to fall, I feel like I’m going to fall….”
Wow, that’s bad. I thought it would be more of a “confusing a sentence for a similar-sounding one” type of thing, but from the above and the article, it’s just generating semi-believable text and sticking it into the transcriptions.
It’s actually extremely good at figuring out confusing speech. It gets weird when the audio quality is bad.
I use it for generating subs for obscure movies.
This one was wild:
In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”
But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
From picking up an object to mass murder lmao. Not even close!
LLMs in medicine. What could go wrong?
As someone who uses Whisper fairly often, it’s obvious that they’ve trained it on a bunch of YouTube videos.
Most of the time it’s very accurate, but there have definitely been a few times in long transcription sessions where it will randomly hallucinate someone saying “Don’t forget to like and subscribe!” when nothing like that was said anywhere near it.
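A partial mitigation, sketched below: each segment Whisper returns carries stats like `avg_logprob`, `no_speech_prob`, and `compression_ratio`, which can flag likely hallucinations for manual review. The thresholds mirror the library’s own default cutoffs but are illustrative, not a reliable filter:

```python
import whisper

model = whisper.load_model("medium")
result = model.transcribe("long_session.mp3")  # placeholder path

# Flag segments whose stats resemble a failed decode. The thresholds echo
# whisper's own defaults (no_speech_threshold=0.6, logprob_threshold=-1.0,
# compression_ratio_threshold=2.4) but won't catch everything.
for seg in result["segments"]:
    suspicious = (
        seg["no_speech_prob"] > 0.6        # model suspects there was no speech
        or seg["avg_logprob"] < -1.0       # low average token confidence
        or seg["compression_ratio"] > 2.4  # highly repetitive output
    )
    marker = "  <-- review" if suspicious else ""
    print(f"[{seg['start']:7.1f}s] {seg['text'].strip()}{marker}")
```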
A few months back my GP asked if they could use a transcription thing they were trialling during my consult.
He seemed shocked when I declined.
I just don’t understand why anyone would actually want that?
I want my doctor to listen to what I tell him, and I don’t really want what I say to be used for any other purpose, because no other purpose would be to my benefit.
Next week they’ll be asking to share “basic characteristics” about me with third-party “wellness partners”.