less long by AI (faster to detect changes than humans).
Many things cause changes. A bit of smoke in the air might have been from a gunshot that happened 10 minutes ago, or it might have been from a cigarette 15 minutes ago. Binary search relies on changes that indicate a specific thing has happened: a broken window, a bike no longer there, blood stains on the street. Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape. Even the things that AIs are truly useful for, like interpreting video compression artifacts, wouldn't help, because any number of things can cause compression artifacts. How could it tell which pixels are slightly off color because of a gunshot 3 minutes ago, and which pixels are slightly off color because someone walked past the camera?
At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams. Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.
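To make the bisection point concrete, here's a rough sketch (the helper names are hypothetical stand-ins, not any real system): bisection only pays off when the check is monotone, i.e. once the change has happened every later frame still shows it, while the "just watch the whole thing" approach is a linear pass over every frame.

```python
# Rough sketch only; changed_by() stands in for whatever does the looking
# (a person or a model). Bisection needs it to be monotone: once the change
# has happened (broken window, missing bike), every later frame shows it too.
def find_change_time(duration_s, changed_by, step_s=1.0):
    lo, hi = 0.0, duration_s            # invariant: not yet at lo, changed by hi
    while hi - lo > step_s:
        mid = (lo + hi) / 2
        if changed_by(mid):
            hi = mid                    # change already visible, look earlier
        else:
            lo = mid                    # not visible yet, look later
    return hi

# The alternative being argued for: run a detector over every sampled frame instead.
def scan_whole_video(duration_s, is_interesting, step_s=1.0):
    t = 0.0
    while t <= duration_s:
        if is_interesting(t):           # e.g. sees a gun, a puff of smoke, hears a scream
            return t
        t += step_s
    return None
```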
Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape.
Actually, an AI could determine the difference between those, based on shape, location, and opacity, etc.
At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams.
Is there a point where one technique works better than another technique? Sure. I'm not arguing that. But if you're dealing with a very long stretch of tape, you'd still want to do a binary search first.
Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.
Depends on how long that tape is, which is what the OP was originally discussing.
An AI-assisted binary search is still a very fast way to pin down the point in the tape where the event happened (assuming the tape is very long), as others have alluded to elsewhere in this thread.
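Back-of-the-envelope, with made-up numbers just for scale: on a month of footage, bisection needs a couple dozen frame inspections, while a full scan needs all of them.

```python
import math

days, fps = 30, 30                        # illustrative numbers, not from the thread
total_frames = days * 24 * 60 * 60 * fps  # 77,760,000 frames

bisection_checks = math.ceil(math.log2(total_frames))  # ~27 frames to inspect
linear_checks = total_frames                            # every frame hits the detector

print(bisection_checks, linear_checks)    # 27 vs 77760000
```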
Actually, an AI could determine the difference between those, based on shape, location, and opacity, etc.
Lmao now I know you’re fucking with me
Yeah lemme spend three weeks training this AI on the difference between gunsmoke, cigarette smoke, vapes, and fog in this specific alley. Oh, y’all already found the killer because someone just watched the video? Well my point stands, the AI could do it faster
Once it’s trained
In another week
Oh shit, it thought that guy’s cell phone was a gun. See you in another month!
Um, I was being completely serious. Having an AI distinguish shapes and opacity is a simple matter for it. And I'm assuming the training would already have been done before the event happens, built up over time.
You don't think crime forensics labs will be training AI to do these kinds of detections going forward? Really?
(Maybe it's a matter of people not truly grokking what AI will do and how it will change things going forward. /shrug)