They use LLMs for what they can actually do: bullet-point the core concepts in a huge volume of information, parse a large corpus for specific queries that previously would have needed a tech running variations on a bunch of keywords, etc. Provided you have humans overseeing the summaries, have the queries surface the actual full relevant documents, and fall back to a human for failed searches, it can potentially add a useful layer of value (roughly the pattern sketched below).
They’re probably also using it for propaganda shit, because that’s a lot of what intelligence is. And generating fake documents and web presences as part of cover identities could (again, with human oversight) probably let you produce a lot more volume to build them out.
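To make that first pattern concrete, here's a minimal sketch. Every name in it is hypothetical, a stand-in for whatever in-house index and model you'd actually run, not any real product's API. The point is the shape: the LLM only expands queries and drafts summaries, every hit keeps a pointer to the full source document, and an empty result escalates to a human analyst.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    doc_id: str                # pointer back to the full source document
    full_text: str
    summary: str = ""
    needs_review: bool = True  # nothing is acted on without human sign-off

# --- Stubs: hypothetical stand-ins for the in-house index and model ---
def expand_query(q: str) -> list[str]:
    """LLM drafts the keyword variations a tech used to type by hand."""
    return [q, q.lower(), f"{q} report"]

def search_index(variant: str) -> list[Hit]:
    """Query the document index; an empty list means the search failed."""
    return []  # stub

def summarize(text: str) -> str:
    """LLM drafts a bullet-point summary of one document."""
    return "- (draft summary)"

def queue_for_analyst(q: str) -> list[Hit]:
    """Fallback path: route the failed query to a human analyst."""
    print(f"escalating to analyst: {q!r}")
    return []

def assisted_search(query: str) -> list[Hit]:
    hits = [h for v in expand_query(query) for h in search_index(v)]
    if not hits:
        return queue_for_analyst(query)      # failed search falls back to a human
    for h in hits:
        h.summary = summarize(h.full_text)   # draft only; needs_review stays True
    return hits
```

The LLM never becomes the source of record here: it widens the search and drafts text, while the document itself and a human reviewer stay in the loop.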
Provided you have humans overseeing the summaries
right, at which point you’re better off just doing it the right way from the beginning, not to mention such a tiny detail as not shoving classified information into sam altman’s black box
I’m not really arguing the merits, just explaining how I’m reading the article.
The systems are air-gapped and never exfiltrate information, so that shouldn’t really be a concern.
Humans are also a potential liability to a classified operation. If you can get the same results with 2 human analysts overseeing/supplementing the work of an AI as you would with 2 human analysts overseeing/supplementing 5 junior people, it’s worth evaluating. You absolutely should never blindly trust an LLM for anything. They’re not intelligent. But capable people can use them as a tool to increase their effectiveness.
it’s not air-gapped, it can’t be, it’s still cloud. it’s some kind of “secure” cloud that passed some kind of audit. openai has already had a breach or a few, so i’m not entirely sure it will pan out