CEOs, above everything else, manage risk. They hedge their bets against the market and against their own workforce.
While “AI” might be better at calculating risk from known factors, the joys of hallucination and the frequent lack of real knowledge of what’s actually going on mean an LLM would run a company into the ground hilariously fast.
It’s why I wet myself laughing whenever someone suggests replacing HR or recruiters with AI. Their job is to protect the company, and all it takes is one poor AI decision before there’s a lawsuit for millions, or a bad actor gets hired and fucks the company up from the inside.
AI tools are powerful, but that’s all they are, and all they will be for a long time. If anything, we’re likely to see regressions in ChatGPT’s performance as OpenAI fights legal battles over its use of protected data and over hallucinations, and slightly improved performance from Google/Amazon/Apple on their own initiatives.