The top quartile of funds selected by the AI model generated 2.1x the original investment versus an industry average of 1.85x.
For now. Companies are going to have prospectuses, quarterly reports and annual statements sanitised quickly after this.
And that’s what AI is really going to do: make the world more grey. Lemmy instance owners: ban LLMs now!
I think that there are other possible risks.
Exploiting AI fragility is a serious concern. A given “AI” today is far simpler than a human. It may operate well under certain assumptions, such as that statements are trying to affect human investors rather than AI models. What happens if I craft information and inject it specifically to swing AI-driven investments?
I remember reading an article a while back on AI for serious applications, military and the like, and one point was that unless you are rigorously careful about where you pull your training data from – and a lot of people training models are not – an adversary may be able to attack an AI-driven system by poisoning its training data.
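To make the poisoning idea concrete, here is a toy sketch (entirely hypothetical data, nothing like a real fund-selection model): a nearest-centroid classifier over 2D feature vectors. An attacker who can inject mislabeled points into the training set drags one class centroid toward a target region and flips the prediction there.

```python
# Toy illustration of training-data poisoning (hypothetical data and labels).
# A nearest-centroid classifier: each label is represented by the mean of its
# training points, and a query gets the label of the nearest centroid.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    # Pick the label whose centroid is nearest (squared Euclidean distance).
    return min(centroids,
               key=lambda lbl: sum((x[i] - centroids[lbl][i]) ** 2
                                   for i in range(2)))

clean = {
    "buy":  [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "sell": [(-1.0, -1.0), (-1.1, -0.9), (-0.9, -1.2)],
}
query = (0.4, 0.4)  # clearly closer to the "buy" cluster

clean_centroids = {lbl: centroid(pts) for lbl, pts in clean.items()}
print(classify(query, clean_centroids))     # buy

# Attacker injects points mislabeled "sell" near the query region,
# dragging the "sell" centroid toward it.
poisoned = {lbl: list(pts) for lbl, pts in clean.items()}
poisoned["sell"] += [(0.5, 0.5)] * 10
poisoned_centroids = {lbl: centroid(pts) for lbl, pts in poisoned.items()}
print(classify(query, poisoned_centroids))  # sell
```

The same query flips from "buy" to "sell" with no change to the model code at all, only to the data it was trained on, which is why the article's point about vetting training sources matters.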