“A computer can never be held accountable - therefore a computer must never make a management decision.” - IBM training manual, 1979.
So jazzed to live here in the future where we have AI making insurance denials and deciding drone targets! /s
Good on her for standing against the Silicon Valley doom hype machine.
Well put
hero material.
my family contributes $5 monthly to the Signal Foundation. consider doing the same if you are able.
the entire family uses it personally and for business, so it's among the best monthly contributions we make.
This is the primary reason I’ve not given agents more power than something extremely controlled (i.e. only a function to turn the lights on/off - see the sketch below), as I’ve always been concerned that these generative models might accidentally do something dumb or annoying, let alone something illegal or harmful.
I don’t really see why anyone would ask an AI to complete something fully autonomously at this stage, without oversight.
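For what it’s worth, here’s roughly what I mean by “extremely controlled”: the model never gets shell access or open-ended APIs, just one allow-listed function the host program will actually run on its behalf. A minimal, hypothetical Python sketch (the names `set_light`, `ALLOWED_ROOMS`, and `dispatch_tool_call` are made up, not from any particular agent framework):

```python
# Hypothetical sketch: the only "tool" the agent may invoke is a single,
# tightly constrained function. Anything else it asks for is refused.

ALLOWED_ROOMS = {"kitchen", "living_room", "bedroom"}

def set_light(room: str, on: bool) -> str:
    """Turn a known light on or off; no other side effects are possible."""
    if room not in ALLOWED_ROOMS:
        return f"error: unknown room '{room}'"
    # ... call the real smart-home API here ...
    return f"{room} light turned {'on' if on else 'off'}"

# The complete set of tools the model is allowed to request, by name.
TOOLS = {"set_light": set_light}

def dispatch_tool_call(name: str, arguments: dict) -> str:
    """Run a model-requested tool call, refusing anything not allow-listed."""
    if name not in TOOLS:
        return f"refused: '{name}' is not an allowed tool"
    try:
        return TOOLS[name](**arguments)
    except TypeError as exc:
        return f"refused: bad arguments ({exc})"

if __name__ == "__main__":
    print(dispatch_tool_call("set_light", {"room": "kitchen", "on": True}))
    print(dispatch_tool_call("delete_all_files", {}))  # rejected
```

The point is that the blast radius is bounded by the dispatcher, not by the model’s judgment.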
There are some things it would be nice to have a brain in a jar to do for me.
I do not, however, want that jar sitting in an Amazon-owned server room somewhere, where they can modify the jar to change how the brain works.
Take back your brains-in-jars from big tech! /j
It will be interesting to trace accountability when an AI does something less than legal while trying to carry out a legal command.