This was already budgeted for when they decided to use a chatbot instead of paying employees to do that job.
Trying to blame the bot is just lame.
Corporate IT here. You’re assuming they’re smart enough to budget for this. They aren’t. They never are. Things are rarely if ever implemented with any thought put into any scenario that isn’t the happy path.
As a corporate IT person also. Hello.
But we do put thought into what can go wrong. We just don’t budget for it, and as far as we’re concerned a 99% success rate means 100% correct 100% of the time. Never mind that 1% of 7 billion transactions per year is a fuck ton of failures.
Amen. Fwiw at my work we have an AI steering committee. No idea what they’re actually doing, though, because you’d think the stream of articles and lawsuits against OpenAI and Microsoft over shady practices, most recently allowing AI to be used by militaries, potentially to kill people, would give them pause. I love knowing my org supports companies that enable the war machine.
Great! Please make sure that your server system is un-racked and physically present in court for cross examination.
Better put Ryan Gosling on standby in case he needs to “retire” the rouge Air Canada chatbot Blade Runner style.
Rogue*. I’m not usually that guy, but this particular typo makes me see red.
I know what you mean, except for me it makes me see rouge ever since I spent some time in France.
“Airline tried arguing virtual assistant was solely responsible for its own actions”
that’s not how corporations work. that’s not how ai works. that’s not how any of this works.
Oh, it is if they are using a dumb integration of an LLM in their chatbot that’s given more or less free rein. Lots of companies do that, unfortunately.
If it’s integrated in their service, unless they have a disclaimer and the customer has to accept it to use the bot, they are the ones telling the customer that whatever the bot says is true.
If I contract a company to do X and one of their employees fucks shit up, I will ask the company for damages, and they internally will have to deal with the worker. The bot is the worker in this instance.
So what you’re saying is that companies will start hiring LLMs as “independent contractors”?
Why would Air Canada even fight this? He got a couple hundred bucks and they paid at least 50k in lawyer fees to avoid paying it. They could have just given him the money, skipped the lawyer’s fees, and been done with it.
Because now they have to stop using the chatbot or take on the liability of having to pay out whenever it fucks up.
Which is fascinating: that they themselves thought there was any doubt about it, or that they could argue such a doubt.
This is like arguing “It wasn’t me who shot the mailman dead. It was my automated home self-defense system.”
Agree 100% – I mean, who are you gonna fine, the bot? The company that sold you the bot? This is a simple case of garbage in, garbage out – if they had set it up properly and vetted its operation, they wouldn’t be making such preposterous objections. I’m glad this went to court, where it was definitively shut down.
Fuck Air Canada. The guy already lost a loved one, and now they wanna drag him through all this over a pittance? To me, this is the corporate mindset: going to absolutely any length necessary to hoover up more money, even the smallest of scraps.
Most likely to fight the precedent of them being liable for using an AI chatbot that gives out faulty information.
Because there is something far nastier in the world than self interest. This airline seems to me like it was operating from a place of spite.
That’s an important precedent. Many companies turned to LLMs to cut costs and dodge liability for whatever the model says. It’s great that they got rekt in court.