An email arrived from the Badger’s car insurance provider recently, advising that a renewal quote was in his online account. Logging in revealed a 25% increase in premium! A check using market comparison sites produced quotes for the same cover within a few pounds of his existing premium. The Badger thus used the provider’s chatbot within his account to signal his intent to take his business elsewhere. The chatbot dialogue, however, ultimately resulted in the Badger staying with his provider, with the same cover, at the price he currently pays!
This is a commonplace renewal dynamic, but the Badger found himself musing on his experience. Although irritated that his provider had attempted a 25% price rise when it was obviously prepared to retain its customer for a much lower price, he found the chatbot easy, efficient, and quick to use. However, it wasn’t obvious at any stage of the chatbot dialogue whether the Badger had really conversed with a human in the provider’s organisation. This meant that both he and the provider were implicitly accepting the validity of the chatbot’s deal. A number of ‘what if’ scenarios regarding customer use of AI chatbots started bubbling in the Badger’s brain. And then he read, here and here, about Air Canada and its AI chatbot!
An AI chatbot on the Air Canada website advised a passenger that they could book a full-fare flight to attend their grandmother’s funeral and then claim a discounted bereavement fare retrospectively. Guidance elsewhere on the website said otherwise. The passenger followed the chatbot’s advice and subsequently claimed the bereavement discount. Air Canada refused the claim, and the parties ended up at a Tribunal, with the airline arguing that the chatbot was ‘responsible for its own actions’. The Tribunal ruled for the passenger, finding the airline liable for negligent misrepresentation. The case not only establishes the principle that companies are liable for what their AI chatbots say and do, it also highlights – as noted here – broader risks for businesses adopting AI tools.
The amount of money at stake was small (less than CAN$500), but the Tribunal’s findings will reverberate widely. The case also exposes something commonplace in many big companies: the dominance of a legalistic behavioural culture over common sense. This was a bereaved customer complying with advice given by the company’s AI chatbot on the company’s own website, and yet rather than be empathetic, take responsibility, and apply common sense, the company chose a legal route and hid behind the ‘hallucination’ of its chatbot. So, bravo to the passenger for fighting their corner, bravo to the Tribunal for their common-sense judgement, and yes, bravo to Air Canada for making sure that we all now know that companies cannot shirk responsibility for the behaviour of their AI chatbots…