Consider this scenario. Someone approaches an individual and asks them to provide answers to some questions. The individual performs some Google searches, consults books in a local library, and pieces together the answers. These are then communicated to the requestor face to face, or by phone or video call. The requestor uses the answers to commit a wicked crime for which they are prosecuted. The person who provided the answers is deemed by law to share some culpability for the crime, and so they are prosecuted too. Now consider the same scenario, but with the perpetrator putting the same questions directly to ChatGPT (or similar). The AI’s answers are used to commit the same wicked crime, for which the perpetrator is prosecuted. The AI, however, does not carry the same legal culpability as the individual above.
Reports that Florida’s attorney general has opened a criminal investigation into whether ChatGPT provided advice to the gunman in a murder last year (see here, for example) made the Badger wonder about the following question: ‘Are people using AI professionally or personally really aware of where the boundaries of responsibility sit?’ Probably not, was the conclusion after musing in the Spring sunshine. If a doctor acts on a wrong diagnosis delivered by an AI, who is responsible: the doctor, the hospital, the engineers who built the AI model, or some other organisation in the chain? Some who build and deploy AI models appear to think such responsibility questions can be sorted out later, when something goes awry and causes a crisis. That is never a sensible approach.
The more AI develops, the more it impacts important aspects of everyone’s life. However, it isn’t obvious, at least to the Badger, that professionals or the public understand much about how AI arrives at its answers. The Badger, who’s not a lawyer, thus spent a little time exploring how the law deals with the question of responsibility when someone takes action guided by AI’s output. It appears that the user, not the AI vendor or the algorithm, is legally responsible. This means that anyone using AI, whether an organisation, a professional, or a member of the general public, is responsible and liable for actions taken on its guidance. Organisations and humans can be sued; AI cannot. When AI makes a mistake, liability flows to the humans and organisations that deployed and used it.
That’s not really a surprise, but it’s a reminder for all users that they are more likely to find themselves in the dock than the AI. It’s also a reminder that proper human consideration and diligence are imperative before acting on AI’s outputs. The Badger also thinks it’s a reminder that we must never allow AI to autonomously rule the world…