[Edit: Changed the title as suggested by Raemon, and made some edits to the post itself as well.]

Where does liability fall in the case of AI doing things?

I think there is a rather close parallel with autonomous cars/vehicles. Probably 4 or 5 years back that was a question being raised. I assume, since I don't hear much about it any more, that it's been resolved and clearly defined for those making, using, or insuring such vehicles. Perhaps that translates very well into the more general AI space. But I'm not sure.

Is anyone aware of where the question of liability assignment and insurance currently stands for producing and using very advanced AI tools and products in our day-to-day interactions?

Answers
Viliam

I think the current state of autonomous cars is "there must be a human driver inside anyway, ready to take over if the machine does something wrong", which means the liability is pushed onto the customer.

So I would assume the same would happen with AIs, especially when it is your own hands actually doing the things the AI told you to do. Like, GPT-4 gave you a recipe that poisoned your family, but you should have thought about it before following the recipe (even if the way it made the result poisonous was not obvious, e.g. the ingredients seemed harmless, but their combination and the way of cooking created some poison).

I assume you could successfully sue the company if it made a harmful exception for you, e.g. if it hardcoded into the autonomous car's algorithm: "if the driver is jmh, and the car is in the middle of a bridge in the rightmost lane, turn sharply to the right and accelerate". But you would have to prove that it happened.

Mercedes-Benz, the corporation, is assuming practically all liability for its fanciest Level 3 autonomous S-Class cars in Germany, provided all the rules and restrictions regarding use of the autonomous mode were observed.

So there is a precedent, which will inevitably be used in future arguments, for having the manufacturer assume liability.

Though in the case of AI, I'm unsure who the manufacturer would be.

2 comments

FYI I think it'd be more helpful if this post's title was "Who is liable for AI?" rather than "A different aspect of AI Risk" (which could mean anything)

This reminds me of the incident in Belgium a few months ago:

https://www.lesswrong.com/posts/9FhweqMooDzfRqgEW/chatbot-convinces-belgian-to-commit-suicide

The question of liability in these kinds of circumstances is fascinating and important. The legal system will decide these questions by setting precedents as they occur if we don't try to address them (or at least think about them) in advance.