You might be interested in the Moral Sense Test. It's less awful than the average cognitive neuroscience survey, though as usual the questions are irritatingly restrictive. The questions come in linked pairs or triples, which (if you're a consequentialist) are basically identical.
There was one interesting quartet of scenarios:

1. You're a construction worker and you're not supposed to throw bags filled with rocks off the roof, but you do, and no one gets killed. (It's assumed that you're highly unlikely to hit anybody.)
2. Ditto, but someone does get killed.
3. You get drunk, drive into a lawn, and kill a little girl.
4. Ditto, but you hit a tree instead.

The only form of punishment available is a fine. What should the fine be in each case?
My (immediate) intuition was that 1. and 2. should have the same (large) fine, but that 3. and 4. need not. The thing is, one could construct 5. (you shoot at someone and hit) and 6. (you shoot at someone and miss); if 5. and 6. are inequivalent while 1. and 2. are equivalent, one has to interpolate between them, and I think 3. and 4. are nearer the 5.-and-6. end than the 1.-and-2. end.
The difference is in the likelihood of the bad outcome. If people were perfectly linear, it wouldn't matter whether they were fined $1 with probability 1 or $100 with probability 1/100, so you could split up the expected value any way you like. In fact, however, people tend to disregard very unlikely outcomes, so you have to spread the damage out somehow. Assume that the law is omniscient. Given total damage X, you split off a piece Y and set

fine = Y p(disaster) if no disaster occurs,
fine = Y p(disaster) + (X - Y) if the disaster occurs.
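A quick sanity check that this split really does preserve the expected fine: no matter how you choose Y, the expectation works out to X p(disaster). (A sketch; the helper name `expected_fine` and the particular numbers are mine, using the post's $1,000,000 life value.)

```python
# Expected fine under the split: a flat charge of Y*p levied either way,
# plus an extra (X - Y) only when the disaster actually happens.
def expected_fine(X, Y, p):
    flat = Y * p                 # charged whether or not disaster occurs
    extra = X - Y                # charged only if disaster occurs
    return (1 - p) * flat + p * (flat + extra)

X = 1_000_000   # total damage (a life, in the post's running example)
p = 1e-6        # chance the act kills someone

# Any split gives the same expectation, X * p:
for Y in (0, 250_000, 1_000_000):
    assert abs(expected_fine(X, Y, p) - X * p) < 1e-9
```

Algebraically this is just Y p + p (X - Y) = p X: the choice of Y moves money between the certain piece and the conditional piece without changing the total.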
If the disaster is likely, people do the math correctly and perceive the "just" expected value X p(disaster). If it's highly unlikely, they perceive only the certain piece, Y p(disaster) (as opposed to 0 with no fine at all). However, the minimum fine that will actually deter people is nonzero, so Y p(disaster) is bounded below, and as p(disaster) goes to zero the required Y climbs until it reaches its ceiling at X. Say a life is worth $1,000,000. The "correct" fine for something that would kill a person once every million times is $1, which is worthless as a deterrent. To deter such acts you would need Y > X, a punitive fine, and such a fine shouldn't care whether the disaster actually happened.
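The same point in numbers (a sketch; the $100 deterrence threshold D is a made-up figure, not from the post):

```python
# If people ignore the rare (X - Y) tail, the only part of the fine that
# deters is the always-charged piece Y * p(disaster). Deterrence then
# requires Y * p >= D for some minimum noticeable fine D, i.e. Y >= D / p,
# which exceeds the total damage X once p < D / X: the fine turns punitive.
def required_Y(D, p):
    """Smallest Y whose certain piece Y * p still reaches the threshold D."""
    return D / p

X = 1_000_000   # total damage (value of a life, per the post)
D = 100         # assumed: smallest fine that registers as a deterrent

Y_common = required_Y(D, 1e-2)   # roughly $10,000: below X, an ordinary fine
Y_rare = required_Y(D, 1e-6)     # roughly $100,000,000: far above X, punitive
assert Y_common < X < Y_rare
```

The crossover sits at p = D / X (here one in ten thousand): below that probability, no fine bounded by the actual damage can deter.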
As for justifying the other limit: if no one was hurt, why (apart from the above logic) punish?
Naturally law-and-economists have said a lot about this. Here's a decent introduction I found: http://www.daviddfriedman.com/Academic/Becker_Chapter/Becker_Chapter.html
(Author's got a physics background.)