Pitfalls of our simplistic moral evaluation system


In The Trouble With “Good”, Scott Alexander describes some problems that stem from our evolutionarily induced, simplistic good–bad evaluation system (emotivism), which conflates moral beliefs with facts and personal preferences.
I’ll paste only one quote, since the post itself strikes the right balance: clear enough without being too long.

So this is one problem: the inputs to our mental karma system aren’t always closely related to the real merit of a person/thing/idea.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. […]

Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer’s posts, I know he’s better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.

But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it’s a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.
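The quoted point about the “one big bin” heuristic can be made concrete with a toy sketch (my own illustration, not from the post; the dimension names and scores are invented): once multi-dimensional judgments are collapsed into a single net score, very different profiles become indistinguishable to any decision rule that only compares those scores.

```python
def net_score(evaluations):
    """Collapse per-dimension judgments into one 'karma' number by summing them."""
    return sum(evaluations.values())

# Two hypothetical options with very different profiles...
option_a = {"economics": +5, "ethics": -4, "feasibility": +1}
option_b = {"economics": +1, "ethics": +1, "feasibility": 0}

# ...collapse to the same net score, so the aggregate alone can no
# longer tell them apart -- the information lost is exactly what a
# complicated decision would need.
print(net_score(option_a))  # 2
print(net_score(option_b))  # 2
```

This is the sense in which the heuristic is “leaky”: the summation is not invertible, so adjusting or debating the single number can never recover the per-dimension disagreements that actually matter.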