I would like to think about constructing a moral code: a set of rules which define appropriate, let's say good, behavior. From these rules, we would then have a basis for assessing our actions and conduct in life. The following disclaimer shall apply: I will not construct a complete moral code within this blog. Firstly, because that would be silly. Secondly, if I could, then I ought to spend less time blogging and more time solving the world's problems. Instead, I will suggest a construction method with the hope of obtaining some broad rules which can guide our actions in simpler situations, without considering the endlessly complex issues which a full moral code would need to address. How can we go about doing this?
Ideally, we would like to derive these rules from natural or universal principles. However, that would require knowing such natural or universal principles. Since we cannot simply observe these laws, "knowing" them would require developing a theory of such laws and then using that theory to deduce a moral code. Economics, along with many other sciences, attempts to make normative statements (what should be) by developing theories, deriving implications from those theories about how different actions affect outcomes, and, finally, determining which actions are optimal: those actions which lead to the best outcomes. Similarly, in deriving a moral code, we would specify a model, a theory of natural laws, and then determine the optimal actions within that theory, i.e. a moral code. Unfortunately, models are approximations of the actual world and are subject to a myriad of misspecifications. Now, this may not be a significant problem if we are interested in a specific question, such as whether to increase or decrease benefits to unemployed workers. Modeling the outcome of such a change in policy would not necessarily require having a theory of global warming. Thus, we could abstract from issues pertaining to global warming and use a simplified theory of how people choose whether to search for a job or remain unemployed.
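To make the economist's recipe a bit more concrete, here is one hedged way to write it down. The notation is mine and purely illustrative: a candidate model m, a set A of feasible actions, and an outcome (or welfare) function W are assumptions I am introducing, not anything established above.

```latex
% A sketch, assuming each candidate model m assigns an outcome value
% W(a; m) to every feasible action a in a set A of feasible actions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The optimal policies under a candidate model $m$ are the actions with the
best outcomes under that model:
\[
  R(m) \;=\; \operatorname*{arg\,max}_{a \in A} \, W(a;\, m).
\]
Deriving a moral code from a single model $m$ would then amount to adopting
$R(m)$ wholesale; the difficulty is that we cannot be confident we have the
right $m$.
\end{document}
```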
Unfortunately, if we want to derive a complete moral code, then we require a model, hopefully the true model, of the universe in its entirety. A theory of all natural laws which we could use to determine the optimal policies that we ought to follow. A theory that we cannot possibly hope to specify properly. Furthermore, even if we were able to specify such a model and identify it as the true model, it would be too complicated to enable us to derive any implications for outcomes and optimal policies. Rather than theorize about, attempt to reveal, and debate these natural laws, I want to propose a different approach. I would like to work backwards. Let's start by thinking about the set of optimal policies for each possible model of the universe. Each set contains the moral code that would be deduced from the corresponding, potentially true, model of the universe. To codify morality, I want to identify the set of rules which makes up the intersection of these sets. That is, a rule is included in the moral code if it is common to all moral codes derived from any possible model, including the true model. This procedure will potentially fail to identify rules which are in the set corresponding to the true model but are not common to all other models. More importantly, this procedure will exclude rules which are not consistent with the true model from being part of the moral code. There is a trade-off here: we will be limiting the moral code by possibly missing rules which are consistent with the true model, but we will avoid including false rules which could lead to justifying immoral actions. As a result, we will be introducing more "grey area" into our morality; however, I believe the costs associated with including false rules, which could then justify truly immoral actions, are higher than the costs of the limitations a narrower moral code imposes on evaluating actions. I would like to note that economics has many ways of dealing with model uncertainty which I am not employing here. The reason I am choosing the specific criterion above is my clear preference for limiting the moral code. More specifically, I am not interested in characterizing an entire moral code. There are countless situations, some paradoxical, which a complete moral code would need to address. However, in our daily lives, in which we do not encounter complicated moral dilemmas very often, there is less need for a fully characterized moral code. I am more interested in identifying broad rules which can generally be agreed upon by everyone and easily implemented in daily life, a sort of ad hoc moral code.
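Purely as an illustration, the "work backwards and intersect" procedure can be sketched in a few lines of Python. The candidate models and rules below are toy placeholders I am inventing for the example, not claims about what the actual sets would contain.

```python
# A minimal sketch of the "work backwards and intersect" construction above.
# Each candidate model of the universe maps to the set of rules (optimal
# policies) that would be deduced if that model were the true one; the
# ad hoc moral code keeps only the rules common to every candidate model.
# The model names and rules here are toy placeholders, not real claims.

candidate_models = {
    "model_a": {"do not harm others", "keep promises", "rule peculiar to A"},
    "model_b": {"do not harm others", "keep promises"},
    "model_c": {"do not harm others", "rule peculiar to C"},
}

def ad_hoc_moral_code(rule_sets):
    """Intersect the optimal-rule sets derived from every candidate model."""
    sets = iter(rule_sets.values())
    code = set(next(sets))
    for s in sets:
        code &= s  # drop any rule that is not common to all candidate models
    return code

print(ad_hoc_moral_code(candidate_models))
# {'do not harm others'} -- only the rule shared by every candidate survives
```

The toy example also exhibits the trade-off described above: a rule like "keep promises" might well belong to the true model's code, but it is dropped because one candidate model does not support it, while no rule peculiar to a single model can sneak in.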
Can we identify any rules within the intersecting set, or is this set empty? I believe the following rule resides in the intersection: you ought not take actions which would hurt (impose a negative externality on) another individual. In any model in which good, in the traditional sense, is the basis of morality, actively avoiding causing harm to another would certainly be among the optimal policies. First, it is important to note that I am considering only models in which the individual is the unit of analysis, and I am not considering morality concerning aggregations of individuals. That is, I am abstracting from issues concerning the "good of the many", and not addressing the societal trade-off between the common good and the individuals within that society. Second, the above rule focuses on action, and not on inaction, leaving open an entire debate about whether it is immoral not to act in situations in which action may be helpful to another. Lastly, there are surely more rules within the intersection; rules which cover more specific situations and actions than the broad rule identified above. As the previous two comments suggest, identifying these additional rules would be required to more fully construct a suitable moral code.
To conclude, I am not interested in deducing a complete code within this post. I would like a simple, operational moral code which can provide general guidance for common situations: a minimum set of rules we can all agree upon and follow. Limiting the set of rules has several virtues. First, by construction, we all agree on them. No matter your theory of the universe, or of natural laws, the moral code constructed above is within the set of optimal rules that you would deduce from your specific beliefs. Second, it limits the ability to justify actions on moral grounds. For example, if you choose to be an asshole (which is a clear violation of the rule identified above), then you are simply choosing to act immorally. That immoral act may well be justifiable on other grounds; not all acts which we deem good need be moral. But the specific and complex rules required to justify such actions would most likely not reside in the intersection of all possible models. Lastly, it limits the ability of people to pass moral judgement on one another. There are vast differences in the beliefs of individuals, and these differences can cause conflict, sometimes severe. With broad and common definitions of right and wrong, these "fundamental" differences would be removed and moral arguments against specific actions would no longer be valid. Take, for example, homosexuality. Some people (not I) argue that homosexual acts are morally wrong, citing the moral code laid out by, say, their interpretation of the Bible (their model). Now, I would argue that the act of homosexuality has no direct negative impact on other individuals (assuming these acts are consensual). Thus, there would be no moral basis for arguing for or against it. It would simply be a difference in tastes, which is perfectly acceptable. Individuals could still, and certainly would, disagree. However, insisting on and pressing their beliefs would not make them any more right or just; it would just make them assholes. And, as previously mentioned, being an asshole is certainly wrong.