Disclaimer: all opinions below reflect only the author's own thinking and do not claim to represent, in any way, the vision of CLSBE or its Economics Club.
For my inaugural lines in this column, I would like to stay away from the technicalities of our beloved science of Economics and address an issue that has been puzzling me as of late: morals; more specifically, morals in Economics. I do not mean the morality of scientific knowledge itself, even though it may be limited by ethical constraints: that is why there are ethics committees approving research projects, for example, or why there are privacy laws protecting our homes, celebrity or not. The subject whose morality I want to discuss is not economic science itself, but how we approach it.
We can think of several models that are, to a significant extent, based upon morally questionable behavior: dishonesty in information-asymmetry scenarios, which makes signaling necessary; the absolute disregard for others' welfare found in Ponzi schemes, which makes no-Ponzi-game restrictions necessary in macroeconomic models; and the mistrust behind the alleged impossibility of reaching, in the one-shot Prisoners' Dilemma, the outcome that Pareto dominates the Nash equilibrium, to cite a few examples. As Dr. House would often say, "everybody lies".
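The Prisoners' Dilemma logic mentioned above can be made concrete with a minimal sketch. The payoff numbers below are purely hypothetical, but any payoffs with the standard Prisoners' Dilemma ordering behave the same way: defecting is a best response to anything the other player does, so mutual defection is the Nash equilibrium, even though mutual cooperation Pareto dominates it.

```python
# Hypothetical payoffs for a one-shot Prisoners' Dilemma.
# payoff[(my_action, other_action)] = my measurable payoff.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(other_action):
    """An agent caring only about measurable payoffs picks the action
    that maximizes payoff given the other player's action."""
    return max(["cooperate", "defect"], key=lambda a: payoff[(a, other_action)])

# Whatever the other player does, defecting pays strictly more...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation Pareto dominates the (defect, defect) equilibrium.
assert payoff[("cooperate", "cooperate")] > payoff[("defect", "defect")]
```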
Some readers may disagree with me, but I would guess that most, if not all, of you would, at least in an ideal or quasi-ideal world, always do your best to do good and avoid evil. It is beyond the scope of this article to discuss these concepts, but there seems to be a consensus that, generally, truth-telling is good and lying is bad, just as severely harming others for your own benefit is bad and promoting someone else's well-being is good; therefore, an agent with a moral sense would try to choose the morally good option and avoid the morally bad one. And yet, in doing Economics we assume that, given the chance, said agent is going to misbehave. How can we, then, reconcile these facts? In the next paragraphs, I attempt to sketch a solution to this problem, using one particular case: truth-telling in adverse selection.
We know that full information, that is, knowing the truth, is good: any optimization problem is based on an information set that we assume to be either true or as close to reality as possible (here I am assuming that truth and reality are the same absolute thing, whether known to agents or not), and doing otherwise appears to be irrational. If we build our optimization problems on this particular information set and no other, we can say that a preference for truth underlies the way we think and behave, which appears consistent with a moral preference for truth.
But knowing the truth is one thing; revealing it when we know it is another. According to what I have already stated, if an agent faces two actions yielding the same measurable payoffs, one requiring telling the truth and the other requiring lying, the agent will likely tend to choose the morally right action, that is, the one that does not require lying. Problems arise when the measurable payoffs of lying are (at least significantly) higher than those of telling the truth: now, it is not surprising that in many cases agents will prefer to lie; however, we may still see some agents choosing to tell the truth and accepting the lower payoff.
Assuming that there is more to this than the infinite repetition of a Prisoners' Dilemma kind of game with a sufficiently high discount factor – for example, if the "game" is finitely repeated – we can say that agents who choose to tell the truth have a stronger preference for behaving in a way deemed morally correct than agents who choose to lie.
Based on this, we could say that, for any given set of measurable payoffs generated by a moral and an immoral behavior, there should be some cutoff "level of preference for morality" above which an agent would choose the moral behavior over the morally unacceptable one. In other words, in a given situation some people would automatically behave as "angels" and others as "demons", to use a simplifying image.
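This cutoff idea can be sketched as a toy decision rule. The parameter `theta`, the agent's "preference for morality", and the payoff numbers are my own illustrative assumptions, not part of any established model: the agent adds `theta` to the moral action's measurable payoff and misbehaves only when lying still pays more.

```python
def chooses_moral_action(theta, payoff_moral, payoff_immoral):
    """Hypothetical decision rule: the agent adds a moral 'utility' theta
    to the moral action's measurable payoff and picks the larger total."""
    return payoff_moral + theta >= payoff_immoral

# With lying paying 10 and truth-telling paying 6, the cutoff is theta = 4:
cutoff = 10 - 6
assert not chooses_moral_action(3, 6, 10)     # below the cutoff: a "demon", lies
assert chooses_moral_action(5, 6, 10)         # above the cutoff: an "angel", tells the truth
assert chooses_moral_action(cutoff, 6, 10)    # exactly at the cutoff: still moral
```

The cutoff is simply the payoff gap between misbehaving and behaving: agents whose moral preference exceeds that gap act as "angels", the rest as "demons".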
Let us look at the case of adverse selection: to ensure truthful revelation of types through signaling, the principal inserts incentive constraints into his own problem and calculates the contracts that attain this objective, yielding a second-best solution. However, an assumption underlies this kind of reasoning: that the only thing agents care about is the measurable payoff, with no moral contingencies whatsoever. Agents who do behave like this should not be bothered, since the assumption matches their behavior. Well-behaved agents, however, may feel bothered, even insulted, that the principal assumes they will, under certain circumstances, act against the values they hold dear; and they could resent it all the more as an injustice, since this lack of trust by the principal moves the equilibrium away from the first-best.
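The truth-telling (incentive) constraints mentioned above can be sketched with hypothetical numbers. In this toy screening setup, which I am inventing purely for illustration, a contract is a (quantity, transfer) pair and a type-theta agent's measurable payoff is theta times quantity minus the transfer; a menu of contracts satisfies the incentive constraints when each type weakly prefers its own contract to mimicking the other type, so that lying about one's type never pays.

```python
theta = {"low": 1.5, "high": 2.0}   # hypothetical type valuations

def utility(t, contract):
    """Measurable payoff of type t from a (quantity, transfer) contract."""
    quantity, transfer = contract
    return theta[t] * quantity - transfer

def is_incentive_compatible(menu):
    """The truth-telling constraints: every type must weakly prefer
    its own contract to the one designed for any other type."""
    return all(
        utility(t, menu[t]) >= utility(t, menu[other])
        for t in menu for other in menu
    )

# An illustrative menu satisfying the incentive constraints...
menu_ic = {"low": (1, 1.0), "high": (2, 2.5)}
assert is_incentive_compatible(menu_ic)

# ...and one where the high type gains by claiming to be low (lying pays):
menu_bad = {"low": (2, 1.0), "high": (2, 2.5)}
assert not is_incentive_compatible(menu_bad)
```

The point of the check is exactly the assumption discussed in the text: the principal designs the menu so that even an agent caring only about measurable payoffs finds it optimal to reveal his type truthfully.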
But would this conviction correspond to reality? If the scheme proposed above, with its cutoff "level of preference for morality", described reality exactly and every agent involved behaved as an "angel", perhaps they would be correct. However, there are reasons to think this is not the case.
Let us think, as an extreme case, of a drug addict who is struggling to break free from addiction and knows that addiction is a bad thing while being sober is a good thing. Under the very same circumstances, faced with the temptation to take the drug, on one occasion his willpower may allow him to resist, while on another he may succumb to a terrible relapse. This example extends to other situations, from simply getting out of bed in the morning as soon as the alarm clock rings to, returning to extreme cases, Nazi soldiers helping (or at least allowing) Jewish people to escape once or twice during World War II. From here we can see that whether we behave according to or against a certain morally established conduct may vary even independently of observable or measurable outcomes; therefore, one cannot say for sure that an agent faced with a certain situation will act one way or the other. There may be interesting consequences for preferences to be derived from this, such as their structure possibly being volatile even in the short term, which I will not develop for now. (An alternative explanation would be a violation of the rationality assumption, but since that is one of the basic tenets of the neoclassical school, and taking it away could destroy much of what we have learnt about Economics, I do not want to do away with it unless it becomes absolutely necessary.)
Therefore, going back to truth-telling and adverse selection, no agent can say for sure that they are an "angel", and so the truth-telling constraints, that is, the incentive constraints, should not be done away with even if they seem unfair to some agents. Instead, since these agents value truth-telling, they could look at the constraints not as a sign of mistrust, but as a crutch helping them stay true to their principles.