# A Nerdly, Software-Engineering-ish, Computer-Sciencey Approach to the Gay Marriage Debate

Gay marriage is a proposed change that its advocates hope will be an optimization. They say that the current system (society) is sub-optimal in that a particular sub-system (marriage) is not sufficiently general (same-sex couples aren’t allowed to participate). Let’s say marriage is a function by which a man and a woman are combined to create an object conforming to the Family interface:

marriage(Man,Woman) : Family

The proposed optimization generalizes this function:

marriage(Person,Person) : Family

A general rule in software engineering is to maximize the generality of the input parameters (Person instead of Man or Woman). In this light, the second marriage function represents an improvement because it makes the marriage function applicable to more people.
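The generalization can be sketched in Java-like code, in keeping with the essay’s own pseudocode. This is purely illustrative: only Person, Man, Woman, Family, and marriage come from the essay; every other name is hypothetical.

```java
// Illustrative sketch of the essay's two signatures. Only Person, Man,
// Woman, Family, and marriage come from the essay; the rest is invented.
interface Family {}

class Person { final String name; Person(String name) { this.name = name; } }
class Man extends Person { Man(String name) { super(name); } }
class Woman extends Person { Woman(String name) { super(name); } }

class SimpleFamily implements Family {
    final Person a, b;
    SimpleFamily(Person a, Person b) { this.a = a; this.b = b; }
}

public class Signatures {
    // Original, narrow signature: only a Man and a Woman are accepted.
    static Family marriage(Man m, Woman w) { return new SimpleFamily(m, w); }

    // Generalized signature: any two Persons are accepted, including
    // every pair the narrow signature accepted.
    static Family marriageGeneral(Person p, Person q) { return new SimpleFamily(p, q); }

    public static void main(String[] args) {
        Family narrow = marriage(new Man("A"), new Woman("B"));
        Family general = marriageGeneral(new Man("A"), new Man("C"));
        System.out.println(narrow != null && general != null); // prints "true"
    }
}
```

Because Man and Woman are subtypes of Person, every call accepted by the narrow signature is also accepted by the general one; the widening is strictly additive at the signature level.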

However, this focus on the function signature obscures some important issues. For one thing, the marriage function does not stand in isolation, but is embedded at the core of an extremely complex network of functions. In software this is known as the call graph. For example, the network of function calls in one relatively small piece of software looks like this:

[Figure omitted: the call graph of a small program.]

Each node (circle) represents a function, and each edge (line between circles) represents one function calling (using) another. Now imagine the complexity of such a graph for our entire society, in which marriage constitutes only a single (though essential) node, connected to things like employment relationships, governmental functions, child-rearing, religious organizations, and so on. A key function like marriage would sit at the center of the graph, like the large green dot in the middle of the picture. Messing with that node can affect not only it and its immediate neighbors, but potentially every other node in the graph.

It should also be noted that the proposed optimization not only redefines the function signature (who can participate in marriage) but actually provides a new implementation for the function:

class GayFamily implements Family { /* a bunch of experimental code... */ }

In software, the new implementing class could be automatically tested to make sure it meets the requirements of the Family interface. In society, such automated tests are unavailable, and the impacts of this new implementation may not be fully known within the natural lifetime of anyone now living.
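The kind of automated conformance check the paragraph above alludes to can be sketched as follows. The Family interface name comes from the essay; the contract itself (a family reports at least two members) and all other names are invented for illustration, and a real project would use a test framework such as JUnit.

```java
// Minimal sketch of an automated conformance check against a hypothetical
// Family contract. The "at least two members" rule is invented purely
// for illustration.
interface Family {
    int memberCount();
}

class ExperimentalFamily implements Family {
    private final int members;
    ExperimentalFamily(int members) { this.members = members; }
    public int memberCount() { return members; }
}

public class FamilyConformance {
    // Returns true iff the implementation satisfies the toy contract.
    static boolean conforms(Family f) {
        return f.memberCount() >= 2;
    }

    public static void main(String[] args) {
        System.out.println(conforms(new ExperimentalFamily(2))); // prints "true"
    }
}
```

The point of the contrast is that such a check runs in milliseconds against code, whereas no comparable check exists for a society.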

In fact, it’s essentially impossible to conduct the sort of experiment needed to assess whether gay marriage is a good idea. We’re interested not just in the effects on individuals or families (important as they are) but in the effects on entire societies. Such an experiment would require many societies under many circumstances, randomly assigned either to implement or not to implement the change, with sufficient time for the network effects to work themselves out.

Obviously this line of reasoning could be applied to various types of radical social change, and I’m certainly not saying that such change is never justified. But I am saying we need humility and caution when hacking a key function like marriage that the entire system is built around. In the end, this isn’t software. This is society. It’s parents, it’s children, it’s lives.

Because a robust empirical evaluation of this change at the civilizational level will not be forthcoming any time soon, anyone with an opinion must come to it by non-empirical means. And my opinion is that this change isn’t worth the risk.

# Choice

Relative to our own capability to act, believe, intend, or feel in various ways, choice is the process by which we actually do act, believe, intend, and feel. While some in the cognitive sciences feel that our choices may be entirely a product of chemical processes and circumstance, most people believe at some level that choice is a function of some inviolable personal free will or agency.

For purposes of discussion, let $\Omega$ (omega) be the universe of all possible actions, beliefs, intents, and feelings. Let $G$ be a group of people, and let the range of actions, beliefs, intents, and feelings possible for $G$ be called $X$ and constrained by $X \subseteq \Omega$. We could say that $X = \bigcup_{g \in G}{X_g}$ where $X_g$ for a given individual $g$ is the range of actions, beliefs, intents, and feelings possible for that individual.

### Compulsion

We usually think of compulsion as forcing a person to undertake some action/belief/intent/feeling $y$. Alternatively, we can state more generally that compulsion is the ability to restrict a person’s or group’s $X$ arbitrarily. A person bound by options $X_g$ can operate within $X_g$’s range of options. They can even limit themselves further to a smaller subset of $X_g$. But they cannot operate outside their own range of possibilities, in the space defined by $\Omega - X_g$. Governments have some ability to restrict or expand $X_g$ for an individual and $X$ for a group, though thankfully this sort of interference is significantly limited by constitutions in the United States and other countries. Theoretically, a totally despotic government could cause somebody to do an arbitrary $y$ by restricting their set of options until $X_g = \left\{y\right\}$ and $y$ is the only option left.
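Compulsion-as-restriction can be modeled with ordinary set operations. A minimal sketch, with entirely hypothetical option names: restricting a person's $X_g$ is intersecting it with whatever the restrictor permits, and in the extreme case the result is the singleton $\{y\}$.

```java
import java.util.*;

// Sketch of compulsion as restriction of an option set X_g. In the
// extreme case, restriction leaves only the singleton {y}. The option
// strings are hypothetical.
public class Compulsion {
    // Intersect a person's options with whatever the restrictor permits.
    static Set<String> restrict(Set<String> xg, Set<String> permitted) {
        Set<String> r = new HashSet<>(xg);
        r.retainAll(permitted);
        return r;
    }

    public static void main(String[] args) {
        Set<String> xg = new HashSet<>(Arrays.asList("comply", "protest", "emigrate"));
        Set<String> restricted = restrict(xg, Collections.singleton("comply"));
        System.out.println(restricted); // prints "[comply]"
    }
}
```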

### Popular Opinion

In society we tend to accept such restrictions in some situations and oppose them in others. The acceptability of these restrictions seems to be governed both by legal processes and by the feelings of the public at large. Representative government offers no perfect solution, but generally the tension between these two forces results in policy that reflects popular opinion, except in certain cases where the law intervenes on behalf of minority groups.

### The Public Interest

Ultimately, many instances of expanding $X_g$ for one person result in limiting $X_g$ for another. As stated previously, $X = \bigcup_{g \in G}{X_g}$. If the various $X_g$ are not disjoint (i.e., they overlap), then the potential for conflict resides in $X_a \cap X_b$ for any distinct $a, b \in G$. So if one option in my $X_g$ is to park in parking spot 457, then, once I have parked there, parking in spot 457 is no longer an option for anyone else. So who says I can limit other people’s options like that? Well, perhaps I have purchased a parking pass for that spot; using prices to limit demand for a certain option can be effective.
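The notation above translates directly into set operations: $X$ is the union of each person's $X_g$, and the conflict zone is the intersection $X_a \cap X_b$. A sketch, with hypothetical parking-spot strings standing in for options:

```java
import java.util.*;

// Sketch of the essay's notation: X is the union of each person's option
// set X_g, and potential conflict lives in the intersection of X_a and
// X_b. The parking-spot strings are hypothetical.
public class Options {
    static Set<String> union(Set<String> a, Set<String> b) {
        Set<String> u = new HashSet<>(a);
        u.addAll(b);
        return u;
    }

    static Set<String> conflict(Set<String> a, Set<String> b) {
        Set<String> c = new HashSet<>(a);
        c.retainAll(b); // intersection: options both parties could claim
        return c;
    }

    public static void main(String[] args) {
        Set<String> xA = new HashSet<>(Arrays.asList("spot457", "spot12"));
        Set<String> xB = new HashSet<>(Arrays.asList("spot457", "spot99"));
        System.out.println(union(xA, xB).size()); // prints "3"
        System.out.println(conflict(xA, xB));     // prints "[spot457]"
    }
}
```

An empty intersection means no possibility of conflict; a non-empty one marks exactly the options, like spot 457, that some resolution mechanism must allocate.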

But for some instances of $X_a \cap X_b$ there is no established system of conflict resolution. So how should these conflicts (or potential conflicts) be dealt with? Are there some ground rules?

The idea of “general welfare” can be helpful. Indeed, it is enshrined in the Preamble to the U.S. Constitution as one of the primary purposes for the existence of government. And even when it’s not explicitly invoked as a legal doctrine, general welfare is frequently a guiding force. For example, the substantial injury done to the welfare of black children who were denied equal schooling was not outweighed by the minor, supposed benefit of the sense of superiority segregation allowed whites to carry around, and this fact was acknowledged when the justices deciding Brown v. Board of Education acted to “promote the general welfare.”

We all hope that jurists, legislators, and executive officers will be wise and thoughtful judges of what will best promote the benefit of society as a whole. But do they always do so?

# Choose

There comes a point for all of us where we simply have to make a decision: either we choose the flat, gray neutrality of belieflessness, or we choose to see the world in the dynamic contours of faith.

When you believe in nothing—or, rather, when your belief is that there is no right or wrong, no good or bad—then everything becomes the same. Giving your grandma a hug versus, say, a punch in the face is totally neutral as far as morality is concerned, according to this view. The truly principled neutralitist will refuse to judge any one circumstance as better than any other. But most adherents are quite human (as we all are) and succumb to considerations of self-interest. This totally relative comparison then pervades and colors everything, but, in essence, there is still only one reality: me. It’s not that pursuit of self-interest is inherently evil. But, of course, neither is it inherently good.

When the morally positive paradigm is assumed, on the other hand, another metric for choosing one thing over another comes into play. Aside from “What’s in it for me?” there comes to be “Is it right?” These two imperatives can diverge sharply. It seems that, even without penalties imposed by society, something like embezzlement would be highly approved by the self-interest imperative, but disapproved by the morality imperative. These moments of divergence often become a defining experience for the believer, for choosing something other than what raging self-interest dictates both proves and reinforces belief.

There is no such divergence for the self-interested neutralitist, and thus there is no self to overcome, no hill to climb, no peak to summit, no view to behold. Just flat, boring, static neutrality. Even meeting all of the demands of self-interest brings no satisfaction, for, in the neutral paradigm, after death there will be no “self” to remember how much of its interest was achieved, so why achieve it at all?

Which leads to another issue: no morally neutral paradigm asserts anything about the post-death self other than “It ceases to exist.” Why is this? I believe it results from a combination of two things. First, if an individual initially comes to believe that death is the end of the self, moral neutrality frequently follows from the elimination of incentives. Second, though I’m purely speculating here, if an individual initially comes to a stance of moral neutrality, belief in the post-death persistence of self might suddenly seem irrelevant: if there’s no right or wrong here, then it’s probably because there isn’t any there either.

Likewise, any moral positivism that denies the persistence of self past death saps itself of much of the incentive for the believer to follow the moral imperative.