My morality 1.0

The post below was written gradually over the last couple of years. I am no longer sure I agree with its approach, and I will explain why in future posts.

My beliefs

By “morality” here I simply mean some rules that we strive to follow when deciding our actions. In my opinion there is no objective moral truth, no set of rules that is correct. After all, how would something being morally true manifest itself? Also, in the words of Brian Tomasik, “why would I care about what the moral truth was even if it existed? What if the moral truth commanded me to needlessly torture babies?” That means that all moral beliefs are unjustified. Objectively speaking, a moral belief like “people ought not to lie” is just as unjustified as “Jews ought to be killed”. Somewhat unusually, instead of direct unjustified moral beliefs, I have unjustified beliefs about what properties moral beliefs ought to have.

Morality ought to be based on emotions

Even if there is no objectively right morality, we can still have some rules we strive to follow. But on what grounds could we possibly choose such rules? How can we decide what is valuable as an end, not as a means? The only base that makes sense to me is emotions. That makes any morality subjective. Notice that sometimes our emotions contradict each other. E.g. you want to lose weight, but you also want to eat a cake. And you want the wish to lose weight to win this battle. A perfect version of yourself would not eat the cake. I believe that the emotions a perfect version of ourselves would listen to ought to be the ones we base our morality on. I shall call these top-level emotions.

Morality ought to be rational

Once value is established using emotions, I ought to reason about it rationally.[1] E.g. let’s say I get to know a random starving child. I know that there are 100 000 more children who are starving just like him. However, my wish to feed them all would not be 100 000 times bigger than my wish to feed this one child. I am not even capable of experiencing such a strong wish. But I believe that I ought to behave as if I cared about feeding 100 000 children 100 000 times more than about feeding one,[2] because I believe that I ought to be rational. This is just one of many flavours of what I call rationality. Another example: I think it’s rational to behave as if a 1% chance of feeding 100 children were equal to feeding 1 child. In general, my decisions should be consistent, with no Dutch Books possible.
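To make the expected-value arithmetic behind that last claim explicit (this is just the numbers from the text, written out as a sketch):

\[
\mathbb{E}[\text{children fed}] = 0.01 \times 100 = 1
\]

i.e. a 1% chance of feeding 100 children and certainly feeding 1 child have the same expected number of children fed, which is why I try to treat them as equally good.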

Morality ought to be impartial

Let’s say I must choose: either I suffer intense pain for one hour, or 2 people I don’t know each suffer the same amount of pain for one hour. Let’s also say that I (or the 2 others) would not lose that hour of life and would not remember the experience (to rule out justifications like “I would have done something important during that hour”). I believe that I ought to choose to suffer the pain myself. I wouldn’t want to, but I believe I ought to. In an analogous situation I believe I ought to choose my mother suffering instead of 2 strangers suffering, even though I subjectively care about my mother much more. This is a standard assumption in moral philosophy, but in real life most people don’t share this belief. E.g. the majority of people feel no moral guilt about spending most of their resources on themselves and their families.

My top-level emotions and further reasoning

A list of my top-level emotions and the conclusions that follow from them:

  1. I want to maximise happiness and minimise suffering for me and people close to me. However, I believe I ought to be impartial, so I try to care about the happiness/suffering of all people equally. That led to utilitarianism. After considering the argument from species overlap and other arguments against speciesism, I realised that caring only about humans is irrational, so now I care about other animals too.
  2. I want to make valuable contributions towards utilitarian goals. The source of this unusual wish is probably me being a utilitarian for a very long time. For a long time I mistakenly thought that utilitarianism is the objectively correct morality. I often judged myself by how good an action is according to utilitarianism, and that may have developed this wish in me. It brings the genuineness of the reasoning above into question: it’s possible that when I invented utilitarianism during childhood,[3] my reasons for committing to it were different from what I present here (e.g. a wish to be different). Maybe I am now just rationalising my utilitarianism with the reasoning presented above. Does it matter what my real/original reasons for utilitarianism were? I’m not sure. Since there is no objective morality, how could some reasons for choosing a morality be good or bad? What does good even mean in this sentence? I guess this wish simply strengthens my utilitarianism.
  3. I want a world rich in species, animals, culture and other wonders to exist. I discovered this emotion when I reasoned that we ought to destroy as much nature as possible to avoid wild animal suffering. But the thought of the wonders of nature not existing simply made me extremely sad. That emotion had nothing to do with selfishness; I wanted this rich world to exist whether I exist or not. Then I remembered that, according to science, we probably live in one of many parallel universes, and our own universe is huge, if not infinite. If there are a lot of planets with almost identical flora and fauna, I am less sad over a destroyed rainforest on Earth. For some reason I must be sure that these other planets exist for this to work. It barely matters to me emotionally whether there is a 0% or a 90% probability that these planets exist; it only makes a difference if it’s 100%. This is not rational. But what resolution would be rational? See applying math on emotions.
  4. I want to avoid overriding others’ wishes. Imagine a person living on a desert island who is greatly suffering but wants to live because he, for example, thinks that premature death is immoral. Something feels wrong about saying “Even though you want to live, I will kill you because I’ve calculated that you would experience more suffering than happiness in your future life. And according to a morality I made up, suffering and happiness are all that matters.” I’m unsure whether this really is a separate top-level emotion or just confusion about what happiness is. I had discussions about it here and here.
  5. I want to be honest. In thought experiments like “would you lie to someone who is dying to make him happier, when no one else will ever know because the universe will be destroyed shortly after”, part of me feels that there is something wrong about lying there. I used to bite the bullet and say that I would lie, but I think that was confusion on my part. I think that lying by omission, not technically lying, etc.[7] are also forms of dishonesty that are just as bad. I think that the only virtuous way to go is to always try to create as accurate a portrayal of reality in others’ brains as possible, even when drastically better utilitarian outcomes could be achieved by lying. Although I’d be fine with lying to people with dementia who will soon forget the conversation anyway. For a long time I thought that I was only honest for instrumental reasons, but now I think that it’s a separate top-level emotion.
  6. Accomplishments. All else being equal, I slightly prefer a person to be happy because he accomplished something, not because he consumed cocaine given to him by utilitarians (as in all proper thought experiments, the universe is destroyed shortly afterwards, so no long-lasting effects of the cocaine count).
  7. Maybe I value cooperation a bit; I’m not sure.

I don’t know how to make trade-offs between these different values, and I am pretty sure there is no solution to the problem. Utilitarianism is the strongest of them, so I usually just follow it. But I’m unsure whether I should continue doing that.

My views vs. views of my idealised self

If I had to choose whose values to maximise:

a) The ones written down in this text

b) The ones I would write down after 10 more years of thinking about morality, if I thought faster, experienced more, talked with all the people and read all the books. My own personalised Coherent Extrapolated Volition, if you will.[4]

Of course I would take b). I have some evidence about what b) looks like. I can look at the conclusions of other people who have thought a lot about morality. Even if I don’t agree with their conclusions, I assign them some moral weight for this very reason. For example, some people think that lying is inherently wrong. I think it’s usually (or always?) wrong to lie, for instrumental reasons. But if I thought about it more, or if I had different experiences, there is a chance that I would agree that lying is inherently wrong, so I behave as if I thought that lying is a little bit wrong inherently.[5]

If some experience would change what I think, I try to behave as if I had had that experience. E.g. it seems that people who have experienced extreme suffering are more likely to be negative-leaning utilitarians.[6] As a result I am slightly negative-utilitarian. But not by much, because it could be that experiencing extreme happiness would make us “positive utilitarians” (i.e. make us value happiness more than suffering). However, I am less sure of that; there is no empirical evidence.

Notes

[1] Rational is a bit of an unclear term. Maybe we should stop using it, but that’s difficult. By rational I mean noticing logical flaws in our emotions and trying to behave as if those flaws didn’t exist. By logic I mean patterns of thinking that we can’t imagine not being correct. E.g. we can’t imagine how 2+2 could not equal 4, or how from “all swans are white” it would not follow that a given swan is white. I am sure there are long philosophical texts that define these terms better, but I think this is good enough for conveying my thoughts. It’s interesting to note that logic could be false too. My favourite expression of this thought is here.

[2] Or should I care about feeding 1 child 100 000 times less than about feeding 100 000? See applying math on emotions.

[3] Interestingly, I invented utilitarianism from scratch and it was my morality long before I knew the term utilitarianism. I had no clue that my idea was not original. I was shocked to discover the Wikipedia article about utilitarianism.

[4] Should I take the CEV of humanity instead? No, I am not altruistic at THAT level. I want to maximise MY values. Besides, my CEV may choose to maximise the CEV of everyone.

[5] Note that, for the same reason, if people believe there is an omniscient, maximally wise god, it might make sense for them to adopt his moral views even if they don’t understand them and don’t agree with them.

[6] E.g. they’d rather prevent one person’s extreme suffering than make one trillion people slightly happier (or prevent mild suffering for a trillion people), because in their opinion extreme suffering is incomparably worse.

[7] One way I sometimes catch myself being dishonest is by trying to avoid obtaining information that I would then have to reveal to avoid lying by omission. E.g. (a real situation) a group I am part of is trying to decide between donating a lot of money to a charity that I consider effective (Against Malaria Foundation) and a charity that I consider ineffective (Switchback). As soon as the effective charity is in the lead, I stop collecting information about both charities, out of fear of finding something that could put the other charity in the lead.
