My morality 2.0: theory

I think this post is a worthwhile read for utilitarians who are also moral anti-realists. I dismiss my old morality and try to create new life goals for myself. I didn't know what kind of goals/morality I'd end up with when I started writing this text, and the conclusions surprised me.

Written-down views vs. real views

I used to think that I was simply a utilitarian with some selfish tendencies. When I listened to other people saying they value different things, I used to feel what they meant, but I would bite bullets and say that I only care about happiness and suffering. Maybe I did that out of a desire to be different, or out of a desire to keep my morality simple and elegant, or because I didn't want to admit I was wrong. When I allowed myself to feel and explore other emotions, I realised that they were in me all along, repressed under my utilitarianism. I used to always dismiss them, never let them "speak". I had a very simplistic model of myself. Listening to myself made that model richer and more complex; I added more moral principles/emotions to it. I carefully described that model here.

Still, when deciding what to do, I didn't really query myself for an answer. Instead, I was querying that model of myself. I was always asking what "the person I think I am" would want in this situation, not what I want. But no matter how much I work on it, my model of myself and my morality will always be incomplete. There are actions that feel wrong even though I can't pinpoint what moral principle/top-level emotion they break.[1]


How do I even choose which intuitions are my moral principles (or top-level emotions, as I call them in My morality 1.0) and which are biases that should be discarded? E.g. if I care about people close to me more than people far away, is that a bias or should that be my moral principle? I decided a long time ago that it's a bias because I thought that a perfect version of me would care about a human the same amount regardless of how far away (s)he lives. In other words, I would swallow a pill that would eliminate distance as a factor of my care (umm, probably). So caring about people close to me is not what I call a top-level emotion. But it is an emotion that's near the top. Should I really totally ignore it?

Should I think in terms of moral principles/top-level emotions at all, or should I just do what feels right without trying to understand why it feels right? If I just do what feels right, then I no longer have the problem where I imperfectly model my own morality. But it feels wrong to trust my raw emotional responses for moral guidance because they are irrational, partial and inconsistent. But I’m no longer sure it’s such a bad thing. Why did I want to be rational, impartial and consistent in the first place?

I've been dealing with this internal struggle between different moral principles/top-level emotions, emotions that are near the top, and selfishness. I've been trying to only listen to my moral principles, but that causes me to feel guilt every time I'm enjoying myself, and my life goals feel like external obligations that are forced upon me. This leads to procrastination, just like described here. Old me would have said that it's too early to think about whether I procrastinate, because whether I'll get anything done only matters after I define what my goals are. But I want to avoid procrastination not just because I want to achieve some goals. I want to have life goals that come from within, that make me excited and genuinely motivated, that lead to me living a rich and fulfilling life. Hmm, so I do seem to know what I want at this high level. And I know that what I want is too complex to have clearly defined moral principles that prescribe the right action for all situations. By the way, the position that there should be no defined moral principles is called moral particularism, and there are some other convincing arguments for it here.

What does it mean to do what feels right?

So I decided that I want to do what feels right and have no defined moral principles. That’s nice but I still need some more concrete goals and rules of thumb. I had a clearly defined purpose through much of my life and I simply don’t know how to make decisions without it. I see at least 3 possible ways to “do what feels right”:

  1. Just do what I want to do in the moment. I'd probably just eat cake and watch TV all the time, which would make me miserable in the long term and almost everyone else worse off. Clearly the wrong option.
  2. Judge the desirability of possible world states using my raw gut feeling and then work rationally to achieve them. That would be a form of consequentialism where an action is good if its consequences make the world better according to my gut feeling. E.g. if I feel that the world would be better if it had more flowers, then planting flowers is good. This feels like an improvement on utilitarianism. Once I thought that a remote possibility of a utilitronium shockwave dominates all the utilitarian calculations and wanted to work towards it as an ultimate goal, despite the fact that a world filled with nothing but utilitronium doesn't sound very exciting or desirable to me. Maybe I should work towards world states that I actually desire to exist.
  3. Do what feels like the most ethical thing to do without worrying about any consequences or rules. E.g. it feels a bit wrong to go to a strip club, so I don't go, even though I don't foresee any bad consequences and don't have any moral principles it would break.

These possible moral systems seem theoretically interesting, but I am still not super-excited about living according to any combination of them. It feels like they still paint a very incomplete picture of what I care about. A combination of 2 and 3 would encompass most of my moral intuitions, but it would still maintain this internal struggle between different moral principles and selfishness that I've been dealing with. This struggle causes me to feel guilt every time I'm enjoying myself; life feels forced and goals seem artificial. I don't want that. I don't want there to be any "shoulds" that feel like external obligations; I want all my wishes to be integrated into one system and to forget the word "morality" altogether. Yes, even without morality I push myself to pursue memorable, exciting or productive activities instead of playing video games all the time, but such pushing feels a bit more internal and natural than moral obligations.

I think I need a different approach. I want to formulate goals that “make me excited and genuinely motivated, that lead to me living a rich and fulfilling life”. What kind of goals would make me maximally excited? To answer this question I came up with some more specific questions:

  • What kind of person do I want to be? (as an end, not as a means)
  • What do I want my life to look like?
  • Assuming I'm no longer alive, how do I want the world to look?
  • What would make me feel a sense of accomplishment? What would I have to accomplish to feel on my deathbed that my life was well spent and not wasted?

My wishes are quite unstable, so I answer these questions often to see how they fluctuate. I do that here.

 

But should I really give up the consistency of explicitly expressed moral claims completely?

First, I want to clarify the question. I will definitely still use rationality rather than emotions when judging the validity of statements that are either true or false, like "London is the capital of the UK", "7+9=16" and "Forced molting processes kill 5 to 10 percent of egg-laying hens". However, I believe that moral claims (like "suffering is bad", "lying is wrong") are neither true nor false. Moral claims can still be inconsistent, though. E.g. I used to hold these two contradictory beliefs:

  1. Only suffering of highly intelligent beings matters
  2. Suffering of people with severe mental disabilities matters but suffering of chimps doesn’t, even if chimps are more intelligent

When the argument from species overlap was pointed out to me, I let go of belief 1, and I want to continue making moral progress like this. How should I react when other such inconsistencies are pointed out in my morality or behavior?

Looking at my previous morality, you could say that I used to put rationality and impartiality before emotions. Maybe I should stop doing that, because it leads to me forcing myself to do things I don't feel like doing. If a logical argument about morality doesn't move me emotionally, maybe I should ignore it even if I can't verbalize any reason for ignoring it. I can't always know why it doesn't convince me, because I don't have a full and complete model of myself. The problem is that if I take such a position, I'm unsure if I'll ever make any more moral progress. As I mentioned, I was first convinced that non-human animals matter intellectually. I don't remember how much time it took for me to start caring about non-human animals emotionally. I know I care now. If I had taken the approach I've just suggested, would I have dismissed the argument from species overlap because it didn't move me emotionally right away? Ignoring inconsistencies seems wrong. I could at least list the inconsistencies I notice in a document. E.g. I think there are solid arguments to disvalue the suffering of bugs and reinforcement learners, but emotionally I still don't care about them. I'm not sure I want to foster such care, but I could try a little. I could just list my inconsistencies in a document and see what happens. I don't have to answer all the questions in ethics at once; I just need to decide how to live my life for now.

Wanting vs. liking

This quote:

The first category, "things you do even though you don't like them very much" sounds like many drug addictions. Smokers may enjoy smoking, and they may want to avoid the physiological signs of withdrawal, but neither of those is enough to explain their reluctance to quit smoking. I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.

Think of the second category as “things you procrastinate even though you like them.” I used to think procrastination applied only to things you disliked but did anyway. Then I tried to write a novel. I loved writing. Every second I was writing, I was thinking “This is so much fun”. And I never got past the second chapter, because I just couldn’t motivate myself to sit down and start writing. Other things in this category for me: going on long walks, doing yoga, reading fiction. I can know with near certainty that I will be happier doing X than Y, and still go and do Y.

Neuroscience provides some basis for this. A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute – they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn't show up in the MRI. Knock out "wanting", and the rats seem to […]

made me realise that I also want to be the kind of person who doesn't do things that are neither enjoyable nor useful, like eating chips.

Notes

[1] E.g. I created a meetup.com group, Effective Animal Altruism London. When someone stepped down from a similar meetup group, London Animal Rights and Welfare, and asked who could take it over, I took over and did nothing but duplicate events from my original meetup group so that more people would show up. When I look at my top-level emotions, I can see why I did it – according to my intuition, such advertising of EA has a positive utilitarian impact (maybe it doesn't, but let's just assume that it does). There is nothing else in my written-down principles that addresses the situation. Yet it feels bad to do this sort of thing. I could add a new top-level emotion. But it feels pointless because I will never be able to list all of them.


2 thoughts on "My morality 2.0: theory"

  1. (reminder to myself) The book "Moral Tribes" makes some arguments (the camera settings analogy) that we shouldn't trust our emotions when we are dealing with situations that those emotions weren't evolved to solve. Most emotions just evolved to solve the tragedy of the commons. Does this change anything?

