Comments

Heh, I got the same feeling from the Dutch people I met. My ex-wife once did a corporate training thing where they were learning about the power of "yes and" in improv and in working with others. She and one other European person (from Switzerland maybe?) were both kinda upset about it and decided to turn their improv into a "no but" version.

Ya I definitely took agreeableness == good as just an obvious fact until that relationship.

This isn't as strong of an argument as I once thought


What is the "this" you're referring to? As far as I can tell I haven't presented an argument.

Do you have a link to the job posting?

I would say it feels like my brain's built in values are mostly a big subgoal stomp, of mutually contradictory, inconsistent, and changeable values. [...]

it feels like my brain has this longing to find a small, principled, consistent set of terminal values that I could use to make decisions instead. 


Here's a Slate Star Codex piece on our best guess at how our motivational system works: https://slatestarcodex.com/2018/02/07/guyenet-on-motivation/. It's essentially just a bunch of small, mostly independent modules, all fighting for control of the body so it acts according to what they want.

I don't think there's any way out of having "mutually contradictory, inconsistent, and changeable values." We just gotta negotiate between these as best we can.

There are at least a couple problems with trying to come up with a "small, principled, consistent set of terminal values" you could use to make decisions.

  1. You're never gonna be able to do it in a way that covers all edge cases.
  2. Even if you were able to come up with the "right" system, you wouldn't actually be able to follow it, because our actual motivational systems aren't simple rule-following systems. You're gonna want what you want, even if your predetermined system says to do otherwise.
  3. You don't really get to decide what your terminal values are.  I mean you can fudge it a bit, but you certainly don't have complete control over them (and thank god).

Negotiating between competing values isn't something you can smooth over with a few rules. Instead it requires some degree of finesse and moment-to-moment awareness. 

Do you play any board games? In chess there are a lot of what we could call "values": keep your king safe, control the center, don't double your pawns, etc. But there's no "small, principled, consistent set of" rules you can use to negotiate between these. It's always gotta be felt out in each new situation.

And life is much messier and more complex than something like chess. 


It sounds like both of you may have gone through the exercise of find terminal goals that work for you.

I "found terminal goals" in the sense that I tried to figure out what were the main things I wanted in life. I came up with some sort of list (which will probably change in the future). It's a short list, but definitely not principled or consistent :D. Occasionally it does help to keep me focused on what matters to me. If I find myself spending a lot of time doing stuff that doesn't go in one of those directions, I try to put myself more on track.

If you want, I can try to figure out how I got there. But it seems like you're more concerned with the question of deciding between competing values.

Inclusive genetic fitness seems like it may be a reasonable terminal goal to replace the subgoal stomp.

Ya, definitely don't do that. If you did, you'd just spend all your time donating sperm or something.

Answer by weathersystems, Jun 19, 2022

While these sound good, the rationale for why these are good goals is usually pretty hand wavy (or maybe I just don't understand it).


At some point you just have to start with some values. You can't "justify" all of them; you've got to start somewhere. And there is no "research" that could tell you which values to start with.

Luckily, you already have some core values.

The goals you should pursue are the ones that help you realize those values. 


but there are a ton of important questions where I don't even know what the goal is

You seem to think that finding the "right" goals is just like learning any mundane fact about the world. But people can't tell you what to want in life the way they can explain math to you. It's just something you have to feel out for yourself.

Let me know if I'm misreading you.

Maybe a dumb question. What's an EM researcher? Google search didn't do me any good.

What do you think about the vulnerable world hypothesis? Bostrom defines the vulnerable world hypothesis as: 

If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

(There's a good collection of links about the VWH on the EA forum). And he defines "semi-anarchic default condition" as having 3 features:

1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population. 

2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved. 

3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves.

To me, the idea that we're in a vulnerable world is the strongest challenge to the value of technological progress. If we are in a vulnerable world, the time we have left before civilizational devastation is partly determined by our rate of "progress."

Bostrom doesn't give us his probability estimate that the hypothesis is true. But to me it seems quite likely that at some point we'll invent the technology that will screw us over (if we haven't already). AI and engineered pandemics are the scariest potential examples for me.

Do you disagree with me about the probability of us being in a vulnerable world? Do you think we can somehow avoid discovering the civilization-destroying tech while only finding the beneficial stuff?

Or do you think we are in a vulnerable world, but that we can exit the "semi-anarchic default condition"? Bostrom's suggestions for exiting it (like complete surveillance combined with a police state) seem quite terrifying.

If you've written or spoken about this somewhere else, feel free to just point me there.

I'm not so sure I get your meaning. Is your knowledge of the taste of salt based on communication?

Usually people make precisely the opposite claim. That no amount of communication can teach you what something subjectively feels like if you haven't had the experience yourself.

I do find it difficult to describe "subjective experience" to people who don't quickly get the idea. This is better than anything I could write: https://plato.stanford.edu/entries/qualia/. 
