Sarokrae

I wouldn't worry too much about the comments. Even Guardian readers don't hold the online commentariat of the Guardian in very high esteem, and it's reader opinion, not commenter opinion, that matters the most.

It seems like the most highly upvoted comments are pretty sane anyway!

I've read a fair number of x-risk-related news pieces, and this was by far the most positive and non-sensationalist coverage that I've seen by someone who was neither a scientist nor involved with x-risk organisations.

The previous two articles I'd seen on the topic were about 30% Terminator references. This article, while not necessarily a 100% accurate account, at least takes the topic seriously.

This summary is reasonably close to my opinion.

In particular, outright denouncement of the ordinary social norms used by (and wired into) most flesh people, combined with endorsement of an alternative system involving much more mental exhaustion for people like me, feels so much like defecting that I would avoid interacting with anyone signalling such opinions.

Actually, I don't think you're right. I don't think there's much consensus on the issue within the community, so there's not much of a conclusion to draw:

Last year's survey responses to "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" were as follows:

  • Pandemic (bioengineered): 272 (23%)
  • Environmental collapse: 171 (14.5%)
  • Unfriendly AI: 160 (13.5%)
  • Nuclear war: 155 (13.1%)
  • Economic/Political collapse: 137 (11.6%)
  • Pandemic (natural): 99 (8.4%)
  • Nanotech: 49 (4.1%)
  • Asteroid: 43 (3.6%)

I'm pretty sure this is one of the main areas Prof David Spiegelhalter is trying to cover with experiments like this one. He advises the British government on presenting medical statistics, and his work is worth a read if you want to know how to phrase statistical questions so that people answer them correctly more often.

This post reminded me of a conversation I was having the other day, where I noted that I commit the planning fallacy far less than average because I rarely even model myself as an agent.

A non-exhaustive list of them in very approximate descending order of average loudness:

  • Offspring (optimising for the existence, health and status thereof. This is my most motivating goal right now, and most of my actions are aimed at optimising for it, in more or less direct ways.)

  • Learning interesting things

  • Sex (and related brain chemistry feelings)

  • Love (and related brain chemistry feelings)

  • Empathy and care for other humans

  • Prestige and status

  • Epistemic rationality

  • Material comfort

I notice the problem mainly because the loudness of "Offspring" varies with hormone levels, whereas that of "Learning interesting things" doesn't. In particular, when I optimise almost entirely for offspring, cryonics is a waste of time and money, but on days when "learning interesting things" ranks higher, it isn't.

As an "INFJ" who has learned to think in an "INTJ" way through doing a maths degree and hanging out with INTJs, I also agree that different ways of problem solving can be learned. What I tend to find is that my intuitive way of thinking gets me a less accurate, faster answer, which is in keeping with what everyone else has suggested.

However, my intuitive thinking also has an unusual quirk: although my strongly intuitive responses are fairly inaccurate (correct about half the time), that hit rate is far higher than they have any right to achieve, given how specific the correct ones are. My intuitive thinking usually applies to people and their emotions, and I frequently get very specific hypotheses about the relationships between a set of people. Learning logical thinking has allowed me to first generate hypotheses with intuition, then slowly go through and falsify the wrong ones, which leads me to answers I think I couldn't possibly reach with logic alone, since my intuition uses things like facial expressions, body language and voice inflections to gather far more data than I could consciously.

"Whichever subagent currently talks in the "loudest" voice in my head" seems to be the only way I could describe it. However, "volume" doesn't lend to a consistent weighting because it varies, and I'm pretty sure varies depending on hormone levels amongst many things, making me easily dutch-bookable based on e.g. time of month.

I'm not entirely sure. What questions could I ask myself to figure this out? (I suspect figuring this out is equivalent to answering my original question)
