All of shokwave's Comments + Replies

I went from ardently two-boxing to ardently one-boxing when I read that you shouldn't envy someone's choices. More general than that, actually; I had a habit of thinking "alas, if only I could choose otherwise!" about aspects of my identity and personality, and reading that post changed my mind on those aspects pretty rapidly.

An extreme form of brain damage might be destruction of the entire brain. I don't think that someone with their entire brain removed has consciousness but lacks the ability to communicate it; suggesting that consciousness continues after death seems to me to be pushing well beyond what we understand "consciousness" to refer to.

The brain seems to be something that leads to consciousness, but is it the only thing?

Maybe other things can "lead to" consciousness as well, but what makes you suspect that humans have redundant ways of generating consciousness? Brain damage empirically causes damage to consciousness, so that pretty clearly indicates that the brain is where we get our consciousness from.

If we had redundant ways of generating consciousness, we'd expect that brain damage would simply shift the consciousness generation role to our other redundant system, so the... (read more)

Adam Zerner:
It causes damage to our ability to communicate our consciousness. For all we know, people with brain damage (and who are sleeping, unconscious, dead, etc.) may be conscious, but just unable to communicate it to us (or remember it when they wake up). A concrete example might help. Consciousness could exist on some small quantum or string level, or another small level we haven't even discovered yet. It's possible that this level is undisturbed when we die, and that we continue to be conscious.

Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect with an intelligence explosion death becomes very rare (or horrifically common, like, extinction).

Only a few people die. Once they figure out how to cure death, they'll stop dying. The vast majority of members will exist after that point.

I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).

Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and... (read more)

It seems like the War on Terror, etc, are not actually about prevention, but about "cures".

Some drug addiction epidemic or terrorist attack happens. Instead of it being treated as an isolated disaster like a flood, which we should (but don't) invest in preventing in the future, it gets described as an ongoing War which we need to win. This puts it firmly in the "ongoing disaster we need to cure" camp, and so cost is no object.

I wonder if the reason there appears to be a contradiction is just that some policy-makers take prevention-type measures and create a framing of "ongoing disaster" around it, to make it look like a cure (and also to get it done).

One would be ethical if one's actions end up having positive outcomes, regardless of the intentions behind those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plot would have performed a very ‘morally good’ action.

This seems intuitively strange to many; it definitely is to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.

If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example,... (read more)

If all 'moral worth' meant was the consequences of what happened, I just wouldn't deem 'moral worth' to be that relevant to judging. It would seem to me like we're making 'moral worth' into something kind of irrelevant except from a completely pragmatic standpoint. I'm not sure that saying 'making the best decision you could is all you can do' is that much of a shortcut. I mean, I would imagine that a lot of smart people would realize that 'making the best decision you can' is still really, really difficult. If you act as your only judge (not just all of you, but only you at any given moment), then you may have less motivation; however, it would seem strange to me if 'fear of being judged' were the one thing that keeps us moral, even if it happens to become apparent that judging is technically impossible.

There's no other source of morality and there's no other criterion to evaluate a behaviour's moral worth by. (Theorised sources such as "God" or "innate human goodness" or "empathy" are incorrect; criteria like "the golden rule" or "the Kantian imperative" or "utility maximisation" are only correct to the extent that they mirror the game theory evaluation.)

Of course we claim to have other sources and we act according to those sources; the claim is that those moral-according-to-X behaviours are im... (read more)

What makes the game theory evaluation correct?
But we are our adaptations. Are you claiming morality should be defined by evolutionary fitness? (So we should tile the universe by our DNA?) How is that better than other external sources of morality? We already have a morality, it doesn't matter (for the purpose of being moral) where it came from, be it God or evolution. Also, saying the morality comes from solving PD doesn't help, since PD already assumes the agents have utility functions. Game theory is only directly relevant to rationality, not morality. If you and I are playing a non-zero sum game then we better cooperate for our own good. But the fact that my utility function already includes your well-being is completely independent. I agree that evolutionary thinking can be helpful to figure out what our morality is (since moral intuition is low bandwidth and noisy), but I'm against imaginary extrapolations of evolution.

Sorry, I was trying to get at 'moral intuitions' by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions - to try and come up with a parsimonious theory that would have produced these behaviours - and then the outputs are right or interesting only insofar as they approximate game-theoretic-good actions or maxims.

Even given other technological civilisations existing, putting "matter and energy manipulation tops out a little above our current cutting edge" at 5% is way off.

Way off in which direction?
There's a lot you can do on the surface of a clement planet, and a lot you can do in a solar system without replicators that eat everything. Also depends on what you mean by 'above'.
What do you mean "just"?
Unsure what you mean by the 'just'. Should it be more, and what is different about how we value morality based on its origin?
By "concept of morality", do you mean moral intuitions or the output of ethical theories?

so they round me off to the nearest cliche

I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.

For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".

I like it - do you know if it works in face-to-face conversations?

I suspect the real issue is using the "nutrients per calorie" meaning of nutrient dense, rather than interpreting it as "nutrients per some measure of food amount that makes intuitive sense to humans, like what serving size is supposed to be but isn't".

Ideally we would have some way of, for each person, saying "drink some milk" and seeing how much they drank, and "eat some spinach" and seeing how much they ate, then compare the total amount of nutrients in each amount on a person by person basis.

I know this is not the correct meaning of nutrient dense, but I think it's more useful.

I think the best we can hope for in this context is to have a number of distinct and precise metrics (like nutrients per calorie, nutrients per dollar, and nutrients per bulk), feed these to intuition, and decide accordingly. In other words, when it comes to food, I think we should make decisions according to a "rational" rather than a "quantified" model, given the difficulties of coming up with adequate definitions of a "serving size". Your approach wouldn't work, I believe, because how much people eat of a given food often depends on the presence or absence of other complement and substitute foods.
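As a toy illustration of why the choice of denominator matters, here are made-up numbers (not real nutrition data) showing the per-calorie and per-dollar rankings disagreeing:

```python
# Toy illustration (made-up numbers) of how the ranking of foods can
# flip depending on which denominator you pick for "nutrient density".
foods = {
    # name: (nutrient score, calories, dollars, grams)
    "spinach": (10, 23, 2.0, 100),
    "milk": (8, 64, 0.5, 100),
}

def density(food, denominator_index):
    values = foods[food]
    return values[0] / values[denominator_index]

per_calorie = max(foods, key=lambda f: density(f, 1))  # nutrients per calorie
per_dollar = max(foods, key=lambda f: density(f, 2))   # nutrients per dollar
```

With these particular numbers spinach wins per calorie and milk wins per dollar, which is exactly the kind of disagreement that forces the final call back onto intuition.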

Counterpoint: Beeminder does not play nice with certain types of motivation structures. I advocated it in the past; I do not anymore. It's probably not true for you, the reader (you should still go and use it, the upside is way bigger than the downside), but be aware that it's possible it won't work for you.

Yeah. Beeminder doesn't work for me either - nor do most online punishment-based motivators. My problem with it is that it doesn't punish you for failing to do the thing you need to do. It punishes you for failing to record the fact that you did the thing you need to do. So if you're time-poor (like me) and still managed to do the thing... but didn't have time to go online and tell beeminder that you did the thing... you still get punished. :(

I mentioned on Slate Star Codex as well, it seems like if you let consequentialists predict the second-order consequences of their actions they strike violence and deceit off the list of useful tactics, in much the same way that a consequentialist doctor doesn't slaughter the healthy traveler for the organ transplants to save five patients, because the consequentialist doctor knows the consequence of destroying trust in the medical establishment is a worse consequence.

So, officially there is a battle between X and Y, and secretly there is a battle between X1 and X2 (and Y1 and Y2 on the other side). And people from X1 and X2 keep rationalizing about why their approach is the best strategy for the true victory of X against Y (and vice versa on the other side).

This part doesn't make clear enough the observation that X2 and Y2 are cooperating, across enemy lines, to weaken X1 and Y1. 2 being politeness and community, and 1 being psychopathy and violence.

Disclaimer: I mentioned psychopaths and violent people, but that's in the context of an actual war and actual killing. If we only speak about "fighting" metaphorically, we need to appropriately redefine what it means to be "violent". In the context of verbal internet wars, the analogy of psychopaths would be trolls, and the analogy of people who enjoy violence would be people who enjoy winning debates. For the internet version of Genghis Khan, the greatest joy is to defeat his enemies in public discourse, make them unpopular, destroy their websites, and take over their followers. The important thing is to win the popularity contest; having a better model of reality is only incidental. The thing to protect is the pleasure of winning, but other people's applause lights can be used strategically.

A person from X1 has only friends in X1 and X2. A person from X2 has friends in X1, X2, and Y2. Assuming that having more friends is an advantage, the mutual politeness creates an advantage for people from X2 and Y2, and this is why they are doing it. I'd call that cooperation. In their case, cooperation is both a strategy and a goal.

In a way, people from X1 and Y1 also cooperate, but this cooperation is purely instrumental, as they hate each other. However, any act that successfully increases the mutual hate between groups X and Y helps them both, because it reduces their relative disadvantage against the 2s.

(Rational) Harry

Seemed eminently more readable than rationalist!Harry to me when I first encountered this notation, although now it's sunk in enough that my brain actually generated "that's more keystrokes!" as a reason not to switch style.

Just curious (and not necessarily addressed to you specifically), but what on Earth is wrong with the standard, conventional English notation for this, which is a hyphen? E.g. "Rational-Harry" etc.

I don't subvocalise, and when I learned that other people do I was very surprised. A data point for subvocalisation being a limit on reading speed: I read at ~800wpm.

It was a tongue-in-cheek suggestion to begin with (an amusing contrast to all the others saying 'turn money into time'), but modafinil has a unique claim to "buying time": it lets you function just as well and usually better than average, on less sleep. A more thorough analysis

It's been a while since I watched it, but do you think Ben Affleck's character in Good Will Hunting was rational, but of limited intelligence?

Yep, a pretty good example, I think

Look, you're my best friend so don't take this the wrong way, but if you're still living here in 20 years, still working construction, I'll fuckin' kill ya. Tomorrow, I'm gonna wake up and I'll be fifty, and I'll still be doing this shit. And that's alright, that's fine. But you're sitting on a winning lottery ticket and you're too scared to cash it in, and that's bullshit. Cau

... (read more)

Turn your money into time; that is, purchase modafinil.

Or, if the legality is an issue because you need e.g. security clearance for your job, adrafinil.
I'm not sure if stimulants are adequately described as turning money into time. They generally speed up time perception, meaning that you experience the same period of time as subjectively shorter. Sure, you get more stuff done in it, but still... The formulation is perhaps misleading.
Doesn't work for me. I feel much worse for about a week after using it.

With all capitalized words the list would start like this:

You know that feeling you get when you're coding, and you write something poorly and briefly expect it to Do What You Mean, before being abruptly corrected by the output? I think I just had that feeling at long distance.

From looking at the scripts, it appears first and last names (actually, all capitalised words, I think) were counted separately ("Neal: 11, Stephenson: 11" and "Munroe: 13, Randall: 11", etc) and first names were hand-edited out (which is why both Nassim and Taleb are on the list).

The answer is somewhere between "Nassim Taleb was quoted 16 times, and three of those times the attribution was just 'Taleb'" and "Nassim Taleb was quoted 13 times and was mentioned in three other quotes (since he's a controversial figure)".

Yes. To be exact, not all capitalized words, but all capitalized words that my English spellchecker does not recognize. With all capitalized words the list would start like this:

* 1523 I
* 1327 The
* 558 It
* 428 If
* 379 But

Of course the spellchecking method is itself a source of errors. Previous years I never felt like manually correcting these, but checking now it seems like these were the main victims:

* Graham 43
* Bacon 20
* Newton 18
* Franklin 18
* Shaw 17
* Silver 12
* Pinker 10

Graham is actually number one. I added them to this list, and also to the "Top original authors by karma collected" list. Not retroactively, though, just for 2013.
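The counting approach described above (tally capitalized words, skip the ones a spellchecker recognizes) can be sketched roughly like this; the `known_words` set and the sample text are hypothetical stand-ins for a real spellchecker dictionary and the actual quotes file:

```python
import re
from collections import Counter

def count_capitalized(text, known_words):
    """Tally capitalized words, skipping any word the 'spellchecker'
    (here just a set of known lowercase words) recognizes."""
    words = re.findall(r"\b[A-Z][a-z]+\b", text)
    return Counter(w for w in words if w.lower() not in known_words)

# Hypothetical sample; a real run would use the quotes file and a
# proper spellchecker dictionary.
known_words = {"the", "it", "if", "but", "i", "said", "quote", "wrote"}
sample = "Taleb said X. The Taleb quote. Newton wrote Y."
counts = count_capitalized(sample, known_words)
```

As the comment notes, this still counts first and last names separately, so merging "Nassim" into "Taleb" would remain a manual step.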

I think it's a bit unfair to the average physicist to say that he's closer in intelligence to the village idiot than to Einstein

The average physicist's contribution to physics is closer to the village idiot's contribution than to Einstein's, no?

Depends on whether you use a log scale.
Well, not really. I think it's a bit unfair to the average physicist to say that he's closer in intelligence to the village idiot than to Einstein, don't you think? Hence the average physicist should be much further to the right on your scale. Thus zooming in rather illustrates what I wanted to say - that productivity increases massively beyond a certain level of ability.
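Whether the average sits closer to the low end or the high end really can flip with the scale. A toy illustration with made-up productivity numbers (not actual data about physicists):

```python
import math

# Hypothetical productivity scores, chosen only to show that "closer to
# the village idiot or to Einstein" can flip with the scale.
idiot, average, einstein = 1, 1_000, 10_000

# On a linear scale, the average is far closer to the idiot...
linear_closer_to_idiot = (average - idiot) < (einstein - average)

# ...but on a log scale (orders of magnitude), closer to Einstein.
log_closer_to_idiot = (
    math.log10(average) - math.log10(idiot)
    < math.log10(einstein) - math.log10(average)
)
```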

Excellent in-group signalling but terrible public relations move.

We don't need or want to signal friendliness to absolutely everyone. We want to carefully choose what kind of filters and how many filters we apply to people who might be interested in our community. Every filter comes with a cost in that it reduces our growth, and must be justified through increasing the quality of our discussions. However, filter not at all, and you might as well just step out onto the street and talk to strangers. Personally, I am all for filtering out the "punish for not putting modesty before facts" attitude. Both because I find it irritating, and because it drives away boastful awesome people, and I like substantiated boasting and the people who do it. In other words, "Yeah, fuck 'em."
So is admitting to being an atheist, for example. Optimizing for public relations is rarely a good move.

Fair enough; drug use is a lot more public relations damaging than self-proclaimed high IQ.

And the same goes for recreational drug-use, no? If it's just in the survey like IQ is and we don't have a banner proclaiming it, the argument that it might make us look bad doesn't hold any water.

If you replace "smart" with "used drugs recreationally" you might see my point?

Actually, I don't think that rationality (as the CFAR mission) has much to do with using drugs recreationally; it does have something to do with being smart. You could have a CFAR that experiments with various mind-altering substances to see which of those improve rationality. That's not the CFAR that we have. I did a lot of QS PR. That means having a 2-hour interview where the journalist might pick 30 seconds of phrases that come on TV. I wouldn't have had any issue in that context with playing into a nerd stereotype. On the other hand, I wouldn't have said something that fits QS users into the stereotype of drug users.

The same problem you presumably have with someone external writing an article about how LW is a group of criminals: it makes us look bad.

You might not agree with self-proclaimed high IQ being a social negative, but most of the world does.

So? Fuck 'em.
I don't think the goal of LW is to be socially approved of by the average person. On the one hand, it's to grow people who might want to participate in LW. The fact that LW has many smart people in it could draw the right people into LW. On the other hand, it's to further the agenda of CFAR, MIRI and FHI. I don't think the world listens less to a programmer who wants to warn about the dangers of UFAI when the programmer proclaims that he's smart. It's very hard for me to see a media article that wouldn't describe CFAR as a bunch of people who think they are smart. If you write the advancement of rationality on your banner, that's something everyone is going to assume anyway. Having polled IQ data doesn't do further damage.
Depends on how loudly you self-proclaim it. It's not as if we had a Mensa banner on the frontpage or something.

The offence centered on the ableism of the slurs in particular; "You're free to use an insult I can't stand on things I don't respect, but I won't stand for use of it on things I do respect" doesn't sound like a standard policy; otherwise you'd feel comfortable using profanity in front of your parents, but only when talking about a group they don't respect.

They're interested in not gathering data that would cause someone to admit criminal behavior.

As far as I'm aware - and correct me if I'm wrong - drug use is not a crime (and by extension admitting past drug use isn't either). Possession, operating a vehicle under the influence, etc, are all crimes, but actually having used drugs isn't a criminal act.

There's also the issue of possible outsiders being able to say: "30% of LW participants are criminals!"

The current survey (hell, the IQ section alone) gives them more ammunition than they could possibly expend, I feel.

If one is known for using drugs, then every unusual claim he makes is dismissed as a literal pipe dream. It is a huge blow to authority.
How do you use a drug without possessing it at some point? Isn't admitting use of drugs a fortiori an admission of possession of drugs?
What the problem with someone external writing an article about how LW is a group who thinks they are high IQ?

really incredibly blunt

It's possible that it is too blunt. My instinct (calibrated on around half a hundred nights of conversation with Australian LessWrongers in person) says that it's not, though.

Good point. It might not even make sense to ask "Which culture of social interaction do you feel most at home with, Ask or Guess?".

  • Are you Ask or Guess culture?

I'm not culture.

In some social circles I might behave in one way, in others another way. In different situations I act differently depending on how strongly I want to communicate a demand.

P(Supernatural): 7.7 ± 22 (0E-9, .000055, 1) [n = 1484]

P(God): 9.1 ± 22.9 (0E-11, .01, 3) [n = 1490]

P(Religion): 5.6 ± 19.6 (0E-11, 0E-11, .5) [n = 1497]

I'm extremely surprised and confused. Is there an explanation for how these probabilities are so high?

Our universe came from somewhere. Can you be 100% sure that no intelligence was involved? If there was an intelligence involved, it would probably qualify as supernatural and as a god, even if it was something technically mundane (such as the author of the simulation we call reality, or an intelligent race that created our universe or tweaked the result, possibly as an attempt to escape the heat death of their universe). E.g. if you ask our community, "What are the odds that in the next million years humans will be able to create whole world simulations?", I suspect they'll answer "very high". For extra fun, you can wonder if the total number of simulated humans is expected to outnumber the total number of real humans.

Well, we apparently have 3.9% of "committed theists", 3.2% of "lukewarm theists", and 2.2% of "deists, pantheists, etc.". If these groups put Pr(God) at 90%, 60%, 40% respectively (these numbers are derived from a sophisticated scientific process of rectal extraction) then they contribute 6.3% of the overall Pr(God) requiring an average Pr(God) of about 3.1% from the rest of the LW population. If enough respondents defined "God" broadly enough, that doesn't seem altogether crazy.

If those groups put Pr(religion) at 90... (read more)
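For anyone wanting to check the arithmetic above, here it is spelled out; the per-group probabilities are the parent comment's admitted guesses, not survey results:

```python
# Back-of-the-envelope decomposition of the survey's mean Pr(God) of 9.1%.
# The per-group Pr(God) figures (90/60/40%) are rough guesses, not data.
fractions = [0.039, 0.032, 0.022]  # committed theists, lukewarm, deists etc.
pr_god = [0.90, 0.60, 0.40]

theist_contribution = sum(f * p for f, p in zip(fractions, pr_god))
rest_fraction = 1 - sum(fractions)
# Mean Pr(God) the remaining ~90.7% of respondents must report
# for the overall mean to come out at 9.1%.
rest_mean = (0.091 - theist_contribution) / rest_fraction  # ~3.1%
```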

I hope rationalist culture doesn't head in that direction.

Something like "I'm finding this conversation aversive, and I'm not sure why. Can you help me figure it out?" would be way more preferable. Something in rationalist culture that I actually do like is using "This is a really low-value conversation, are you getting any value? We should stop." to end unproductive arguments.

It seems that preferences must vary on this one. This one seems much more potentially problematical because it pulls the other into your (already aversive) emotional world. It can work if there is already a huge amount of rapport and intimacy but the other more independent request seems safer. I really do like whatever variants of the theme "Agree to d̶i̶s̶a̶g̶r̶e̶e̶ STFU" that can be made to work.

To the latter, your interlocutor says (or likely, thinks to themselves):

"Uh, actually, I was rather enjoying that conversation. I thought it had value. But I guess I was wrong; it seems you do not find me interesting, or think that I am annoying. That hurts."

Working as intended?

This is a horrible thing to do to a Guesser. (I agree denotatively, but...)

It took me almost six months from meeting a particular Guess person to realise this: the times I offended them clustered according to whether I was a soldier in their war, not by my actual actions.[0]

Lots of things, maybe most things you can do in a conversation are horrible things to do to a Guesser. I'm well above average for social skills plus a few points above LW average IQ and even I find it hard to navigate conversations with a Guesser (I swear I have better social skills... (read more)

What's your policy for interacting with Patrick? Do you get along? I have some of the same problems you describe about walking on eggshells around Guessers.

0: I could use ableist slurs (insane; crazy) freely to deride people, institutions, papers etc that argued for no gendered pay gap, for biological difference between race, etc. But it was a serious transgression to use the same slurs to describe people, institutions, or papers that argued for parapsychology, telepathy, etc.

"You're free to insult the things that I don't have much respect for, but not the things that I do respect" sounds like the standard policy of most humans, Guesser or not.

I recognise your concern acutely - I've had the same "one of those people who has poor social skills and yet wants me to behave more like them" - and I think stressing the "whenever you suspect you'd both benefit from them knowing" part of rule one much more seems like it would help a lot in that direction.

(It's cheap, not cheep)

Tell and Ask seem to be more compatible than Ask and Guess. I have no intuition for how compatible Tell and Guess are. I think Ask is cheaper for the teller than Guess is (in Guess, you have to formulate a plausible sentence that contains a subtle request, unless you want to force the receiver).

I really like the idea of Tell on a date; I think it's already somewhat present in the rationalist meetup I attend.

It's evidence that Guess is the Nash equilibrium that human cultures find. Consider that the Nash equilibrium in the Prisoner's Dilemma (and in the Iterated Prisoner's Dilemma with known fixed length) is both defect. It's a common theme in game theory that the Nash equilibrium is not always the best place to be.
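For reference, the dominance argument that makes mutual defection the equilibrium of the one-shot game (and, by backward induction, of the known-fixed-length iterated game) can be sketched with the standard illustrative payoffs:

```python
# One-shot Prisoner's Dilemma with the usual illustrative payoffs.
# Defecting is the best response to either move, so (D, D) is the Nash
# equilibrium even though (C, C) pays both players more.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    return max(["C", "D"], key=lambda my: PAYOFF[(my, their_move)])
```

Since defection is the best response to either move, the equilibrium leaves both players worse off than mutual cooperation would, which is the sense in which a Nash equilibrium like Guess "is not always the best place to be".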

I am going to attempt to summarise this, hopefully fairly. A warning, for anyone to whom it applies: a cis white male is going to try and say what you said, but better.

I am doing this because I think social justice / equality is 1) important, 2) often written with an extreme inferential distance.

Parentheses with "ed:" are my own addition, usually a steelman of the author's position or an argument they didn't make but could have, although sometimes a critique. They aren't what the author said.

This is inspired by Yvain's writing, in particular a

... (read more)
I found the original post pretty unreadable, so thank you for summarizing it. Upvoted for helpfulness.
86.2% of respondents to the 2012 survey were cis males, and 84.6% of respondents were non-Hispanic whites.
While this comment may be helpful, I would advise that you only read it after reading and trying to understand the original.

Not necessarily, and in the case of "avowed racists of Less Wrong" almost certainly not. The "biological realism" concept is that there are genetic and physiological differences split so sharply along racial lines ("carves reality at its joints") that it is correct to say that all races are not born equal. Proponents of this concept would claim it is obviously true, and they would also be called racists. These people could donate heavily to African charities out of sympathy for what is, in their eyes, the "bad luck" ... (read more)

but of sympathy for what is, in their eyes, the "bad luck" to be born a certain race

Or more to the point, sympathy for people with greater challenges than others, and finding that African charities, by targeting Africans, are more likely to target people with those challenges.

I made a $150 donation. I particularly like that effort has gone into making the workshops more accessible. I'm suggesting to my father that he should apply for the February workshop (I am very surprised to have ended up believing it will be worthwhile for him).

Thank you!

It's unfortunate that "calories in, calories out" and "saturated fats are bad" are both general medical consensuses (wow, that word is actually in dictionaries) - it seems very likely the first is true and the second false, but both issues have the same "medical consensus saying they're true vs fringe expert saying they're all wrong" dynamic.

Was not going to reply until I saw this is actually a month old and not more than three years, so you're in luck.

The Confessor claims to have been a violent criminal, and in Interlude with the Confessor we see the Confessor say this to Akon:

And faster than you imagine possible, people would adjust to that state of affairs. It would no longer sound quite so shocking as it did at first. Babyeater children are dying horrible, agonizing deaths in their parents' stomachs? Deplorable, of course, but things have always been that way. It would no longer be n

... (read more)

It seems plausible that Quirrel read the science books and isn't going to tell Harry anything reality-breaking, since he did a similar thing with the library - after telling Harry that Memory Charms are just filed under M, he says he's going to put some of his own special wards on the restricted section.

It could always be possible that Quirrel's "special wards" happen to let Harry through more easily, or allow Harry to browse the section more covertly, though I'd put odds of that fairly low given his mention of the situation to Minerva.
While I didn't realize Quirrel might be lying about his willingness to cooperate, the Memory books aren't in the restricted section, and we haven't heard anything about him sabotaging Harry's attempts at taking over the universe since chapter 95 (except for some sort of deception in 100/101... "what a fiasco" is not something you would hear Quirrel say to himself audibly and honestly). Maybe he was in Zombie mode until 99? We wouldn't know, since we haven't been in Harry's POV since 2 days after that chapter, although in his shoes I would have attempted contacting Quirrel as soon as possible.
While both the former and the latter are entirely plausible things for Quirrell to do, it is also worth noting that Quirrell would happily play the Role of a concerned tutor before McGonagall at this time. It would make her trust him more at a time when he may need to use her and other teachers on short notice to fulfil his own objectives, even if he doesn't intend to do anything about the Restricted Section at all.

Using it regularly is the most important thing by far. I don't use it anymore, the costs to starting back up seem too high (in that I try and fail to re-activate that habit), I wish I hadn't let that happen. Don't be me; make Anki a hardcore habit.

I think when restarting a deck after a long time it's important to use the delete button a lot. There might be cards that you just don't want to learn and it's okay to delete them. You could also gather the cards you think are really cool and move them into a new deck and then focus on learning that new deck.
Why not just restart from scratch with empty decks? It should be less daunting at first... My strategy to avoid losing the habit is having decks I care less about than others, so that when I stopped using Anki for a few weeks, I only had to catch up on the "important" decks first, which was less daunting than catching up with everything (I eventually caught up with all the decks, somewhat to my surprise). I'm also more careful than before in what I let in - if content seems too unimportant, it gets deleted. If it's difficult, it gets split up or rewritten. And I avoid adding too many new cards.