All of atucker's Comments + Replies

I think that crux is doing a lot of work in that it forces the conversation to be about something more specific than the main topic, and because it makes it harder to move the goal posts partway through the conversation. If you're not talking about a crux then you can write off a consideration as "not really the main thing" after talking about it.

What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.

When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.

Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make a decision "user xyz123 is abusing the voting mechanism" or "user xyz123 is a sockpuppet for user abc789", describe my case to other mods, and only after getting their agreement would I learn who "user xyz123" actually is.
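A minimal sketch of that kind of anonymization (the vote-tuple schema and function names here are made up for illustration, not the site's actual database layout):

```python
import secrets

def anonymize_votes(votes):
    """Replace user handles in vote records with stable random pseudonyms.

    `votes` is a list of (voter, target, vote_value) tuples -- a
    hypothetical schema, not LW's real one.  The handle-to-pseudonym
    mapping is returned separately so it can be kept sealed until the
    other mods agree a pseudonym should be revealed.
    """
    mapping = {}

    def pseudonym(handle):
        if handle not in mapping:
            mapping[handle] = "user_" + secrets.token_hex(4)
        return mapping[handle]

    anonymized = [(pseudonym(voter), pseudonym(target), value)
                  for voter, target, value in votes]
    return anonymized, mapping
```

The same handle always maps to the same pseudonym, so voting patterns stay analyzable while identities stay hidden.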

(But of course, getting the database...

"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.

My willingness to cross post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider it a bad thing because it meant that almost every post was gold.

In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, ...

I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we're going to have to figure out how to address.

Yes. This meetup is at the citadel.

My impression is that the OP says that history is valuable and deep without needing to go back as far as the big bang -- that there's a lot of insight in connecting the threads of different regional histories in order to gain an understanding of how human society works, without needing to go back even further.

The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.

This is cooperation. The hard part is in jumping out, and getting the other person to change games with you, not in whether or n...

I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.

That just means that the sanity waterline isn't high enough that casinos have no customers -- it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.

I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.

Hobbes uses a similar argument in Leviathan -- people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn't threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.

See the point about why it's weird to think that new affluent populations will work more on x-risk if current affluent populations don't do so at a particularly high rate.

Also, it's easier to move specific people to a country than it is to raise the standard of living of entire countries. If you're doing raising-living-standards as an x-risk strategy, are you sure you shouldn't be spending money on locating people interested in x-risk instead?

I quite agree that if all you care about is x-risk then trying to address that by raising everyone's living standards is using a nuclear warhead to crack a nut. I was addressing the following thing you said, which I think is clearly wrong: bringing everyone's living standards up will increase the pool of people who have the motive and opportunity to work on x-risk, and since the number of people working on x-risk isn't zero, that number will likely increase (say, by 2x) if the size of that pool increases (say, by 2x) as a result of making everyone better off. I wasn't claiming (because it would be nuts) that the way to get the most x-risk bang per buck is to reduce poverty and disease in the poorest parts of the world. It surely isn't, by a large factor. But you seemed to be saying it would have zero x-risk impact (beyond effects like reducing pandemic risk by reducing overall disease levels). That's all I was disagreeing with.

My guess is that Eli is referring to the fact that the EA community seems to largely donate to where GiveWell says to donate, and that a lot of the discourse is centered around a system of trying to figure out all of the effects of a particular intervention, weigh it against all other factors, and then come up with a plan of what to do, where said plan is incredibly sensitive to you being right about the prioritization, facts about the situation, etc. in a way that will cause you to predictably fail to do as well as you could, due to factors like lack of o...

It seems that "donate to a guide dog charity" and "buy me a guide dog" are pretty different w/r/t the extent that it's motivated cognition. EAs are still allowed to do expensive things for themselves, or even ask for support in doing so.

On second thought, the blind person's appeal might best be "buy your warm fuzzies here".

It seems easier to evaluate "is trying to be relevant" than "has XYZ important long-term consequence". For instance, investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.

Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will...

Any given asteroid will either be detected and deflected in time, or not. There is, to my understanding at least, no mediocre level of asteroid impact risk management which makes the situation worse, in the sense of outright increasing the chance of an extinction event. More resources could be invested for further marginal improvements, with no obvious upper bound.

Poverty and disease are more complicated problems. Incautious use of antibiotics leads to disease-resistant strains, or you give a man a fish and he spends the day figuring out how to ask you for another instead of repairing his net. Sufficient resources need to be committed to solve the problem completely, or it just becomes even more of a mess. Once it's solved, it tends to stay solved, and then there are more resources available for everything else, because the population of healthy, adequately-capitalized humans has increased.

In a situation like that, my preferred strategy is to focus on the end-in-sight problem first, and compare the various bottomless pits afterward.
Yes, most x-risk reduction will have to come about through explicit work on x-risk reduction at some point. It could still easily be the case that working on improving the living standards of the world's poorest people is an effective route to x-risk reduction. In practice, scarcely anyone is going to work on x-risk as long as their own life is precarious, and scarcely anyone is going to do useful work on x-risk reduction if they are living somewhere that doesn't have the resources to do serious scientific or engineering work. So interventions that aim, in the longish term, to bring the whole world up to something like current affluent-West living standards seem likely to produce a much larger population of people who might be interested in reducing x-risk and better conditions for them to do such work in.
I'm inclined to agree. A possible counterargument does come to mind, but I don't know how seriously to take it:

1. Global pandemics are an existential risk. (Even if they don't kill everyone, they might serve as civilizational defeaters that prevent us from escaping Earth or the solar system before something terminal obliterates humanity.)
2. Such a pandemic is much more likely to emerge and become a threat in less developed countries, because of worse general health and other conditions more conducive to disease transmission.
3. Funding health improvements in less developed countries would improve their level of general health and impede disease transmission.
4. From the above, investing in the health of less developed countries may well be related to x-risk.
5. Optional: asteroid detection, meanwhile, is mostly a solved problem.

Point 4 seems to follow from points 1-3. To me point 2 seems plausible; point 3 seems qualitatively correct, but I don't know whether it's quantitatively strong enough for the argument's conclusion to follow; and point 1 feels a bit strained. (I don't care so much about point 5 because you were just using asteroids as an easy example.)

Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.

Insofar as Utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it's less of a social feedback hit.

This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's al...

My guess is just that the original reason was that there were societal hierarchies pretty much everywhere in the past, and they wanted some way to have nobles/high-status people join the army and be obviously distinguished from the general population, and to make it impossible to be demoted far down enough so as to be on the same level. Armies without the officer/non-officer distinction just didn't get any buy-in from the ruling class, and so they wouldn't exist.

I think there's also a pretty large difference in training -- becoming an officer isn't just about skills in war, but also involves socialization to the officer culture, through the different War Colleges and whatnot.

You would want your noticing that something is bad to, in some way, indicate how to make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative "everything". If your classifier triggers on everything, it tells you less on average about any given thing.

My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:

Teacher recommendations and the essays that you submit to the colleges are also important in admissions, and are the main channel through which human capital not particularly captured by grades, and personal development, are signaled.

There are particularly known-to-be-good schools that colleges disproportionately admit students from, and for slightly different reasons than they admit students from other schools.

I basically compl...

Teacher recommendations and essays may be weak signals. Getting good recommendations depends in part on how appealing you are to teachers (in respects that are orthogonal to personal development). For example, people develop halos around people who are physically attractive, viewing them in more favorable terms along all dimensions. Some students get extensive coaching on their essays. I'm interested by this in juxtaposition with the fact that you got into Harvard. If you'd be willing to email me with some more details about your personal profile (e.g. high school grades, test scores and extracurricular achievements) I'd very much appreciate it. You can reach me at

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating...

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time / money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat. You can come up with costs - social, personal, etc. to being vegetarian - but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.

Political instrumental rationality would be about figuring out and taking the political actions that would cause particular goals to happen. Most of this turns out to be telling people compelling things that you know and they happen not to, and convincing different groups that their interests align (or can align in a particular interest) when it's not obvious that they do.

Political actions are based on appeals to identity, group membership, group bounding, group interests, individual interests, and different political ideas in order to get people to shi...

This distinction is just flying/not-flying.

Offense has an advantage over defense in that defense needs to defend against more possible offensive strategies than offense needs to be capable of doing, and offense only needs one undefended plan in order to succeed.

I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones just as offensively helpful as missiles. Not flying additionally can have more energy and matter supplying whatever it is that it's doing than flying, which allows for more exotic sensing and destructive capabilities.

Also, what's offense and what's defense? Anti-aircraft artillery (effective against drones? I think current air drones are optimized for use against low-tech enemies w/ few defenses) is a "defense" against 'attack from the air', but 'heat-seeking AA missiles', 'flak guns', 'radar-guided AA missiles' and 'machine gun turrets' are all "offenses" against combat aircraft where the defenses are evasive maneuvers, altitude, armor, and chaff/flare decoys. In WWI, defenses (machine guns and fortifications) were near-invincible, and killed attackers without time for them to retreat. I think that current drones are pretty soft and might even be subject to hacking (I seem to remember something about unencrypted video?) but that would change as soon as somebody starts making real countermeasures.

Almost certainly, but the point that stationary counter-drones wouldn't necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.

I see. The existence of the specific example caused me to interpret your post as being about a specific method, not a general strategy. To the strategy, I say: I've heard that defense is more difficult than offense. If the strategy you have defined is basically that original drones are offensive and counter-drones are defensive (to prevent them from attacking, presumably), then if what I heard was correct, this would fail. If not at first, then likely over time as technology advanced and new offensive strategies are used with the drones. I'm not sure how to check whether what I heard was true, but if defense worked that well, we wouldn't have war.

I think that if you used an EMP as a stationary counter-drone you would have an advantage over drones in that most drones need some sort of power/control in order to keep on flying, and so counter-drones would be less portable, but more durable than drones.

Is there not a way to shield combat drones from EMP weapons? I wouldn't be surprised if they are already doing that.

From off site:

Energy and Focus is more scarce than Time (at least for me), Be Specific (somewhat on site, but whatever),

From on the site:

Mind Projection Fallacy, Illusion of Transparency, Trivial Inconveniences, Goals vs. Roles, Goals vs. Urges

Fair, but at least some component of this working in practice seems to be a status issue. Once we're talking about awesomeness and importance, and the representativeness of a person's awesomeness and the importance of what they're working on, and how different people evaluate importance and awesomeness, it seems decently likely that status will come into play.

Good point, I did summarize a bit fast.

There are two issues at hand: one is asserting that you're doing something that's high status within your community, and the other is asserting that your community's goals are more important (and higher status) than the goals of the listener's community.

If there's a large inferential distance in justifying your claims of importance, but the importance is clear, then it's difficult to distinguish you from say, cranks and conspiracy theorists.

(The dialogues are fairly unrealistic, but I'm trying to gesture at the pattern.)

A within culture iss...

I entirely agree with this point, but suspect that actually following this advice would make people uncomfortable.

Since different occupations/goals have some amount of status associated with them (nonprofits, skilled trades, professions) many people seem to take statements about what you're working on to be status claims in addition to their denotational content.

As a result, working on something "outside of your league" will often sound to a person like you're claiming more status than they would necessarily give you.

Are you sure? How can you easily tell that something is out of someone's league? I can imagine that if you talk to someone at a party it is more impressive to say that you work in rocket surgery than it is to say that you work as a carpenter. Even though you might be lousy at the first and great at the second.
Beware using status as a universal explanation.
That is Terrible!!! I never thought of that. But yes, it does sound very much like that is the case. What can we do? I mean, sincerely, how can we get people to work on what matters for them even if they don't think they have the status for it?

Textbooks replace each other on clarity of explanation as well as adherence to modern standards of notation and concepts.

Maybe just cite the version of an experiment that explains it the best? Replications have a natural advantage because you can write them later when more of the details and relationships are worked out.

If I were in London, or even within an hour or two of it, I would try to go to this.

Ben Pace:
Us poor folk in the north of England. :(

"May your plans come to fruition"

I used to say that more when leaving megameetups or going on a trip or something. It has the disadvantage that you can't say it very fast.

I also want a word/phrase that expresses sympathy but isn't "sorry".

In writing, I just use “:-(”.
I often say "My sympathies" when I want to express my sympathy but don't want any possibility of being understood as apologizing.

Entirely agreed. Even if you more often than not get the same answers from fMRI and surveys, the fMRI externalizes the judgment of whether or not someone is empathizing, or in an emotional or cognitive state, with regard to something else.

One might argue that we probably have a decent understanding of how well people's verbal statements line up with different facts, but where this diverges from the neurological reality is interesting enough to be spending money on the chance of finding the discrepancies. If we don't find them, that's also fascinating, and is worth knowing about.

Even taking for granted that what people say about themselves is accurate, externalized measurement is also worthwhile for its own sake.

I think it would probably be worth going into a bit more about what delineates tacit rationality from tacit knowledge. Rationality seems to me to apply to things that you can reflect about, and so the concept of things that you can reflect about but can't necessarily articulate seems weird.

For instance, at first it wasn't clear to me that working at a startup would give you any rationality-related skills except insofar as it gives you instrumental rationality skills, which could possibly just be explained as better tacit knowledge -- you know a bajillion m...

When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias.

An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about -- he'll plan and reflect on various movie-related strategies so that he can get progressively better roles and box office receipts.

For instance, before he started acting in movies, he and his agent thought about what top-grossing movies all had in common, and then he focused on getting roles in those kin...

An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about

In the same vein, I've been impressed by Greene's account of 50 Cent in the book "The 50th Law". If that's really 50's way of thinking, it's brutally rational and impressively strategic.

Marginal effort within the bounds of a consulting agency offering a service "tailored" to each school district.

I think the hard part of refitting the model would probably just be getting access to the data -- beyond that it seems like a statistician or programmer would be able to just tell a computer how to minimize some appropriate cost function.

Something like most of the marginal effort is devoted to gathering the data, which presumably doesn't require that much expertise relative to understanding the model in the first place.
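The "tell a computer how to minimize some appropriate cost function" step really can be that mechanical in the simplest case. As a toy illustration only (a real value-added model is far richer than a line fit):

```python
def fit_line(xs, ys):
    """Closed-form least squares: choose slope a and intercept b
    minimizing sum((y - (a*x + b))**2) over the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b
```

For example, `fit_line([0, 1, 2], [1, 3, 5])` recovers slope 2 and intercept 1; refitting to a new district's data is just re-running the same minimization on different inputs.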

In practice, there are substantial privacy law issues, although those can be gotten around if the district is clever. More importantly, collecting, collating, and ensuring coder reliability is expensive. What you called "marginal effort" is quite difficult for just about any large bureaucracy.

Maybe slightly vary the parameters to make the model "new"? Like, fit it to data from that district, and it will probably be slightly different from "other" models.

But that requires effort, and school districts don't generally want to put in effort to do things differently.

Has anyone published data on the effectiveness of Bayesian prediction models as an educational intervention? It seems like that would be very helpful in terms of being able to convince school districts to give them a shot.

ThinkOfTheChildren: Quite a few things there. SAS's EVAAS is generally considered the gold standard of Bayesian prediction models as educational interventions; unfortunately, as SAS is based in North Carolina, it has yet to spread outside that particular state. Some states have similar systems being produced by similar companies. Particularly, if I were you I would read:
Most of the discussions I've happened to see focus on [...], because being able to judge teachers is directly useful to school districts and lots of outsiders are interested in the topic. Since the usual approach uses a multilevel model (you need to adjust for school-level effects, district-level effects, etc. before you can extract a usable teacher-level effect), it's almost Bayesian by default, and if you google 'bayesian value-added modeling' you'll find a ton of material.
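The adjust-for-higher-level-effects idea can be shown in miniature. This is a crude residual-averaging stand-in, not an actual Bayesian multilevel model, and the record shape is made up:

```python
from collections import defaultdict

def teacher_effects(records):
    """Average each teacher's residual after subtracting the school mean.

    `records` is a list of (school, teacher, score) tuples (a
    hypothetical shape).  A real value-added system fits a multilevel
    regression with priors and district-level terms; this only sketches
    the core idea of removing school-level effects before comparing
    teachers.
    """
    by_school = defaultdict(list)
    for school, _, score in records:
        by_school[school].append(score)
    school_mean = {s: sum(v) / len(v) for s, v in by_school.items()}

    residuals = defaultdict(list)
    for school, teacher, score in records:
        residuals[teacher].append(score - school_mean[school])
    return {t: sum(v) / len(v) for t, v in residuals.items()}
```

A teacher whose students score above their school's average gets a positive effect even if the school as a whole scores low, which is the point of the adjustment.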
My experience is that school districts have a strong not-invented-here bias. For example, special education laws require research based interventions, a requirement that is generally ignored.

Same. I'd be interested in trying this for a bit starting after mid-May.

I'd be interested as well.
Me also.

It's somewhat tricky to separate "actions which might change my utility function" from "actions". Gandhi might not want the murder pill, but should he eat eggs? They have cholesterol that can be metabolized into testosterone which can influence aggression. Is that a sufficiently small effect?

Any kind of self-modification would be out of the question until the AGI has solved the problem of keeping its utility function intact. That being said, it should be easier for an AGI to keep its code unmodified while solving a problem than for Gandhi not to eat.

A lot of Herodotus' histories have interesting stories about people exhibiting and not exhibiting ancient Greek virtues.

Though, the other stuff in the post, and his other comments on the thread, really make it seem to me to be related to the house rather than to him, or his friends.

Given that Johnny Depp appears to be on the Singularity side (as the uploaded human), I suspect that they'll be portrayed sympathetically, even if the ending isn't exactly happy.

I think that the nutritional value of the food, or at least the perceived nutritional value of the food, also plays a role in how quickly you start liking it. I've started liking raw beef liver and fish oil after waaaay fewer tries than say, ceviche.

Is the rawness of the liver motivated by nutritional or gustative reasons? How do you prepare it?

So given some data, to determine the relative probability of two competing hypotheses, we start from the ratio of their prior probabilities, and then multiply by the ratio of their likelihoods. If we restrict to hypotheses which make predictions "within our means"---if we treat the result of a computation as uncertain when we can't actually compute it---then this calculation is tractable for any particular pair of hypotheses.
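The calculation described here (posterior odds = prior odds times the likelihood ratio) is small enough to write down directly; the function name is mine:

```python
def posterior_odds(prior_odds, likelihood_h1, likelihood_h2):
    """Bayes' rule in odds form: the posterior odds of H1 over H2
    equal the prior odds times the likelihood ratio
    P(data | H1) / P(data | H2)."""
    return prior_odds * (likelihood_h1 / likelihood_h2)
```

For instance, prior odds of 1:4 against H1, combined with data that H1 predicts with probability 0.8 and H2 with probability 0.1, give posterior odds of about 2:1 in favor of H1.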


When two people disagree about the relative complexity of two hypotheses, it must be because that hypothesis is simpler

...

From what I understand, Watson is more supposed to do machine learning and question answering in order to do something like make medical diagnoses based on the literature.

MetaMed tries to evaluate the evidence itself, in order to come up with models for treatment for a patient that are based on good data and an understanding of their personal health.

They both involve reviewing literature, but MetaMed is actually trying to ignore and discard parts of the literature that aren't statistically/logically valid.

Upvoted, but I'm a bit confused as to what we're trying to refer to with "spam".

If by spam we mean advertising, yes. Definitely.

If by spam we mean undesirable messaging that lowers the quality of the site, then I would think that this is very much not spam.

If this startup was not associated with MIRI I would downvote it; there are lots of great startups but this is not the place to advertise them.

Some people (myself included) use "spam" to refer to any kind of advertising in a public setting, e.g. you might preface an email sent out to multiple mailing lists as "sorry for the spam, guys, but..." even if it's a valuable and high-quality email. The connotation, to me, is mildly self-deprecating rather than strictly negative.

It's in some weird-to-link-to Facebook format.

Basically, it's the same as the edge essay, but you should replace the last paragraph with...

Robert Altemeyer's research shows that for a population of authoritarian submissives, authoritarian dominators are a survival necessity. Since those who learn their school lessons are too submissive to guide their own lives, our society is forced to throw huge wads of money at the rare intelligent authoritarian dominants it can find, from derivative start-up founders to sociopathic Fortune 500 CEOs. However, with their

...
To be fair, it might be easier to protect someone that the top authorities aren't gunning for. Hiding Aaron Swartz until the federal case blew over wasn't an option.

I more or less agree with your reading of this essay, but it misses an important point that the edited version on Edge leaves out -- in the original version, he compared the friends of Aaron Swartz with the friends of someone Michael knows.

Basically, when she was institutionalized against her will, her low-status, relatively poor friends helped break her out of the mental hospital and hide her until the police chase blew over. In contrast, when placed in a legal battle Aaron Swartz wasn't able to rely on his much smarter, wealthier, and in almost every way...

Can you link to the unedited essay? Or is it not available?

I think it's pretty possible to macro-optimize successfully and still lose. All you have to do is know what to do and not how to do it.
