All of Louie's Comments + Replies

In Silicon Valley. With a group of people who know about LessWrong but are dubious about its instrumental value.

Truth: It's Not That Great

2009: "Extreme Rationality: It's Not That Great"

2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"

2013: "How about testing our ideas?"

2014: "Truth: It's Not That Great"

2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"

2016: "In Defense Of Putting Babies In Wood Chippers"

2016: "In Defense Of Putting Babies In Wood Chippers"

Heck, I could write that post right now. But what's it got to do with truth and such?

Request for concrete AI takeover mechanisms

Yes. I assume this is why she's collecting these ideas.

Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".

In general MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do by incorrectly focusing their attention on highly salient, but ultimately unlikely scenarios.

3oooo7yOP: >>So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance. Louie: >>Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in". These two statements contradict each other. If it's true that Katja doesn't speak for all of MIRI on this issue, perhaps MIRI has a PR issue and needs to issue guidance on how representatives of the organization present public requests. When reading the parent post, I concluded that MIRI leadership was on-board with this scenario-gathering exercise. EDIT: Just read your profile and I realize you actually represent a portion of MIRI leadership. Recommend that Katja edit the parent post to reflect MIRI's actual position on this request.
3jimrandomh7yLouie, there appears to be a significant divergence between our models of AI's power curve; my model puts p=.3 on the AI's intelligence falling somewhere in or below the human range, and p=.6 on that sort of AI having to work on a tight deadline before humans kill it. In that case, improvements on the margin can make a difference. It's not nearly as good as preventing a UFAI from existing or preventing it from getting Internet access, but I believe later defenses can be built with resources that do not funge.
Request for concrete AI takeover mechanisms

Then you should reduce your confidence in what you consider obvious.

Request for concrete AI takeover mechanisms

So MIRI is interested in making a better list of possible concrete routes to AI taking over the world.

I wouldn't characterize this as something that MIRI wants.

4lukeprog7yI guess we should have clarified this in the LW post, but I specifically asked Katja to make this LW post, in preparation for a project proposal blog post to be written later. So, MIRI wants this in the sense that I want it, at least.
0Said Achmiz7yAre you associated with MIRI? Edit: I didn't read further down, where the answer is made clear. Sorry, ignore this.
0NoSuchPlace7yAre you saying this is something which MIRI considers actively bad, or are you just pointing out that this is something which is not helpful for MIRI? While I don't see the benefit of this exercise I also don't see any harm, since for any idea we come up with here, someone else would very likely have come up with it before if it were actionable for humans.
0jimrandomh7yIt seemed pretty obvious to me that the point of making such a list was to plan defenses.
One Medical? Expansion of MIRI?

To clarify, One Medical partnered with us on this event... but are not materially involved with expanding MIRI themselves. They're simply an innovative business near us in Berkeley who wanted to support our work. I know it's somewhat unprecedented to see MIRI with strong corporate support, but trust me, it's a good thing. One Medical's people did a ton of legwork and made it super easy to host over 100 guests at that event with almost no planning needed on our part. They took care of everything so we could just focus on our work. A perfect partnership in... (read more)

Book Review: Naïve Set Theory (MIRI course list)

Thanks. That was what I thought, but I haven't read Causality yet.

2Alex_Altair8yThe material covered in Causality is more like a subset of that in PGM. PGM is like an encyclopedia, and Causality is a comprehensive introduction to one application of PGMs.
0cousin_it8yI haven't read PGM. Maybe you could ask Ilya Shpitser, he knows this stuff much better than I do.
Effective Altruism Through Advertising Vegetarianism?

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me happier or healthier sufficiently to earn and donate an extra indulgence of $5." There are some people going around making the claim, based on the extreme low-ball cost estimates.

Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on less wrong.

Young Americans believe they have the best health in the world...

Yep, you're right. I've never used the Open Threads so I didn't know that. Thanks.

Unfortunately, the Open Thread is rather difficult to find. You must know it exists, because it quickly gets lost among the new articles -- at least a third of which should be better placed in the Open Thread. So the problem makes itself worse. Unless someone reminds other people to use it... which always feels like starting a conflict with the author; and there are not even obvious guidelines. So thanks for not getting offended.

Young Americans believe they have the best health in the world...

Americans can only report their health derivative (dx/dt) :)

Young Americans believe they have the best health in the world...

A lot of the most unhealthy groups in the US are also poor and somewhat outside the reach of casual academic sampling.

I assumed that at first too. It turns out even removing the poor or minorities from the sample doesn't fix this gap.

Young Americans believe they have the best health in the world...

I guess the study used the modifier "wealthy" along with developed to explain their choice of reference class. I looked at the list and it didn't seem obviously cherry picked. What countries would you add?

2Nornagest8yThe proper reference class probably depends on what this is being used as a proxy for: the ratio of self-described health to medical quality-of-life metrics is an odd enough figure that I assume it's being used as a proxy for something. If we're looking for degree of overconfidence in health care efficacy, which seems like the most likely candidate, using the first N countries ranked by per-capita health care spending might be the way to go: that gives you a list that's not too dissimilar from the one in the article, although some of the details are different. That being said, once you actually start getting into the statistics, the US ends up in the middle of the rankings for most categories of disease and accident -- it's obesity-linked diseases, automotive accidents, and violence where it really shines. All of which isn't too much of a surprise, but I don't know if it's much of an indictment of the American health care system on its own. (There are some odd features buried in there, though. For example, the US is ranked highly in deaths from chronic obstructive pulmonary disease and lung cancer, both correlates of smoking -- but it's middling-to-low in deaths from other cancers, indicating good oncology, and has a fairly low smoking rate. Air pollution's also low. I have no idea what's causing this.)
7TrE8yWithout looking at the data, of course.
Young Americans believe they have the best health in the world...

The guts of the study lists one (of many) possible causes:

"getting health care depends more on the market and on each person’s financial resources in the U.S. than elsewhere".

Insurance companies should point out to their detractors that they provide a valuable service by making healthcare so inaccessible that Americans no longer have any idea how they're doing. And that given this absence of knowledge, Americans assume they're doing great.

0[anonymous]8yThis quote from the article points out causes of poor health, not of poor self-evaluation skills. Do you believe that people below 34 answer based on knowledge derived from healthcare services? I don't think teenagers visit their doctor very often.
Generic Modafinil sales begin?

I received a letter telling me in no uncertain terms that if [US Customs] found another shipment of modafinil addressed to me, they would prosecute me as a drug smuggler.

You mean something like this? That's not really as meaningful as it seems. There is always some legal risk associated with doing anything since there are so many US laws that no one has even managed to count them, but a pretty serious search through legal databases turns up no records of people being prosecuted for modafinil importation, ever. So that letter is 100% posturing by US Custo... (read more)

'Life exists beyond 50'

Yeah, don't be discouraged. LW is just like that sometimes. If you link to something with little or no commentary, it really needs to be directly about rationality itself or be using lots of LW-style rationality in the piece. This was a bit too mainstream to be appreciated widely here (even in discussion).

Glad to see you're posting though! You still in ATL and learning about FAI? I made a post you might like. :)

-3hankx77879yLW needs just a generic link-submission area to give it some of that reddit functionality it's missing. Maybe have a third area besides main and discussion which is just for reddit-style img/link submission... I guess I should add, I assumed life extension and such to be entirely on-topic here. It's kind of an obvious, major interest to any rationalist. To those who have no idea what I'm talking about, go read HPMOR a couple more times or something, jesus. Also, good list, thanks. I actually don't know anything about functional programming, I'm going to look that up today.
Course recommendations for Friendliness researchers

Just to clarify, I recommend the book "Probability and Computing" but the course I'm recommending is normally called something along the lines of "Combinatorics and Discrete Probability" at most universities. So the online course isn't as far off base as it may have looked. However, I agree there are better choices that cover more exactly what I want. So I've updated it with a more on-point Harvard Extension course.

The MIT and CMU courses both cover combinatorics and discrete probability. They are probably the right thing to take or very close to it if you're at those particular schools.

Thanks again for the feedback Klao.

Course recommendations for Friendliness researchers

Yep, SI has summer internships. You're already in Berkeley, right?

Drop me an email with the dates you're available and what you'd want out of an internship. My email and Malo's are both on our internship page:

Look forward to hearing from you.

Course recommendations for Friendliness researchers

Well, I figure I don't really want to recommend a ton of programming courses anyway. I'm already recommending what I presume is more than a bachelor's degree's worth of courses when pre-reqs and outside requirements at these universities are taken into account.

So if someone takes one course, they can learn so much more that helps them later in this curriculum from the applied, functional programming course than its imperative counterpart. And the normal number of functional programming courses that people take in a traditional math or CS program is 0. So I have... (read more)

Course recommendations for Friendliness researchers

Ahh. Yeah, I'd expect that kind of content is way too specific to be built into initial FAI designs. There are multiple reasons for this, but off the top of my head,

  • I expect design considerations for Seed AI to favor smaller designs that emphasize only essential components, both for superior ability to show desirable provability criteria and for improved design timelines.

  • All else equal, I expect that the less arbitrary decisions or content the human programmers provide to influence the initial dynamic of FAI, the better.

  • And my broadest answer is

... (read more)
Course recommendations for Friendliness researchers

I don't think those courses would impoverish anyone's minds. I expect people to take courses that aren't on this list without me having to tell people to do that. But I wouldn't expect courses drawn from these subjects to be mainstream recommendations for Friendliness researchers who were doing things like formalizing and solving problems relating to self-referencing mathematical structures and things along those lines.

3NancyLebovitz9yWhat I was thinking was "would you expect a FAI to do its own research about what it needs to for people to be physically safe enough, or should something on the subject be built in?"
Course recommendations for Friendliness researchers

Good question. If I remember correctly, Berkeley teaches from it and one person I respect agreed it was good. I think the impenetrability was considered more of a feature than a bug by the person doing the recommending. IOW, he was assuming that people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Part of my motivation for posting this here was to improve my recommendations. So I'm happy to change the rec to something more accessible if we can crowd-source something like a consensus best choice here on LW that's still good for the smartest readers.

9John_Maxwell9yMy recollection of Berkeley's discrete math course (compsci-oriented version that also emphasizes probability heavily) was that it was taught mostly from some pretty nice lecture notes. Looks like the lecture notes from the Fall 2012 compsci version of the course are available for download from the course website. It occurred to me the other year that lecture notes could be a good way to learn things in general, or at least pick up the basics of a subject: * There's no incentive for instructors to pad them to appease publishers. * Unlike textbooks or Wikipedia, they're being updated constantly based on what seems to work for explaining concepts most effectively. * They're often available freely for download. (Search for yoursubject lecture notes on Google, OCWs, etc.) * They're probably written to communicate understanding rather than achieve perfect rigor. * If you know how long the corresponding lecture took, you can set a timer for that length of time (or 0.8 times as long, or whatever) and aim to get through the lecture notes before the timer rings (taking notes if you want; keep glancing at the timer and your progress to calibrate your speed). This is a pretty good motivational hack, in my experience. I like it better than attending an actual lecture because I can dive deeper or skim over stuff as I like, so it's kind of like a personalized version of the lecture. I don't worry if I end up skimming over some stuff and not understanding it perfectly--I don't understand everything in a "real" lecture either (much less, really). * They break the course material into nice manageable chunks: if the class covered one set of notes per day, for instance, you could do the same in your self-study. (Don't Break the Chain and BeeMinder come to mind as macro-level motivational
8VincentYu9yYou must have misremembered; Rosen's text is very verbose and clear. It will certainly be an elementary text for any second-year or higher math undergraduate. The book is partly designed to be a crash course in math for CS students, so there are introductory chapters on proofs, logic, sets, etc., with plenty of examples. Exercises are mainly drills and computations, with a smaller proportion of exercises on proofs. As for the bad Amazon reviews—many students using this textbook will be encountering mathematical proofs for the first time, so frustration from some of the students is to be expected. I don't think the reviews are representative of the book. All in all, the text serves its purpose well as an introductory math book for CS undergraduates. I think its greatest downfall is its excessive verbosity—there is so much redundancy that examples take up around half the book.

[he was assuming that] people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Is this actually true? My current guess is that even though for a given level of training, smarter people can get through harder texts, they will learn more if they go through easier texts first.

9pragmatist9yI haven't read multiple discrete math textbooks, so I can't make a comparative judgment, but I can confirm that Concrete Mathematics is a delightful and useful text. Also, while Sipser's book is great for theoretical computer science, I think Moore's The Nature of Computation is much better, at least in terms of being fun to read.
Course recommendations for Friendliness researchers

The functional/imperative distinction is not a real one

How is the distinction between functional and imperative programming languages "not a real one"? I suppose you mean that there's a continuum of language designs between purely functional and purely imperative. And I've seen people argue that you can program functionally in python or emulate imperative programming in Haskell. Sure. That's all true. It doesn't change the fact that functional-style programming is manifestly more machine checkable in the average (and best) case.

it's less imp

... (read more)

How is the distinction between functional and imperative programming languages "not a real one"?

"Not a real one" is sort of glib. Still, I think Jim's point stands.

The two words "functional" and "imperative" do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the "cultures" and "mindsets" of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note... (read more)

2jimrandomh9yNot exactly. There is a functional/imperative distinction, but I don't think it's located in the language; it's more a matter of style and dialect. The majority of the difference between functional style and imperative style is in how you deal with collections. In functional style, you use map, filter, fold and the like, mostly treat them as immutable and create new ones, and use a lot of lambda expressions to support this. In imperative style, you emulate the collection operators using control flow constructs. Most major programming languages today support both styles, and the two styles act as dialects. (The main exceptions are non-garbage-collected languages, which can't use the functional style because it interacts badly with object ownership, and Java, which lacks lambda expressions as a symptom of much bigger problems). These styles are less different than they appear. A lot of use of mutation is illusory; it matches to a palette of a dozen or so recognizable patterns which could just as easily be written in functional form. In fact, ReSharper can automatically translate a lot of C# between these two styles, in a provably-correct fashion; and if you want to prove complicated things about programs, the infrastructure to make that sort of thing easy is part of the price of admission. But there's a catch. Programmers who start in a functional language and avoid the imperative style don't learn the palette of limited mutations, and to them, imperative-style code is more inscrutable than to programmers who learned both styles. And while I'm much less certain of this, I think there may be an order-dependence, where programmers who learn imperative style first and then functional do better than those who learn them in reverse order. And I don't think it's possible to get to the higher skill levels without a thorough grasp of both.
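
[Editor's note: the two dialects described in this thread can be contrasted with a toy example. This is an illustrative Python sketch, not code from the thread: summing the squares of the even numbers in a list, once with control flow and mutation, once with filter/map/fold.]

```python
from functools import reduce

nums = [1, 2, 3, 4, 5, 6]

# Imperative dialect: mutate an accumulator using control flow.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Functional dialect: filter, map, and fold over the sequence,
# never mutating anything.
total_fn = reduce(
    lambda acc, sq: acc + sq,
    map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)),
    0,
)

assert total == total_fn == 56  # evens 2, 4, 6 -> 4 + 16 + 36
```

As the comment notes, the two versions are mechanically intertranslatable; the difference is in which patterns the programmer reaches for first.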
6Kawoomba9yPretty much by definition all (Turing-complete) programming languages can in principle be transformed into each other, it's not even very hard: just take the ASM code and build some rudimentary reverse compiler that creates some strange looking code in your desired goal language. For practical purposes, machine checkability is easier for functional languages, but it's a difference in degree, not one in kind. (Corrections welcome!)
-1OrphanWilde9yFunctional-style programming doesn't make it any more natural, it just forbids you from doing things any other way. I spend most of my time when dealing with functional-style programming (primarily in XSLT) trying to figure out ways around the constraints imposed by the language rather than actually solving the problem I'm working on. In XSLT I once copied a chunk of code 8 times, replacing its recursive function calls with its own code, because it was blowing up the stack; and it's not like I could use mutable variables and skip the recursion, it was literally the only implementation possible. And it had to call itself in multiple places of its own code; it couldn't be phrased in a tail-recursion friendly fashion. Meaning that for that code, no functional language could have resolved the stack explosion issue. -That's- functional programming to me; dysfunctional. [ETA: Apparently there is a pattern which would overcome the tail stack issue, and compilers exist which can take advantage of it, so my statement that "No functional language could have resolved the stack explosion issue" was false.]
Course recommendations for Friendliness researchers

But I'm not sure where that is best covered.

Yeah, universities don't reliably teach a lot of things that I'd want people to learn to be Friendliness researchers. Heuristics and Biases is about the closest most universities get to the kind of course you recommend... and most barely have a course on even that.

I'd obviously be recommending lots of Philosophy and Psychology courses as well if most of those courses weren't so horribly wrong. I looked through the course handbooks and scoured them for courses I could recommend in this area that wouldn't steer ... (read more)

-1Halfwitz8yOne hack for this would be to roll the blogposts into an ebook. A small change in title and presentation can make a big difference in terms of perception.

Believe me, Luke and I are sad beyond words every day of our lives that we have to continue recommending people read a blog to learn philosophy and a ton of other things that colleges don't know how to teach yet. We don't particularly enjoy looking crazy to everyone outside of the LW bubble.

This doesn't look as bad as it looks like it looks. Among younger mathematicians, I think it's reasonably well-known that the mathematical blogosphere is of surprisingly high quality and contains many insights that are not easily found in books (see, for example, Fie... (read more)

Course recommendations for Friendliness researchers

PS - I had some initial trouble formatting my table's appearance. It seems to be mostly fixed now. But if an admin wants to tweak it somehow so the text isn't justified or it's otherwise more readable, I won't complain! :)

3bgaesop9yInteresting list. Minor typo: "This is where you get to study computing at it's most theoretical," the "it's" should read "its".
Bounding the impact of AGI

I believe Coq is already short and proven using other proving programs that are also short and validated. So I believe the tower of formal validation that exists for these techniques is pretty well secured. I could be wrong about that though... would be curious to know the answer to that.

Relatedly, there are a lot of levels you can go with this. For instance, I wish someone would create other programming languages like CompCert for programming formally validated programs.

Bounding the impact of AGI

Martel (1997) estimates a considerably higher annualized death rate of 3,500 from meteorite impacts alone (she doesn’t consider continental drift or gamma-ray bursts), but the internal logic of safety engineering demands we seek a lower bound, one that we must put up with no matter what strides we make in redistribution of food, global peace, or healthcare.

Is this correct? I'd expect that this lower-bound was superior to the above (10 deaths / year) for the purpose of calculating our present safety factor... unless we're currently able to destroy earth-threatening meteorites and no one told me.

1DaFranker9yWell, we do have the technological means to build something to counter one of them, if we were to learn about it tomorrow and it had ETA 2-3 years. Assuming the threat is taken seriously and more resources and effort are put into this than they were / are in killing militant toddlers in the middle-east using drones, that is. But if one shows up now and it's about to hit Earth on the prophecy-filled turn of the Mayan calendar? Nope, GG.
Bounding the impact of AGI

To paraphrase Kornai's best idea (which he's importing from outside the field):

A reasonable guideline is limiting the human caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.

I like this idea (as opposed to foolish proposals like driving risks from human made tech down to zero), but I expect someone here could sharpen the xrisk level that Kornai suggests. Here's a disturbing note from the appendix where he d... (read more)

3timtyler9yWe are well above there right now - and that's very unlikely to change before we have machine superintelligence.
0Louie9yIs this correct? I'd expect that this lower-bound was superior to the above (10 deaths / year) for the purpose of calculating our present safety factor... unless we're currently able to destroy earth-threatening meteorites and no one told me.
Thoughts on the Singularity Institute (SI)

Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management

FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational i... (read more)

Popular media coverage of Singularity Summit -the Verge [link]

I didn't notice any factual inaccuracies

Although, multiple quotes were manufactured and misattributed.

Cleaning up the "Worst Argument" essay

I preferred the original version that appeared on your private website.

Once you sanitized it for LW by making it more abstract and pedantic, it lost many of the most biting, hilarious asides that made it fun and entertaining to read.

9Vaniver9yThe issue is that the asides are only biting and hilarious if you already agreed with them. When Yvain writes a statement like: I shake my head at the childish framing.
3Scott Alexander9ySo...the poll shows +9 support for making it more biased and snarky, +9 support for making it less biased and snarky, and a tepid rejection of leaving it the way it is. Awkward.
3RobertLumley9yI had never read it until now, actually. I saw it on twitter but didn't read it until it was posted to LW. I think the original original version is the most entertaining to read, but I think the most edited form is the most persuasive to a general audience because it is far more politically neutral.
Alan Carter on the Complexity of Value

Nope, I was wrong. It is the case that agents require equal priors for AAT to hold. AAT is like proving that mixing the same two colors of paint will always result in the same shade or that two equal numbers multiplied by another number will still be equal.

What a worthless theorem!

I guess when I read that AAT required "common priors", I assumed Aumann must mean known priors or knowledge of each others' priors, since equal priors would constitute both 1) an asinine assumption, and 2) a result not worth reporting. Hanson's assumption that humans sho... (read more)

Alan Carter on the Complexity of Value

They need to have the same priors? Wouldn't that make AAT trivial and vacuous?

I thought the requirement was that priors just weren't pathologically tuned.

1Dorikka9yI'm pretty sure they do need to have the same priors. My intuition is that AAT is basically saying that the perfect epistemic rationalists involved can essentially transfer all of the evidence that they have to the other, so that each one effectively has the same evidence and so should have the same posteriors...except that they'll still have different posteriors unless they began with the same priors. If they found that they had different priors, I think that they could just communicate the evidence which led them to form those priors from previous priors and so forth, but I think that if they trace their priors as far back as possible and find that they have different ones, AAT doesn't work. I'm not actually super-familiar with it, so update accordingly if I seem to have said something dumb.
Generic Modafinil sales begin?

Those "generics" you're talking about are ordered by your friends from overseas. The average American won't take advantage of Modafinil until they can pay x10 as much to buy it in a pharmacy in their neighborhood.

People are too risk-averse to try things that work. Hmm... if only there were some sort of drug they could take to make them smarter?

0Aleesa9yDefine "risk averse". I used to buy modalert from India and, based on its effectiveness, I feel confident it was not a fake. The biggest risk, I found out, is having a shipment confiscated by US Customs as it is a controlled substance. I received a letter telling me in no uncertain terms that if they found another shipment of modafinil addressed to me, they would prosecute me as a drug smuggler. That was more risk than I was willing to take. Since even the so-called generic modafinil in this country is so expensive, I now take Ritalin instead of modafinil. Ritalin is available in a cheap generic in this country, but has side effects that the modafinil didn't have.
SotW: Be Specific

I think the bigger difference between CBT and psychoanalysis is something like, CBT: "Your feelings are the residue of your thoughts, many of which are totally wrong and should be countered by your therapist and you because human brains are horribly biased." vs, Psychoanalysis: "Your feelings are a true reflection of what an awful, corrupt, contemptible, morally bankrupt human being you are. As your therapist, I will agree with and validate anything you believe about yourself since anything you report about yourself must be true by definitio... (read more)

0Costanza9yIt was the Herbert Kornfeld trash talk that had me literally laughing out loud. Keep it real, L-Dog.
The Singularity Institute is hiring an executive assistant near Berkeley

Thanks for doing the research on this. It actually makes me feel a lot better knowing how low these base rates are.


Let me try again.

In 2009, each licensed driver drove an average of 14,000 miles.

For cars, the fatality rate per 100M VMT was 0.87 (the exact number is on page 22 of my original link). 14,000 miles/year * 0.87 deaths/100,000,000 miles = .0001218 deaths/year = 0.1218 millideaths/year. Inversely, 1 in 8210 people will die each year. Now, my math is hiding subtle assumptions - Traffic Safety Facts 2009 gives the fatality rate for passenger car occupants per vehicle miles traveled. This is affected by how many people occupy a given car! Their definition of moto... (read more)
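
[Editor's note: the arithmetic in the comment above checks out. A quick sketch using only the figures quoted in the comment:]

```python
# Figures quoted above (Traffic Safety Facts 2009, per the comment).
miles_per_driver_per_year = 14_000
deaths_per_100m_vmt = 0.87  # passenger car occupant fatality rate

deaths_per_year = miles_per_driver_per_year * deaths_per_100m_vmt / 100_000_000
millideaths_per_year = deaths_per_year * 1000
one_in_n = round(1 / deaths_per_year)

print(round(millideaths_per_year, 4))  # 0.1218 millideaths/year
print(one_in_n)                        # 8210, i.e. 1 in 8210 per year
```

Note the same caveat as the comment: the rate is per vehicle mile traveled, so occupancy assumptions are hidden in the figure.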

Leveling Up in Rationality: A Personal Journey

I know lukeprog personally, but I suppose I should call him lukeprog on LW for other people's benefit. Thanks for the reminder.

0lsparrish10yThe fact that you know him personally makes it make more sense. I haven't met any other LWers in person yet; sometimes I forget this isn't only an online community. :)
4Kevin10yI know lukeprog personally and I call him lukeprog.
Leveling Up in Rationality: A Personal Journey

I'm concerned with the overuse of the term "applause light" here.

An applause light is not as simple as "any statement that pleases an in-group". The way I read it, a charge of applause lights requires all of the following to hold:

1) There are no supporting details to provide the statement with any substance.

2) The statement is a semantic stopsign.

3) The statement exists purely to curry favor with an in-group.

4) No policy recommendations follow from that statement.

I don't see a bunch of applause lights when I read this post. I see a post... (read more)

1lsparrish10yI agree. Whatever the reason is for me being annoyed or uncomfortable about lukeprog's writing on occasion is probably not because it is Applause Lights. It may even be that he is following Best Practices and I should just adjust to it. On the other hand, I think I get the feeling that when someone links to something external, they are trying to activate a cognitive shortcut. It's like being fast-talked by a car salesman, I'm being given a reference instead of a whole concept. I'm afraid that I will accept something without actually forming a complete mental model of it. I get the same sensation when someone uses footnote references to something that is unexplained or not apparent. That's the problem with scholarship -- it can be tricky to recreate an idea in the mind of another, because the other mind needs time to adjust to whatever you adjusted to. So if you dump something large and complex on someone it can end up seeming like an appeal to authority, even when there are good reasons -- because a good reason usually needs time and deliberate attention to be understood. Also, one has to keep in mind that having dependencies on many sources can make something less persuasive -- say you have five sources that independently seem 90% persuasive -- your result now only seems 59% persuasive. On an unrelated note, it is kind of weird to me when people use lukeprog's first name instead of complete username because it is also my first name (and a couple other LWers.) This may just be because Luke is a kind of uncommon name, and I have not previously had to get used to it referring to someone else.
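The multiplied-persuasiveness point in the reply above is just independent probabilities compounding: if a conclusion depends on five sources that each seem 90% persuasive, confidence in the conjunction is the product. A minimal sketch of that arithmetic:

```python
# Five independent sources, each 90% persuasive: confidence in the
# conjunction is the product of the individual probabilities.
p_each = 0.9
n_sources = 5

combined = p_each ** n_sources
print(combined)  # 0.59049, roughly the 59% figure in the comment
```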
What Curiosity Looks Like

Thanks MinibearRex.

I've added ads on Google AdWords that will start coming up for this in a couple days when the new ads get approved so that anyone searching for something even vaguely like "How to think better" or "How to figure out what's true" will get pointed at Less Wrong. Not as good as owning the top 3 spots in the organic results, but some folks click on ads, especially when it's in the top spot. And we do need to make landing on the path towards rationality less of a stroke of luck and more a matter of certainty for those who are looking.

2taryneast9yIt's been almost three months. How's the data on this campaign going?
3dbaupp10yDo you have any data from this campaign?
0MinibearRex10yThat sounds great. Thanks for taking the time to do that.
The bias shield

I also thought you meant that Bill O'Reilly had (surprisingly) written the best book ever on the Lincoln shooting when you said "But I was wrong."

Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far

Thanks for the helpful comments! I was uninformed about all those details above.

These posts are not about GiveWell's process.

One of the posts has the sub-heading "The GiveWell approach" and all of the analysis in both posts use examples of charities you're comparing. I agree you weren't just talking about the GiveWell process... you were talking about a larger philosophy of science you have that informs things like the GiveWell process.

I recognize that you're making sophisticated arguments for your points. Especially the assumptions that you... (read more)

Louie, I think you're mischaracterizing these posts and their implications. The argument is much closer to "extraordinary claims require extraordinary evidence" than it is to "extraordinary claims should simply be disregarded." And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.

I wrote more in my comment followup on the first post about why an aversion to arguments that seem similar to "Pascal's Mugging" does not entail ... (read more)

Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)

Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.

2Will_Newsome10ySingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August everything was in a state of disarray and it was unclear if you'd manage to pull through.

Hi, here are the details of whom I spoke with and why:

  • I originally emailed Michael Vassar, letting him know I was going to be in the Bay Area and asking whether there was anyone appropriate for me to meet with. He set me up with Jasen Murray.
  • Justin Shovelain and an SIAI donor were also present when I spoke with Jasen. There may have been one or two others; I don't recall.
  • After we met, I sent the notes to Jasen for review. He sent back comments and also asked me to run it by Amy Willey and Michael Vassar, who each provided some corrections via email tha
... (read more)

Carl Shulman pointed out how absurd this was: if GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.

To clarify what I said in those comments:

Holden had a few posts that 1) made the standard point that one should use both prior and evidence to generate one's posterior estimate of a quantity like charity effectiveness, 2) used example prior distributions that assigned vanishingly low probability to outcomes f... (read more)

Holden seems to have spoken with Jasen "and others", so at least two people. I don't think it's fair to say that speaking with 1/3 of the people in an organization is as unrepresentative as speaking with 1/3,000,000 of the Boy Scouts. And since Holden sent SIAI his notes and got their feedback before publishing, they had a second chance to correct any misstatements made by the guy they gave him to interview.

So calling this interview "a complete lie" seems very unfair.

I agree that GiveWell's process is limited, and I'm interested in the GiveWell Labs project.
