All of habryka's Comments + Replies

People Will Listen

I know of 2-3 friends who lost $5k+ when Mt. Gox went down, and didn't hold crypto anywhere else.

"Taking your environment as object" vs "Being subject to your environment"

Yeah, the current phrase feels confusing to me. If a human takes something else as a subject that... feels like it has some different connotations. In my mind the two opposing phrases are "being subject to" (passive) and "taking as object" (active).

There's some complexity here because English offers two words, "subject" and "object", that can be used somewhat interchangeably in some situations, but in most situations we have some notion that "subject" is on the left/upstream side of the causal arrow and "object" is on the right/downstream side. However, Ben's reuse of "subject" by shifting it from the actor ("subject to") to the acted upon ("as subject") seems mostly poetic and a reasonable alternative to talking about "object".

Of course, because English is noun-focused, it's rather nice to have two diff...

niplav's Shortform

Yep, that's fine. I am not a moral prescriptivist who tells you what you have to care about. 

I do think that you are probably going to change your mind on this at some point in the next millennium if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don't think it's completely implausible you really don't care.

I do think that not caring about people who are far away is pretty common, and building EA on that assumption seems fine. Not all clubs and institutions need to be justifiable to everyone.

Monastery and Throne

Hmm, I do honestly believe that behavioral scientists might be worse than the average politician at predicting public response. Like, I am not totally confident, but I think I would take a 50% bet. So this strikes me as overall mildly bad (though not very bad; I don't expect either of these two groups to be very good at doing this).

jacobjacob: Habryka, is the reasoning that politicians have a real incentive to accurately predict public response -- because it entirely determines whether they remain in power -- whereas behavioral scientists have a much weaker incentive, compared to the dominant incentive of publishing significant results?
niplav's Shortform

I don't know, I think it's a pretty decent argument. I agree it sometimes gets overused, but I do think that, given its assumptions, "you care about people far away as much as people close by", "there are lots of people far away you can help much more than people close by", and "here is a situation where you would help someone close by, so you might also want to help the people far away in the same way" form a totally valid logical chain of inference that seems useful to have in discussions on ethics.

Like, you don't need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.

niplav: Right, my gripe with the argument is that these first two assumptions are almost always unstated, and most of the time when people use the argument, they "trick" people into agreeing with assumption one. (For the record, I think the first premise is true.)
Dagon: On self-reflection, I just plain don't care about people far away as much as those near to me. Parts of me think I should, but other parts aren't swayed. The fact that a lot of the motivating stories for EA don't address this at all is one of the reasons I don't listen very closely to EA advice. I am (somewhat) an altruist. And I strive to be effective at everything I undertake. But I'm not an EA, and I don't really understand those who are.
Open and Welcome Thread - April 2021

Yeah, that makes sense. Will be more careful with moving old historical posts to the frontpage for this reason.

Sherrinford: Ok, thanks.
Open and Welcome Thread - April 2021

Can you paste the link to the RSS feed? We've recently moved a bunch of old sequences posts to the frontpage that we missed when we did our initial pass in 2017, so that seems like the most likely cause, if you are subscribed to a feed that filters only on frontpage posts.

Sherrinford: Sure, it's a frontpage feed: https://www.lesswrong.com/feed.xml?view=frontpage-rss&karmaThreshold=45
Open & Welcome Thread – March 2021

Yeah, I really want to get around to this. I am sorry for splitting the feature-set awkwardly across two editors!

adamShimi: To be clear, I was just answering the comment, not complaining again about the editor. I find it's great, and the footnote is basically a nitpick (but a useful nitpick). I also totally get if it takes quite some time and work to implement. ;)
Rationalism before the Sequences
(Moderator Comment)

Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK". 

I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" is really misrepresenting the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with th...

gjm: Yeah, "rioters" would have been more accurate than "people", though I don't know exactly what Eric considers the boundary between protesting and rioting. My apologies. As I said, mistakes get made when doing things quickly, and doing it quickly was much of the point. [EDITED to add:] I have edited my original comment to point out the mistake; I also found a comment from Eric on the original blogpost that clarifies where he draws the line between "rioters" and mere protestors, and have quoted that there too.
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

This is great, thank you! 

Minor formatting note: The italics font on both the AI Alignment Forum and LessWrong isn't super well suited to large blocks of text, so I took the liberty of unitalicizing a bunch of the large blockquotes (which should be sufficiently distinguishable as blockquotes without the italics). Though I am totally happy to reverse this if you prefer the previous formatting.

Rationalism before the Sequences

This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality 

  • I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a la...
Ben Pace: Hah, I was thinking of replying to say I was largely just repeating things you said in that post. Nonetheless, thanks both Kaj and Eric, I might turn it into a little post. It's not bad to have two posts saying the same thing (slightly differently).
Rationalism before the Sequences

Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage: 

  1. Is it not timeless? 
  2. Is it trying to sell you something, or persuade you, or leverage a bunch of social connections to get you to do something?  (e.g. eliciting donations usually falls in this category)
  3. Is it about community inside-baseball that makes it hard to participate in if you aren't part of the social network?

For this ...

Eric Raymond: Heh. Come to think of it from that angle, "a bit true, but not really" would have been exactly my assessment if I were in your shoes. Thanks, I appreciate the nuanced judgment.
Conceptual engineering: the revolution in philosophy you've never heard of

"Hi, I'm new to this site so not sure if late comments are still answered..."

Late comments are generally encouraged around here; we aim to have discussion that stands the test of time, and we don't ever shut down comment threads because they are too old.

deluks917's Shortform

That sucks. Sorry for your loss.

Is a Self-Iterating AGI Vulnerable to Thompson-style Trojans?

Edit note: I fixed your images for you. They seemed broken on Chrome since the server on which they were hosted didn't support https. 

Roko's Basilisk

I think the Open Thread is probably a generally better place to bring up random new ideas related to Roko's basilisk stuff. This page is more for discussing the current content of the page, and how it might be improved.

Open & Welcome Thread – March 2021

Yep, agree, also want this. Just a bit complicated tech-wise and UI-wise, so it's a reasonably large investment.

Book review: Why we sleep

Note that guzey's excellent writeup on this definitely qualifies, and I offered to send him the money, but if I remember correctly he didn't want it, and we will settle it informally when we hang out sometime in the future.

A Retrospective Look at the Guild of Servants: Alpha Phase

This is great, thank you for writing this up and I am looking forward to where this goes! 

(I probably have more detailed thoughts, but not sure whether I will get around to writing them up, so it seemed better to leave encouragement instead of nothing)

just_browsing's Shortform

AirPods are amazing at switching between devices (in particular Macs and iPhones). They're the only set of headphones that seem to have made this work reliably.

Direct effects matter!

Yeah, I meant it as "I think this comment is OK and shouldn't be deleted or cause the author to get a warning, but it seemed like the kind of thing that could lead to followup comments that would be quite bad"

Kenny: Thanks for the clarification! I think your "mod note" comment serves as an effective and eminently sensible 'warning' anyways.
Rad Hardman's Shortform

We have a setting that allows you to view them exactly that way. But I think too large a fraction of people reading LW posts are lurkers, which means I am hesitant to force them to press an additional button for every poll; still, it seems like a reasonable setting for some people to opt into.

RSS Feeds are fixed and should be properly functional this time

In the left sidebar menu, click on Subscribe (RSS/Email).

Direct effects matter!

Mod note: We generally try to keep generalizations about political parties, and central culture-war stuff in general, out of most of the site discussion. I think this comment is fine, but I would prefer the comments on this post not become a "the left thinks or the right thinks" type of discussion, which I think is rarely fruitful.

Kenny: I'm not sure what you mean by "this comment is fine". I don't think it's a good comment, but I don't have strong feelings about it, e.g. about it being deleted.
cousin_it: Just wanna say I think this is good moderation policy, and thank you for upholding it.
How can we stop talking past each other when it comes to postrationality?

"David Gerard not only has 1000 karma but for a long time had admin rights at least at our Wiki. I think it's strawmanning him to say that he just doesn't understand LessWrong when he spent years in our community and then decided that it's not the right place for him anymore."

No, just because you spend years here does not mean you understand the core ideas. 

I think we have plenty of evidence that David Gerard frequently completely makes up random strawmen that have nothing to do with us, and maybe there is a small corner of his mind that does have an accur...

How can we stop talking past each other when it comes to postrationality?

No, his critique of bayesianism is also attacking something very different from the sequences; it is again talking about something much narrower. Indeed, substantial fractions of the sequences overlap with his critique of bayesianism (in particular all the stuff about embeddedness, logical uncertainty, incomputability and TDT-style concerns). I don't think he agrees with everything in the sequences, but when he writes critiques, I am pretty sure he is responding to something other than the sequences.

How can we stop talking past each other when it comes to postrationality?

Also, having 220 karma on the site is really not much evidence you understand what rationality is about. David Gerard has over 1000 karma and very clearly doesn't understand what the site is about either.

I am pretty sure Chapman has also said he hasn't read the sequences, though generally I think he understands most content on the site fine. The problem is again not that he doesn't understand the site, but just that he is using the word rationality to mean something completely different. I like a bunch of his critique, and indeed Eliezer made 90% of the sa...

ChristianKl: Yes, but he does use the word Bayesianism to talk about the paradigm of the sequences. He has written a substantial criticism of Bayesianism (which is Yudkowsky, and not Plato, Kant or Bertrand Russell). David Gerard not only has 1000 karma but for a long time had admin rights at least at our Wiki. I think it's strawmanning him to say that he just doesn't understand LessWrong when he spent years in our community and then decided that it's not the right place for him anymore. I also think there's an issue here of saying that people who spent most of their time on LessWrong long before you signed up your account, and who left and had critiques, simply don't understand what LessWrong was about. I think David has a strong sense that it's important to put faith in established authorities, and he correctly assesses that LessWrong is not about following established authority. It's the same clash that gets him to write against cryptocurrency.
How can we stop talking past each other when it comes to postrationality?

David Chapman has said himself that when he is referring to rationality, what he is talking about has nothing to do with LessWrong. He is referring to the much older philosophical movement of "Rationalism". The whole thing with Chapman is literally just an annoying semantic misunderstanding. He also has some specific critiques of things that Eliezer said, but 95% of the time his critique of rationalism has absolutely nothing to do with what is written on this site.

Chapman has also specifically said that he does not understand LW:

I frequently emphasize that by “rationalism” I am specifically NOT referring to the LW usage, and that is not the target of my critique. I gave up on trying to understand LW rationalism ~5 years ago.

Rad Hardman's Shortform

I also prefer spoiler blocks.

Yoav Ravid: Same, but it would be much better if it was accessible through the editor UI; I constantly forget how to apply it as it is now.
[Lecture Club] Awakening from the Meaning Crisis

Required some integration from both sides. But yeah, the new editor made it much easier.

Lifelogging

My guess is we want to rename this tag to "Quantified Self" since that sure seems like it should get a tag?

Raemon: Hmm? I think Quantified Self is fairly different from what (I assume Mati Roy) intended this tag to focus on.
supposedlyfun's Shortform

Yep, I expect some people will want them turned off, which is why we tried to make that pretty easy! It might also make sense to batch them into a weekly batch instead of a daily one, which I've done at some points to reduce the degree to which I felt like I was goodharting on them.

A whirlwind tour of Ethereum finance

Thanks for reporting! Will make sure we fix this soon.

John_Maxwell's Shortform
  • Replication Crisis definitely hit hard. Lots of stuff there. 
  • People's timelines have changed quite a bit. People used to plan for 50-60 years; now it's much more like 20-30 years.
  • Bayesianism is much less the basis for stuff. I think this one is still propagating, but I think Embedded Agency had a big effect here, at least on me and a bunch of other people I know.
  • There were a lot of shifts on the spectrum "just do explicit reasoning for everything" to "figuring out how to interface with your System 1 sure seems really important". I think Eliezer...
Seven Years of Spaced Repetition Software in the Classroom

Wow, it's great to see follow-up posts over the course of seven years. Thank you so much for the work you put into this! I am really looking forward to reading this thoroughly sometime in the next few days.

John Vervaeke

Yep, I feel similarly, though overall think the EA Forum is pursuing a cultural strategy that is somewhat different from ours that makes it a bit less costly, but not much. I have generally been open about various cultural concerns I've had about the EA Forum when talking to CEA.

Takeaways from one year of lockdown

Also... it seems really unreasonable to say "if you can't handle 10 hours of grueling negotiations about what COVID precautions to take, you're weak and I need to cut you out of my life and/or take away decisionmaking power from you during times of stress." I would guess that, uhh, most people are weak by that definition.

To be clear, I do indeed think we have the luxury to exclude most people from our lives. Indeed any rule that doesn't exclude 90%+ of people from your life to a very large degree seems far too lax to me.

Also, 10 hours really doesn't seem t...

Introduction to Reinforcement Learning

Note: There is a broken image in the post.

Anna and Oliver discuss Children and X-Risk

I am pretty confused on this, and as I said above, don't put much weight on this study because I also have some sense that the author isn't super trustworthy (though I haven't found any critique of this specific paper). 

Overall, my current sense is that the effect on women in particular is quite strong, and women who choose to have children will reduce their chance of major achievement by at least 40% or so. For men it's probably weaker, and I am a lot less sure what the data says. 

Anna and Oliver discuss Children and X-Risk

There is also this paper (https://sci-hub.se/https://www.sciencedirect.com/science/article/abs/pii/S009265660200538X?via%3Dihub), which aims to show that as soon as great scientists marry, they very quickly stop producing great achievements, but something about it irks me and I don't currently put a ton of weight on it.

Owain_Evans: I haven't re-read the paper, although IIRC there are critiques online of this paper and the author's other statistical analyses. How strong do you think the evidence is for the counterfactual "If a person chooses to have kids, their chance of major achievement will drop substantially" (for a range of different people)? Ideally there'd be natural experiments (due to infertility, or someone who didn't want kids raising their sibling's children, etc.). These graphs aren't that different and (I'd guess) it wouldn't be hard to p-hack to get the intended result. Rate of being unmarried will vary over time and with country, and this will correlate with age of achievements (e.g. if people in biology peak later than math/physics, or if there are more biologists in the UK and math/physics people in Germany and Italy). And there's the causal / counterfactual inference...
Anna and Oliver discuss Children and X-Risk

This analysis is one of the most useful I have found: https://academic.oup.com/psychsocgerontology/article/64B/6/767/550078

Abstract: 

Compared with married parents, childless married couples tend to have slightly more income and about 5% more wealth. Unmarried childless men enjoy no income advantage over unmarried fathers but have 24%–33% more wealth. Compared with older unmarried mothers, unmarried childless women have 12%–31% more income and about 33% more wealth. The strength of these relationships increases as one moves up the distribution of...

Anna and Oliver discuss Children and X-Risk

Yep, in general about 85% of people have kids, with something like half of the people who don't have kids not doing so by choice, because of fertility problems or other things that tend to classify them as "involuntarily childless" in a bunch of studies. So the population to study here (people who voluntarily don't have children) has historically only made up something like 7% of the population. So just looking through lists of successful people and seeing that most of them have kids isn't really going to provide a ton of evidence.

Ben Pace: I had been considering the hypothesis that they would obviously not have kids, because look how much work they have to do. Just because most people have kids doesn't mean a really small set of strongly selected outliers do – I expected that the most successful people also have very few major illnesses, have 99th percentile IQs, have married parents, etc. (I'm not sure that the health and parental ones check out either, but I think the 99th percentile IQ probably does.) I had anticipated that most great scientists, inventors and builders would not have children, and was actively surprised when I found out (before today) that Elon and Demis did (and a few other people I had thought to check). It changed my attitude toward having kids substantially.
Covid 2/25: Holding Pattern

Just to check, do you want us to reimport, or did you do it yourself?

Zvi: Did it myself; if I want a reimport I will say so explicitly.
Utility Maximization = Description Length Minimization

Promoted to curated: As Adele says, this feels related to a bunch of the Jeffrey-Bolker rotation ideas, which I've referenced many, many times since then, but in a way that feels somewhat independent, which makes me more excited about there being some deeper underlying structure here.

I've also had something like this in my mind for a while, but haven't gotten around to formalizing it, and I think I've seen other people make similar arguments in the past, which makes this a valuable clarification and synthesis that I expect to get referenced a bunch.
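For readers who haven't read the post yet, here is a quick sketch of the correspondence the title points at, in my own notation rather than the post's (so treat the details as my paraphrase):

```latex
% A minimal sketch, in my notation (not the post's): choose a model m
% whose probabilities are exponential in utility; then expected
% description length is negative expected utility plus a constant.
\[
  m(x) = \frac{2^{u(x)}}{Z}, \qquad Z = \sum_x 2^{u(x)}
\]
\[
  \mathbb{E}_{x \sim p}\!\left[-\log_2 m(x)\right]
  = -\,\mathbb{E}_{x \sim p}\!\left[u(x)\right] + \log_2 Z
\]
```

Since $Z$ does not depend on the choice of $p$, maximizing expected utility and minimizing expected description length under $m$ are the same optimization.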

Apply to Effective Altruism Funds now

Maybe, but it really depends on whether you have a good track record, or there is some other reason why funding it seems like a good idea from an altruistic perspective.

Best way to write a bicolor article on Less Wrong?

If you ever want to do anything particularly weird in an article, you can send me plain HTML via the Intercom and I will insert it into the post directly (after doing some basic sanitization). This will usually make the post admin-only editable (if you used any HTML features that are admin-only), but it works well enough, and I've done this a few times for articles that really wanted to use color (Beth's AI Safety Debate writeup is one that comes to mind here).
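To make that concrete, here is a made-up sketch of the kind of HTML you could send (assuming inline color styles survive the sanitization step, which you'd want to confirm via Intercom first):

```html
<!-- Hypothetical sketch: a two-color article body via inline styles. -->
<!-- The colors here are invented for illustration. -->
<p>
  <span style="color: #1a6ed8;">One voice of the dialogue, set in blue.</span>
  <span style="color: #c0392b;">The other voice, set in red.</span>
</p>
```

Anything fancier (custom fonts, scripts) is much more likely to get stripped, so simple inline styles are the safer bet.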

The slopes to common sense

(Mod note: I added the image you linked to inline, since this seemed like it would improve the post and was more like what you intended to happen. But let me know if linking it was an intentional choice, and I am happy to reverse it.)

George: Yes, that's what I had intended, thanks for correcting that.
Why I Am Not in Charge

Does seem kinda important to get this right. My guess is it's an honest mistake, but still one I would like to see corrected, and think is worth investing some effort into avoiding.

Zvi: Corrected the wording to ensure it is definitely accurate. Speed premium among a lot of very strong claims that definitely happened and all that, but yeah, more careful would have been better.
The feeling of breaking an Overton window

(Mod Note: I sent Adam Smith a mod warning for making a bunch of seemingly low-effort/low-quality comments. This is your first warning; we might ban you if you continue posting content of this level of quality.)
