All of Paul Crowley's Comments + Replies

Affordances

I would guess a lot of us picked the term up from Donald Norman's The Design of Everyday Things.

Direct effects matter!

The image of this tweet isn't present here, only on Substack.

3 aaronb50 (2mo): Thanks very much. Just fixed that.
The rationalist community's location problem

True; in addition, places vary a lot in their freak-tolerance.

The rationalist community's location problem

If I lived in Wyoming and wanted to go to a fetish event, I guess I'm driving to maybe Denver, around 3h40 away? I know this isn't a consideration for everyone but it's important to me.

The same is basically true for any niche interest - it will only be fulfilled where there's adequate population to justify it. In my case, a particular kind of jazz music.

Probably a lot of people have different niche interests like that, even if they can't agree on one.

A simple device for indoor air management

Why the 6in fan rather than the 8in one? Would seem to move a lot more air for nearly the same price.

3 Richard Korzekwa (7mo): I think I was just trying to match the CFM of my Coway purifier, since I was using the same filters. I was also worried it would be harder to properly mate a larger/heavier fan to a box. Now that I've actually built the thing, I would say the larger fan is probably better.
The Goldbach conjecture is probably correct; so was Fermat's last theorem

Reminiscent of Freeman Dyson's 2005 answer to the question "What do you believe is true even though you cannot prove it?":

Since I am a mathematician, I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is.
Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and so on. Numbers th
... (read more)
2 TheMajor (10mo): How does the randomness of the digits imply that the statement cannot be proven? Superficially the quote seems to use two different notions of randomness, namely "we cannot detect any patterns" (i.e. a pure random generator is the best predictor we have) and "we have shown that there can be no patterns" (i.e. we have shown no other predictor can do any better). Is this a known result from Ergodic Theory?
Covid-19: My Current Model

You're not able to directly edit it yourself?

7 Ben Pace (1y): Zvi's crossposts are a bit messy to edit (html and other things) so for Zvi we said we would make fixes to his posts when he makes updates, to reduce the cost on him for cross-posting (having to deal with two editors, especially when the LW one is messy). (I have now made the edit to the post above.)
Paul Crowley's Shortform

On Twitter I linked to this saying

Basic skills of decision making under uncertainty have been sorely lacking in this crisis. Oxford University's Future of Humanity Institute is building up its Epidemic Forecasting project, and needs a project manager.

Response:

I'm honestly struggling with a polite response to this. Here in the UK, Dominic Cummings has tried a Less Wrong approach to policy making, and our death rate is terrible. This idea that a solution will somehow spring from left-field maverick thinking is actually lethal.
3 toonalfrink (1y): Did Dominic Cummings in fact try a "Less Wrong approach" to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)
2 Kaj_Sotala (1y): Huh. Wow.
Paul Crowley's Shortform

For the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.

-3 TAG (1y): You mean the swine are judging ideas by how they work in practice?

Can you give some examples of "LW-style thinking" that they now associate with Cummings?

2 Dagon (1y): Seems like a good discussion could be had about long-term predictions and how much evidence there is to be had in short-term political fluctuations. The Cummings silliness vs unprecedented immigration restrictions - which is likely to have impact 5 years from now?
New Year's Predictions Thread

I look back and say "I wish he had been right!"

New Year's Predictions Thread

Britain was in the EU, but it kept the pound sterling; it never adopted the euro.

New Year's Predictions Thread

How many opportunities do you think we get to hear someone make clearly falsifiable ten-year predictions, have them turn out to be false, and then have that person have the honour necessary to say "I was very, very wrong"? Not a lot! So any reflections you have to add on this would, I think, be super valuable. Thanks!

New Year's Predictions Thread

Hey, looks like you're still active on the site, would be interested to hear your reflections on these predictions ten years on - thanks!

[This comment is no longer endorsed by its author]
Hero Licensing

It is, of course, third-party visible that Eliezer-2010 *says* it's going well. Anyone can say that, but not everyone does.

A sealed prediction

I note that nearly eight years later, the preimage was never revealed.

Actually, I have seen many hashed predictions, and I have never seen a preimage revealed. At this stage, if someone reveals a preimage to demonstrate a successful prediction, I will be about as impressed as if someone had won a lottery, given the number of losing lottery tickets lying about.
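
A minimal sketch of the commit-and-reveal scheme these hashed predictions rely on, assuming Python and SHA-256 (the original post doesn't say which hash it used, and the prediction text and nonce below are made up for illustration):

```python
import hashlib
import secrets

# Commit: write the prediction, add a random nonce so the text can't be
# brute-forced from the published digest, then publish only the digest.
prediction = "By 2030-01-01, X will have happened."  # hypothetical prediction
nonce = secrets.token_hex(16)
preimage = f"{prediction} | nonce: {nonce}"
commitment = hashlib.sha256(preimage.encode("utf-8")).hexdigest()
print("Publish now:", commitment)

# Reveal: later, publish the preimage; anyone can recompute the hash and
# confirm it matches the commitment published earlier.
assert hashlib.sha256(preimage.encode("utf-8")).hexdigest() == commitment
print("Reveal later:", preimage)
```

Of course, the reveal only carries evidential weight if the preimages of failed predictions are revealed (or at least counted) too, which is the lottery-ticket point above.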

Why so much variance in human intelligence?

Half formed thoughts towards how I think about this:

Something like Turing completeness is at work, where our intelligence gains the ability to loop in on itself, and build on its former products (eg definitions) to reach new insights. We are at the threshold of the transition to this capability, half god and half beast, so even a small change in the distance we are across that threshold makes a big difference.

Why so much variance in human intelligence?

As such, if you observe yourself to be in a culture that is able to reach technological maturity, you're probably "the stupidest such culture that could get there, because if it could be done at a stupider level then it would've happened there first."

Who first observed this? I say this a lot, but I'm now not sure if I first thought of it or if I'm just quoting well-understood folklore.

4 Ben Pace (2y): For me, I’m pretty sure it was Yudkowsky (but maybe Bostrom) who put it pithily enough that I remembered. Would have to look for a cite.
2018 AI Alignment Literature Review and Charity Comparison

May I recommend spoiler markup? Just start the line with >!

Another (minor) "Top Donor" opinion. On the MIRI issue: agree with your concerns, but continue donating, for now. I assume they're fully aware of the problem they're presenting to their donors and will address it in some fashion. If they do not, I might adjust next year. The hard thing is that MIRI still seems the most differentiated org, in approach and talent, that can use funds (vs OpenAI, DeepMind, and well-funded academic institutions).

2 Dr_Manhattan (2y): Thanks for doing this! I couldn't figure out how.
LessWrong 2.0 Feature Roadmap & Feature Suggestions

I note that this is now done. As I have for so many things here. Great work team!

Spoiler space test

2018 AI Alignment Literature Review and Charity Comparison

Rot13's content, hidden using spoiler markup:

Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.

Despite FHI producing very high quality research, GPI having a lot of promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the ... (read more)

Nyoom

I think the Big Rationalist Lesson is "what adjustment to my circumstances am I not making because I Should Be Able To Do Without?"

Topological Fixed Point Exercises

Just to get things started, here's a proof for #1:

Proof by induction that the number of bicolor edges is odd iff the ends don't match. Base case: a single node has matching ends and an even number (zero) of bicolor edges. Extending with a non-bicolor edge changes neither condition, and extending with a bicolor edge changes both; in both cases the induction hypothesis is preserved.

Here's a more conceptual framing:

If we imagine blue as labelling the odd numbered segments and green as labelling the even numbered segments, it is clear that there must be an even number of segments in total. The number of gaps between segments is equal to the number of segments minus 1, so it is odd.
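
As a quick sanity check, the parity claim is easy to verify exhaustively for small cases; a minimal Python sketch (the path lengths and the two colour labels are arbitrary choices):

```python
from itertools import product

# For every 2-colouring of a path's nodes, the number of bicolour edges is
# odd exactly when the two end nodes have different colours.
def bicolor_edges(colors):
    return sum(a != b for a, b in zip(colors, colors[1:]))

for n in range(2, 12):                       # path lengths to test
    for colors in product("BG", repeat=n):   # all blue/green node colourings
        ends_differ = colors[0] != colors[-1]
        assert (bicolor_edges(colors) % 2 == 1) == ends_differ
print("Parity claim holds for all 2-colourings of paths up to 11 nodes.")
```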

Last Chance to Fund the Berkeley REACH

From what I hear, any plan for improving MIRI/CFAR space that involves the collaboration of the landlord is dead in the water; they just always say no to things, even when it's "we will cover all costs to make this lasting improvement to your building".

0 Said Achmiz (3y): Does MIRI/CFAR view having such a landlord as an acceptable state of affairs? Is there a plan for moving to another space, with less recalcitrant owners/renters?
LessWrong 2.0 Feature Roadmap & Feature Suggestions

Of course I should have tested it before commenting! Thanks for doing so.

LessWrong 2.0 Feature Roadmap & Feature Suggestions

Spoiler markup. This post has lots of comments which use ROT13 to disguise their content. There's a Markdown syntax for this.

I note that this is now done. As I have for so many things here. Great work team!

Spoiler space test

9 Elo (3y)
On the Chatham House Rule

"If you're running an event that has rules, be explicit about what those rules are, don't just refer to an often-misunderstood idea" seems unarguably a big improvement, no matter what you think of the other changes proposed here.

April Fools: Announcing: Karma 2.0

I notice your words are now larger thanks to the excellence of this comment!

April Fools: Announcing: Karma 2.0

Excellent, my words will finally get the prominence they deserve!

Leaving beta: Voting on moving to LessWrong.com

When does voting close? EDIT: "This vote will close on Sunday March 18th at midnight PST."

Making yourself small

I thought of a similar example to you for big-low-status, but I couldn't think of an example I was happy with for small-high-status. Every example I could think of was one where someone is visually small, but you already know they're high status. So I was struck when your example also used someone we all know is high status! Is there a pose or way of looking which both looks small and communicates high status, without relying on some obvious marker like a badge or a crown?

1 Helen (3y): See my response to PeterBorah above - the main thing that comes to mind for me is how overall comfortable you seem with the situation. Agree with Benquo's comment as well.
3 Benquo (3y): Buckley [https://uvcarmel.files.wordpress.com/2008/02/sam-falk-the-new-york-times.jpg] is interesting in this regard. If you watch old Firing Line episodes you often see him slouched in his armchair like a rag doll casually tossed onto it, mumbling his words, trying to let his interlocutors have their say, but very very confident that you will listen to him and regard him as a Serious Public Intellectual anyway.
Two Coordination Styles

Ainslie, not Ainslee. I found this super distracting for some reason, partly because his name is repeated so often.

2 abramdemski (3y): D:
A LessWrong Crypto Autopsy

A plausible strategy would be to buy say 100 bitcoins for $1 each, then sell 10 at $10, 10 at $100, and so on. With this strategy you would have made $111,000 and hold 60 bitcoins.
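
As a quick check of the arithmetic, assuming the ladder stops after the $10,000 sale (the reading that matches the stated totals):

```python
# Buy 100 BTC at $1 each, then sell 10 at each power-of-ten price point.
cost = 100 * 1
sale_prices = [10, 100, 1_000, 10_000]        # assumed stopping point
proceeds = sum(10 * p for p in sale_prices)   # 111,100
profit = proceeds - cost                      # 111,000 "made"
held = 100 - 10 * len(sale_prices)            # 60 BTC still held
print(profit, held)                           # -> 111000 60
```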

List of civilisational inadequacy

"Even though gaining too much in pregnancy" is missing the word "weight" I think.

Security Mindset and the Logistic Success Curve

I can't work out where you're going with the Qubes thing. Obviously a secure hypervisor wouldn't imply a secure system, any more than a secure kernel implies a secure system in a non-hypervisor based system.

More deeply, you seem to imply that someone who has made a security error obviously lacks the security mindset. If only the mindset protected us from all errors; sadly it's not so. But I've often been in the situation of trying to explain something security-related to a smart person, and sensing the gap that seemed wider than a mere lack of knowledge.

2 John_Maxwell (3y): The point I'm trying to make is that this statement seems false to me, if you have good isolation--which is what a project like Qubes tries to accomplish. Kernel vs hypervisor is discussed in this blog post [http://theinvisiblethings.blogspot.com/2008/09/three-approaches-to-computer-security.html]. It's possible I'm describing Qubes incorrectly; I'm not a systems expert. But I feel pretty confident in the broader point about trusted computing bases. This was the implication I was getting from Eliezer. I attempted a reductio ad absurdum.
Against Modest Epistemology

Please don't bold your whole comment.

3 Rob Bensinger (4y): I think this is a bug, not TAG's fault.
Living in an Inadequate World

Looks like this hasn't been marked as part of the "INADEQUATE EQUILIBRIA" sequence: unlike the others, it doesn't carry this banner, and it isn't listed in the TOC.

1 habryka (4y): Fixed now. Sorry for that!
Why no total winner?

I agree, if the USA had decided to take over the world at the end of WWII, it would have taken absolutely cataclysmic losses. I think it would still have ended up on top of what was left, and the world would have rebuilt, with the USA on top. But not being prepared to make such an awful sacrifice to grasp power probably comes under a different heading than "moral norms".

Seek Fair Expectations of Others’ Models

There are many ways to then conclude that AGI is far away where far away means decades out. Not that decades out is all that far away. Eliezer conflating the two should freak you out. AGI reliably forty years away would be quite the fire alarm.

I don't think I understand this point. Is the conflation "having a model of the long-term that builds on a short-term model" and "having any model of the long term", in which case the conflation is akin to expecting climate scientists to predict the weather? If so I agree that that's a s

... (read more)
6 jsteinhardt (4y): I think the conflation is "decades out" and "far away".
The Typical Sex Life Fallacy

I move in circles where asking "why is X bad" is as bad as X itself. So for the avoidance of doubt, I do not think that your comment here makes you a bad person.

I'm trying to imagine a conversation where one person expresses a preference about the other's pubic hair that wouldn't be inappropriate, and I'm struggling a little. Here's what I've come up with:

  • A BDSM context in which that sort of thing is a negotiated part.

  • The two have been playing for a while and are intimate enough for that to be appropriate.

  • The other p

... (read more)
5 Stuart_Armstrong (4y): It seems to me you can keep all the parts, and all the points, and cut half the paragraphs approximately. The kind of "repetition until everyone gets all subtle nuances of the points" can work well for shorter essays, but many people are not going to wade through all of this.
There's No Fire Alarm for Artificial General Intelligence

Dawkins's "Middle World" idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.

1 whpearson (4y): I agree that research can probably be improved upon quickly and easily. Lab on a chip [https://en.wikipedia.org/wiki/Lab-on-a-chip] is obviously a way we are doing that currently. If an AGI system has got the backing of a large company or country and can get new things fabbed in secrecy it can improve on these kinds of things. But I still think it is worth trying to quantify things. We can get ideas about stealth scenarios where the AGI is being developed by non-state/megacorp actors that can't easily fab new things. We can also get ideas about how useful things like lab on a chip are for speeding up the relevant science. Are we out of low hanging fruit and is it taking us more effort to find novel interesting chemicals/materials?
Deontologist Envy

Thank you! Hooray for this sort of thing :)

LW 2.0 Strategic Overview

Also I have already read them all more than once and don't plan to do so again just to get the badge :)

LessWrong 2.0 Feature Roadmap & Feature Suggestions

Facebook-like reactions.

I would like to be able to publicly say eg "hear hear" on a comment or post, without cluttering up the replies. Where the "like" button is absent eg on Livejournal, I sorely miss it. This is nothing to do with voting and should be wholly orthogonal; voting is anonymous and feeds into the ranking algorithm, where this is more like a comment that says very little and takes up minimal screen real estate, but allows people to get a quick feel for who thinks what about a comment.

Starting with "thumbs up" wou

... (read more)
1 Chris_Leong (4y): The main difficulty with these systems is finding the right balance of expressiveness and simplicity. I can definitely see some advantages of having a "dislike tone" option that is separate from down-voting. I don't know if I'd want a whole bunch of options added though, as that might become too complex.
1 PDV (4y): I disagree. I think this is anti-epistemic and tends to devolve easily into bad manners, politics, and harassment. See, e.g., the Sufficient Velocity forum, where any reaction with a negative sentiment (including things as mild as 'sarcasm' and 'Picard facepalm', and ones that were only accessible in limited quantities to paid subscribers) were quickly discontinued because they were used for flaming and Internet Arguments.
LessWrong 2.0 Feature Roadmap & Feature Suggestions

I think these are two wholly orthogonal functions: anonymous voting, and public comment badges. For badges, I'd like to see something much more like eg Discord where you can apply as many as you think apply, rather than Facebook where you can only apply at most one of the six options (eg both "agree" and "don't like tone").

EDIT: now a feature request.

Welcome to Lesswrong 2.0

I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.
