
Ruby
LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Ruby's Quick Takes (11 karma · 6y · 121 comments)

Sequences

LW Team Updates & Announcements
Novum Organum

Comments (sorted by newest)

LessWrong Feed [new, now in beta]
Ruby · 2d

I'm curious for examples; feel free to DM if you don't want to draw further attention to them.

LessWrong Feed [new, now in beta]
Ruby · 2d

Thread for feedback on the New Feed

Questions, complaints, confusions, bug reports, feature requests, and long philosophical screeds – here is the place!

The Sixteen Kinds of Intimacy
Ruby · 5d

I think that intellectual intimacy should include having similar mental capacities.

Seems right, for reasons of both understanding and trust.

A part of me wants to argue that these are intertwined

I think the default is that they're intertwined, but the interesting thing is they can come apart: for example, you develop feelings of connection and intimacy through shared experience and falsely assume you can trust the person (or that you share values, or whatever), but then it turns out the experiences you shared never actually filtered for that.

johnswentworth's Shortform
Ruby · 8d

This matches with the dual: mania. All plans, even terrible ones, seem like they'll succeed, and this has flow-through effects: elevated mood, hyperactivity, etc.

Whether or not this happens in all minds, the fact that people can alternate fairly rapidly between depression and mania with minimal trigger suggests there can be some kind of fragile "chemical balance" or something that's easily upset. It's possible that's just the case in mood disorders, and that more stable minds are merely vulnerable to the "too many negative updates at once" thing without the broader instability.

Eric Neyman's Shortform
Ruby · 10d

To clarify here, I think what Habryka says about LW generally promoting lots of content being normal is overwhelmingly true (e.g. spotlights and curation), and this book is completely typical of what we'd promote to attention, i.e. high-quality writing and reasoning. I might say promotion is equivalent to an upvote, not to an agree-vote.

I still think there are details in the promotion here that make inferring LW agreement and endorsement reasonable:

  1. the lack of disclaimers around disagreement (absence is evidence), together with a good prior that the LW team agrees a lot with the Eliezer/Nate view on AI risk
  2. promoting during pre-order (which I do find surprising)
  3. that we promoted this in a new way (I don't think this is as strong evidence as the above; mostly it's that we've only recently started doing this for events and this is the first book to come along, so we might have done, and will do, the same for others). But maybe we wouldn't have, or not as high-effort, absent agreement.

But responding to the OP: rather than the motivation coming from narrow endorsement of the thesis, I think a bunch of the motivation flows more from a willingness/desire to promote Eliezer[1] content, as (i) such content is reliably very good, and (ii) Eliezer founded LW and his writings make up the core material that defines so much of site culture and norms. We'd likely do the same for another major contributor, e.g. Scott Alexander.

I updated from when I first commented by thinking about what we'd do if Eliezer wrote something we felt less agreement with, and I think we'd do much the same. My current assessment is that the book placement is something like ~80-95% neutral promotion of high-quality content the way we generally do it, not because of endorsement, but maybe there's a 5-20% chance it got extra effort/prioritization because we in fact endorse the message; hard to say for sure.

 

  1. and Nate

sunwillrise's Shortform
Ruby · 13d

LW2 had to narrow down in scope under the pressure of ever-shorter AI timelines

I wouldn't say the scope was narrowed; in fact, the admin team took a lot of actions to preserve the scope, but a lot of people have shown up for AI, or are now heavily interested in AI, simply making that the dominant topic. But I like to think that people don't think of LW as merely an "AI website".

Habryka's Shortform Feed
Ruby · 13d

It really does look dope

Futarchy's fundamental flaw
Ruby · 14d

Curated. The idea of Futarchy – using prediction markets to make decisions – was among the earliest ideas I recall learning about when I found the LessWrong/Rationality cluster in 2012 (and decision markets continue to feature in dath ilani fiction). It's valuable, then, to have an explainer of fundamental challenges with prediction markets. I suggest looking at the comments and references, as there's some debate here, but overall I'm glad to have this key topic explored critically.

Eric Neyman's Shortform
Ruby · 15d

Fwiw, it feels to me like we're endorsing the message of the book with this placement. Changing the theme is much stronger than just a spotlight or curation, not to mention that it's pre-order promotion.

A Straightforward Explanation of the Good Regulator Theorem
Ruby · 22d

Curated. Simple, straightforward explanations of notable concepts are among my favorite genres of posts. It's just a really great service when a person, confused about something, goes on a quest to figure it out and then shares the result with others. Given how misleading the title of the theorem is, it's valuable to have it clarified here. Something that is surprising, given what this theorem actually says and how limited it is, is that it's the basis of much other work on the strength of what it purportedly states; but perhaps people are assuming that the spirit of it is valid and that it's saved by modifications like those John Wentworth provides. It'd be neat to see more analysis of that. It'd be sad if a lot of work cites this theorem because people believed the claim of the title without checking that the proof really supports it. All in all, kudos for making progress on all this.

This may be the most misleading title and summary I have ever seen on a math paper. If by “making a model” one means the sort of thing people usually do when model-making – i.e. reconstruct a system’s variables/parameters/structure from some information about them – then Conant & Ashby’s claim is simply false. – John Wentworth

Wikitag Contributions

Eliezer's Lost Alignment Articles / The Arbital Sequence (5mo, +10050)
Tag CTA Popup (5mo, +4/-231)
LW Team Announcements (5mo)
GreaterWrong Meta (5mo)
Intellectual Progress via LessWrong (5mo, -401)
Wiki/Tagging (5mo)
Moderation (topic) (5mo)
Site Meta (5mo)
What's a Wikitag? (5mo)

Posts (sorted by new)

The Sixteen Kinds of Intimacy (54 karma · 14d · 2 comments)
LessWrong Feed [new, now in beta] (53 karma · 1mo · 26 comments)
A collection of approaches to confronting doom, and my thoughts on them (48 karma · 3mo · 18 comments)
A Slow Guide to Confronting Doom (84 karma · 3mo · 20 comments)
Eliezer's Lost Alignment Articles / The Arbital Sequence (207 karma · 4mo · 10 comments)
Arbital has been imported to LessWrong (281 karma · 4mo · 30 comments)
Which LessWrong/Alignment topics would you like to be tutored in? [Poll] (43 karma · 10mo · 12 comments)
How do we know that "good research" is good? (aka "direct evaluation" vs "eigen-evaluation") (49 karma · 1y · 21 comments)
Friendship is transactional, unconditional friendship is insurance (67 karma · 1y · 24 comments)
Enriched tab is now the default LW Frontpage experience for logged-in users (46 karma · 1y · 27 comments)